Forcing hipcc to use GPU architecture specific custom header libraries - c++

I have been trying to compile a ROCm GPU program example using custom versions of rocprim and thrust that support my GPU architecture, which are located at:
/opt/rocm-5.4.0/myspecialrocm/include/thrust/
/opt/rocm-5.4.0/myspecialrocm/include/rocprim/
The compiler instructions I am using are:
hipcc --offload-arch=gfx1031 -I=/opt/rocm-5.4.0/myspecialrocm/include/thrust -I=/opt/rocm-5.4.0/myspecialrocm/include/rocprim discrete_voronoi.cu -o testvoronoi
Unfortunately the compiler uses the default installations of rocprim and thrust located at:
/opt/rocm-5.4.0/include/rocprim
/opt/rocm-5.4.0/include/thrust
When I compiled rocThrust, I used the parameter -DCMAKE_NO_SYSTEM_FROM_IMPORTED=TRUE, which should enable the use of the custom libraries, but the compiler keeps using the default versions.
How can I override the default paths at compile time?
(The examples compiled correctly when I installed the custom versions of rocprim and thrust, including the examples, and they work fine, but I am not able to compile my own programs because I haven't found a way to enforce the use of my custom libraries.)
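For reference, a minimal sketch of the include flags hipcc would normally expect (an assumption based on general clang behaviour, not taken from this question: a leading '=' in -I= is treated as a sysroot prefix rather than as part of the path, and -I should point at the directory that contains the thrust/ and rocprim/ subdirectories, since the headers are included as <thrust/...> and <rocprim/...>):
hipcc --offload-arch=gfx1031 -I/opt/rocm-5.4.0/myspecialrocm/include discrete_voronoi.cu -o testvoronoi
Because user -I directories are searched before the compiler's built-in system include paths, the custom headers should then be found ahead of /opt/rocm-5.4.0/include.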

Related

How to set up a C++ toolchain for cross-compilation?

I have a target system, Astra Linux Smolensk, which only has access to very outdated packages, GCC 6 among them. GCC 6 does not support C++17, which is required by some very useful 3rd party libraries and language features. At some point I am also considering the possibility of using concepts from the C++20 standard. But there is no easy way to achieve that.
Basically, I see 2 ways:
Compile the needed version of GCC myself and maintain a custom version of Astra Linux for building, with additional packages (not a good option, since we have restrictions on system modification). I am about to try out this option, but it is not the subject of this question.
Cross-compile with the latest GCC on Ubuntu using a toolchain.
So, what do I need in order to create a toolchain for a custom version of Linux? The only thing I am sure of is the Linux kernel version. Can I use an existing Linux toolchain, or do I have to export the system libraries and create a custom toolchain?
Here I found some tools that seem to be helpful. For instance:
Buildroot
Buildroot is a complete build system based on the Linux kernel configuration system and supports a wide range of target architectures. It generates root file system images ready to be written to flash. In addition to having a huge number of packages which can be compiled into the image, it also generates a cross toolchain to build those packages from source. Even if you don't want to use buildroot for your root filesystem, it is a useful tool for generating a toolchain. Buildroot supports uClibc-ng, glibc and musl.
I wonder whether it does what I need, and whether I can use the latest GCC compiler with the generated toolchains.
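For reference, using a Buildroot-generated toolchain usually amounts to putting its host tools on the PATH and invoking the prefixed compiler; the install path and target triplet below are placeholders, not taken from this question:
export PATH=/path/to/buildroot/output/host/bin:$PATH
x86_64-buildroot-linux-gnu-g++ -std=c++17 main.cpp -o app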
I found similar questions:
How to build C++17 application in old linux distro with old stdlib and libc?
https://askubuntu.com/questions/162465/are-gcc-versions-tied-to-kernel-versions
How can I link to a specific glibc version?
Some clarification is needed:
The project relies heavily on a lot of 3rd party dependencies from the target linux's package repository. Moreover, I use dynamic .so modules that may be loaded both implicitly and explicitly.
Today, with Docker and modern container-based CI/CD pipelines, we rarely have to compile on the target system the way we did in the old days.
With the help of musl, we can even create universal Linux binaries with static linkage: we link against static libraries rather than dynamic libraries and ship a single executable file.
According to musl's documentation, it needs:
Linux kernel >= 2.6.39
This is a very old version, released around 2011, so even old Linux distros can run our binaries.
Musl is widely used in many projects, especially Rust projects, where musl builds are often provided to users as a convenience.
Note that we may need to fix our codebase when using musl; there are slight differences from GNU libc that we should be aware of.
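As a minimal sketch of the static-linkage approach (assuming the musl-gcc wrapper is installed, e.g. via a musl-tools package; main.c is a placeholder):
musl-gcc -static -O2 main.c -o app
ldd app        # expected: "not a dynamic executable"
The resulting binary carries its own libc and, per the requirement quoted above, should run on any Linux with kernel >= 2.6.39.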

Build a .cu file that uses boost

I ran the following command:
nvcc -arch=sm_70 foo.cu -o predatorPrey -I $BOOST_ROOT -L $BOOST_LIBRARY_PATH -lboost_timer
And got the following compilation error:
boost/include/boost/core/noncopyable.hpp(42): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr
A google search led me here.
All hope seemed lost until this guy used a workaround.
Though, as a junior programmer, I don't understand what he means by
Built boost from source with g++11 open solved the problem
Does that mean rebuilding boost from scratch? How is it different from building boost by default?
So what are the actual workarounds to use both boost and CUDA in the same project?
For host code usage:
The only general workaround with a high probability of success when building a 3rd party library with the CUDA toolchain is to arrange your project in such a way that the 3rd party code is in a file that ends in .cpp and is processed by the host compiler (e.g. g++ on linux, cl.exe on windows).
Your CUDA code (e.g. kernels, etc.) will need to be in files with filenames ending in .cu (for default processing behavior).
If you need to use this 3rd party code/library functionality in your functions that are in the .cu file(s), you will need to build wrapper functions in your .cpp files to provide the necessary behavior as callable functions, then call these wrapper functions as needed from your .cu file(s).
Link all this together at the project level.
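A minimal sketch of that layout, with hypothetical file and function names (boost_stuff.cpp wraps the Boost calls behind plain C++ functions, kernel.cu contains the CUDA code and calls those wrappers through ordinary declarations):
g++ -c boost_stuff.cpp -I $BOOST_ROOT -o boost_stuff.o       # host compiler parses the Boost headers
nvcc -arch=sm_70 -c kernel.cu -o kernel.o                     # nvcc never sees the Boost headers
nvcc -arch=sm_70 boost_stuff.o kernel.o -o app -L $BOOST_LIBRARY_PATH -lboost_timer
Only the final link step brings the two halves together, which is the "link all this together at the project level" part.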
It may be that other approaches can be taken if the specific issue is analyzed. For example, sometimes updating to the latest version of the 3rd party library and/or CUDA version, may resolve the issue.
For usage in device code:
There is no general compatibility approach. If you expect some behavior to be usable in device code, and you run into a compile error like this, you will need to address the issue specifically.
General suggestions may still apply, such as update to the latest version of the 3rd party library you are using, and/or latest CUDA version.

How to build multiple executables with different compilers with cmake?

The situation is as follows:
At the moment, I have a C++/CUDA project and I use make to produce two different executables: The first one is only a CPU version, which ignores the CUDA parts, and the second one is the GPU version.
I can run make to build both versions, make cpu to only build the CPU version and make gpu to only build the GPU version.
For the CPU version, the Intel compiler is used, and for the GPU version, nvcc with g++ is used instead.
Why I consider using cmake:
Now I would also like to be able to build the project under Windows with nmake. Therefore, I thought that cmake was the appropriate solution to generate a make or nmake file based on the platform.
Problem:
However, it seems to be difficult to specify a compiler based on a target (Using CMake with multiple compilers for the same language).
Question:
What is the best way to achieve the behavior described above, building different executables for different architectures with different compilers from the same code base, on Windows as well as on Linux, using cmake?
In your situation you can just use cmake's CUDA support: FindCUDA.
Your CMakeLists.txt would then look like:
find_package(CUDA)
add_executable(cpu ${SRC_LIST})
CUDA_ADD_EXECUTABLE(gpu ${SRC_LIST})
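One caveat worth adding (this goes beyond the original answer): CMake uses a single C++ compiler per build tree, so if the cpu target really must be built with the Intel compiler while the gpu target uses g++ behind nvcc, a common workaround is to configure two build directories, for example (with a reasonably recent CMake; icpc is just a stand-in for whichever Intel compiler driver is installed):
cmake -S . -B build-cpu -DCMAKE_CXX_COMPILER=icpc
cmake --build build-cpu --target cpu
cmake -S . -B build-gpu -DCMAKE_CXX_COMPILER=g++
cmake --build build-gpu --target gpu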

Compile with boost to use whatever boost version is available?

I've compiled a Linux package on Ubuntu 12.04 which uses boost, and on this system I have boost 1.46. I tried to run the compiled release on another system and it complains that it can't find libboost_system.so.1.46.1. That system has boost 1.49 installed. How do I compile so that the program uses whatever version of boost exists instead of the specific version on the development machine?
You cannot expect your program to work with a different version of the library.
The fact that there are /different/ versions implies that they're /not the same/.
As mentioned, either
statically link to your specific version, or
you can ship the shared libraries (as long as you put them in an app-specific location and make sure you find them at runtime). Incidentally, see the second example here: How to compile boost async_client.cpp for the relevant linker options to use a custom library (it assumes the same location is to be used at runtime (rpath)). A sketch of both options follows.
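As a rough illustration (paths, library names and file names are placeholders; which Boost libraries are needed depends on the program):
g++ main.cpp -o app -Wl,-Bstatic -lboost_system -Wl,-Bdynamic           # option 1: link Boost statically
g++ main.cpp -o app -L./libs -lboost_system -Wl,-rpath,'$ORIGIN/libs'   # option 2: ship the .so and find it via rpath
With option 2, the libboost_system.so.1.46.1 from the build machine is copied into the libs/ directory that ships next to the executable.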

compile against libc++ statically

I wrote some custom C++ code and it works fine on Ubuntu, but when I upload it to my server (which uses CentOS 5) it fails and says the library is out of date. I googled all around and CentOS cannot use the latest libraries. How can I compile against the STL so that it is included in the binary and it doesn't matter that CentOS uses an old library?
P.S. I don't want to upload the source to the server and compile there.
In your linking step, you can simply add the "-static" flag to gcc:
http://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Link-Options.html#Link-Options
1. You may install on your Ubuntu box the compiler that matches the version of the library on your server.
2. You may ship your application with the libstdc++.so taken from the system you compiled it on, provided you tune the linking so that it gets loaded instead of CentOS's one.
3. You may compile it statically. To do this, you should switch your compiler from g++ to
gcc -lgcc_s -Wl,-Bstatic -lstdc++ -Wl,-Bdynamic
Choose whatever you like. Note that approaches (2) and (3) may raise the problem of dependencies: your project (particularly, the stdc++ implementation that, being statically linked, is now a part of your app) may require some functions to be present in the system libraries on CentOS. If there are no such functions, your application won't start. The reason this can happen is that the Ubuntu system you're compiling on is newer, and forward compatibility is not preserved in Linux libraries.
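For what it's worth, recent GCC releases also provide dedicated flags that achieve option (3) with less linker juggling (a sketch with a placeholder source file):
g++ -static-libstdc++ -static-libgcc main.cpp -o app
ldd app        # libstdc++.so should no longer appear in the output
The binary still links glibc dynamically, so the forward-compatibility caveat above continues to apply.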