Can nvcc generate an older PTX ISA version [duplicate] - c++

I've recently downloaded and successfully compiled a small CUDA DLL using NVCC (10.2). Unfortunately, because I have the most recent toolkit version, the distribution requires the most recent driver version too. So I was wondering: is there an NVCC flag that lets me effectively target an earlier driver version and then distribute with an older runtime?
Currently, I have to query the runtime and driver versions to check for compatibility.
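The check looks roughly like this (a minimal sketch using the runtime API's cudaDriverGetVersion and cudaRuntimeGetVersion; error handling omitted):

#include <cuda_runtime_api.h>
#include <cstdio>

bool driverSupportsRuntime() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA version of the runtime this binary was built against
    std::printf("driver %d, runtime %d\n", driverVersion, runtimeVersion);
    // Both are encoded as 1000*major + 10*minor; without forward-compatible
    // drivers, the driver version must be at least as new as the runtime's.
    return driverVersion >= runtimeVersion;
}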

The CUDA toolchain, the runtime API, and its support libraries are versioned. If you build runtime API code with a given toolkit version, you must either ship the resulting code with all the libraries from that version or have users install that toolkit version (aka the TensorFlow problem).
If you use the driver API, then you can potentially target a lower compute capability with PTX, which might be backward compatible with a different driver. I say might because there are still PTX version support limits which can stop it from working correctly.
If you want to support older CUDA versions, just install the older toolchain and build using that toolkit.
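For example (a sketch; the file names are illustrative and the architectures available depend on the toolkit version), building with an older toolkit and embedding PTX for a low compute capability might look like:

nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_30,code=compute_30 -shared -o mykernels.dll kernels.cu

The second -gencode clause embeds the compute_30 PTX itself, which is what lets a newer driver JIT-compile the code for GPUs the toolkit never knew about.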

Related

bazel cross compile tensorflow with opencl support

I am able to build TensorFlow for the ARM platform using a cross toolchain as described in
bazel cross compile
But TensorFlow supports building with OpenCL when we run './configure' from the TensorFlow home directory, where we get the questions below:
Do you wish to build TensorFlow with OpenCL support? [y/N]: y
OpenCL support will be enabled for TensorFlow.
Please specify which C++ compiler should be used as the host C++ compiler. [Default is /usr/bin/g++]:
Please specify which C compiler should be used as the host C compiler. [Default is /usr/bin/gcc]:
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]:
I would like to know how to enable OpenCL when building TensorFlow with the cross toolchain. I suspect the above configuration would get overridden by the settings specified in the cross toolchain.
Actual requirement: cross-build TensorFlow using --config=sycl for an ARM 32-bit system.
Things Done
1. Able to cross-build TensorFlow without --config=sycl for an ARM 32-bit system using the command below:
bazel build --crosstool_top=//armv7l-compiler:toolchain --cpu=armeabi-v7a tensorflow:libtensorflow_cc.so --cxxopt="-std=c++11"
Result: built fine without errors.
2. Cross-build TensorFlow with --config=sycl for an ARM 32-bit system using the command below:
bazel build -c opt --copt="-mfpu=neon" --crosstool_top=//armv7l-compiler:toolchain --cpu=armeabi-v7a tensorflow:libtensorflow_cc.so --cxxopt="-std=c++11" --config=sycl
Result: the build failed at the last moment while trying to link libComputeCpp.so.
I have downloaded the SDK from the Codeplay website for Linux 16.04 and 64-bit ARM. Can I know if there is a 32-bit version?
I basically wanted to improve the speed of TensorFlow on an embedded 32-bit ARM device by using OpenCL or SYCL. Could you also please suggest any other optimization flags to use while building?
Thanks

Relationship between GLIBCXX(libstdc++.so.6) and gcc version

[Situation]
I'm developing a C++ library. I had an issue with the GLIBCXX version.
I originally developed on a machine with GLIBCXX_3.4.22,
but my library did not work on the target machine, which has GLIBCXX_3.4.19.
So I downgraded gcc from 5.2.x to 4.8.x, and GLIBCXX from 3.4.22 to 3.4.19.
The library then ran successfully on the target machine.
But my development machine (Ubuntu) failed to boot, because other programs that were already linked against GLIBCXX_3.4.22 could no longer find it.
So I re-installed GLIBCXX_3.4.22, but the gcc version is still 4.8.5.
[Question]
Does my library, compiled with gcc 4.8.5, avoid using GLIBCXX_3.4.22 symbols? Is it fine to develop in this environment (gcc 4.8.5, GLIBCXX_3.4.22)?
What is the relationship between the gcc (compiler) version and the GLIBCXX (libstdc++) version on a Linux machine?
Where can I check the version compatibility mapping between gcc and GLIBCXX/GLIBC?
libstdc++ (which is where the GLIBCXX_* versioned symbols are from) has been ABI compatible from GCC 3.4.0 to now.
Your library will use the latest symbol versions available in the libstdc++ installed at compile time. The compiler version itself does not matter, with one exception:
GCC 5.1 added C++11 support with a "dual ABI" in libstdc++. Code compiled in C++11 mode will use symbols with the [abi:cxx11] tag, and that may not interoperate with code compiled without C++11 (whether from an older compiler without C++11 support, or a newer compiler set to use an older standard).
https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
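As an illustration (a minimal, libstdc++-specific sketch), the _GLIBCXX_USE_CXX11_ABI macro tells you which ABI a translation unit is using, and compiling with -D_GLIBCXX_USE_CXX11_ABI=0 forces the old ABI when you must link against pre-GCC-5 binaries:

#include <iostream>
#include <string>

int main() {
    // 1: std::string etc. use the new [abi:cxx11] types (the default since GCC 5.1).
    // 0: the old pre-C++11 ABI is in effect (e.g. built with -D_GLIBCXX_USE_CXX11_ABI=0).
    std::cout << "C++11 ABI in use: " << _GLIBCXX_USE_CXX11_ABI << "\n";
}

You can also list which GLIBCXX_* symbol versions a built binary actually requires with objdump -T mylib.so | grep GLIBCXX (mylib.so being your library).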
If you need to build binaries that run on older systems, I would recommend setting up an older distribution inside a container and building in there, with all old libraries. This way, nothing newer from your development system can leak into them.

Glibc vs GCC vs binutils compatibility

Is there any sort of official documentation about version compatibility between binutils, glibc and GCC? I found this matrix for binutils vs GCC version compatibility. It would be good to have something similar for GCC vs glibc as well.
The point I'm asking this for is that I need to know if I can build, say, cross GCC 4.9.2 with "embedded" glibc 2.2.4 to be able to support quite old targets like CentOS 5.
Thank you.
It's extremely unlikely you'll be able to build such an old version of glibc with such a new version of gcc. glibc documents the minimum required versions of binutils and gcc in its INSTALL file.
glibc-2.23 states:
Recommended Tools for Compilation
GCC 4.7 or newer
GNU 'binutils' 2.22 or later
Typically, if you want to go newer than those, glibc will generally work with the version of gcc that was in development at the time of the release. For example, glibc-2.23 was released 18 Feb 2016 and gcc-6 was under development at that time, so glibc-2.23 will work with gcc-4.7 through gcc-6.
So find the version of gcc you want, then find its release date, then look at the glibc releases from around the same time.
All that said, using an old version of glibc is a terrible idea: it will be full of known security vulnerabilities (including remotely exploitable ones). The latest glibc-2.23 release, for example, fixed CVE-2015-7547, which affects any application doing DNS name resolution, and affects versions starting with glibc-2.9. Remember: this is not the only bug lurking.
When building a cross-compiler there are at least two, and sometimes three, platform types to consider:
Platform A is used to BUILD a cross compiler HOSTED on Platform B which TARGETS binaries for embedded Platform C. I used the words BUILD, HOSTED, and TARGETS intentionally, as those are the options passed to configure when building a cross-GCC.
BUILD PLATFORM: Platform of the machine which will create the cross-GCC
HOST PLATFORM: Platform of the machine which will use the cross-GCC to create binaries
TARGET PLATFORM: Platform of the machine which will run the binaries created by the cross-GCC
Consider the following (Canadian Cross Config, BUILD != HOST platform):
A 32-bit x86 Windows PC running the mingw32 toolchain will be used to compile a cross-GCC. This cross-GCC will be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: i686-w32-mingw32
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
However, this is not the typical configuration. Most often BUILD PLATFORM and HOST PLATFORM are the same.
A more typical scenario (Regular Cross Config, BUILD == HOST platform):
A 64-bit x86 Linux server will be used to compile a cross-GCC. This cross-GCC will also be used on 64-bit x86 Linux computers. The binaries created by the cross-GCC should run on a 32-bit PowerPC single-board-computer running LynxOS 178 RtOS (Realtime Operating System).
In the above scenario, our platforms are as follows:
BUILD: x86_64-linux-gnu
HOST: x86_64-linux-gnu
TARGET: powerpc-lynx-lynxos178
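Spelled out as configure options (a sketch; the triplets come from the scenario above, and the installation prefix is illustrative):

../gcc-4.9.2/configure --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=powerpc-lynx-lynxos178 --prefix=/opt/cross-gcc --enable-languages=c,c++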
When building the cross-GCC (assuming a Regular Cross Config, where BUILD == HOST platform), native versions of GNU binutils, GCC, glibc, and libstdc++ (among other libraries) will be required to actually compile the cross-GCC. It is less about specific versions of each component, and more about whether each component supports the specific language features required to compile GCC 4.9.2. (Note: just because GCC 4.9.2 implements language feature X does not mean that language feature X must be supported by the version of GCC used to compile GCC 4.9.2. In the same way, just because glibc-X.X.X implements library feature Y does not mean that the version of GCC used to compile glibc-X.X.X must have been linked against a glibc that implements feature Y.)
In your case, you should simply build your cross-GCC 4.9.2 (or, if you are not cross compiling, i.e. you are compiling for CentOS 5 on Linux, build a native GCC 4.9.2), and then, when you link your executable for CentOS 5, explicitly link glibc 2.2.4 using -l:libc.so.2.2.4. You will also probably need to specify -std=c99 or -std=gnu99 when you compile, as I highly doubt glibc 2.2.4 supports the C 2011 standard.
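Put together, the final compile-and-link step might look something like this (a sketch; the compiler name follows the target triplet above, and the file names are illustrative):

powerpc-lynx-lynxos178-gcc -std=gnu99 -o myapp main.c -l:libc.so.2.2.4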

Run my code on older version of linux

I am new to Linux programming, and I wonder: is there a way to run (not recompile) my C++ executable on an older version of Linux of the same distribution?
Example: Say I compiled my code on RHEL 6 and want to run my executable on RHEL 4 or 5.
On Windows, when we do this, we just install the C++ runtime matching the compiler version.
Example: if I use VS2012 to build a C++ project using C++11, I just need to install the corresponding C++ runtime on the client machine to run my application, no matter what version of Windows I am using (starting from Windows XP, of course).
By far the easiest way is to make use of the strong forward compatibility of glibc and the GCC runtime libraries: a binary built against an older version keeps working with newer ones. Compile your executable on the oldest OS you want it to run on, and it should work on anything later without recompiling (that is, some symlinks may be needed to satisfy the dependencies the executable loader is expecting).
In general it is best to compile for each distribution you want to support, so that no unexpected conflicts appear.
Actually, yes. Find your app's dependencies (using e.g. ldd) and copy them (e.g. libstdc++.so.6) from your build system to somewhere on your target system (e.g. /mylibs). Point your app there (e.g. using patchelf's --set-rpath and --set-interpreter). Your app should run (test it!). If not, it's likely that your glibc is incompatible with the older kernel. You can solve this by recompiling the required version of glibc to support the older kernel, using the --enable-kernel=<version> ./configure switch. If your required version of glibc doesn't support that kernel version, then you can supply the missing functions in .so files and load them with LD_PRELOAD.
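Concretely, the steps might look like this (a sketch; the binary and library names are illustrative, and the dynamic loader's name varies by architecture):

ldd ./myapp                                   # list the shared libraries the app needs
mkdir -p /mylibs
cp /path/from/build/system/libstdc++.so.6 /mylibs/    # copy each needed library over
patchelf --set-rpath /mylibs ./myapp          # make the app search /mylibs first
patchelf --set-interpreter /mylibs/ld-linux-x86-64.so.2 ./myapp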

Can I target older linux with newer gcc/clang? C++

Right now I compile my C++ software on a certain old version of Linux (SLED 10) using the provided gcc, and it can run on most newer versions, as they have a newer glibc. The problem is that the old gcc doesn't support C++11, and I'd really like to use the new features.
Now I have some ideas, but I'm sure others have the same need. What's actually worked for you?
Ideas:
Build on a newer system, static link to newer glibc. (Not possible, right?)
Build on a newer system, compile and link against an older glibc.
Build on an older system using an updated gcc, link against older glibc.
Build on a newer system, dynamic link to newer glibc, set RPath and provide our glibc with installer.
As a bonus, my software also supports plugins and has an SDK. I'd really prefer that my customers could compile against my libraries without a huge hassle.
Thanks in advance. Ideas welcome, proven solutions preferred.
Build with the newer gcc. Either install the new compiler on the old machine, or compile on your new machine and install the necessary dynamic libraries on the old machine.
Note that multiple versions of libc (and also libstdc++) are supported on a single machine, since they are typically versioned (e.g. libc.so.5, libc.so.6, etc.).
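Either way, before shipping you can check exactly which versioned symbols your binary pulls in, and hence the oldest glibc/libstdc++ it can run against (a sketch; myapp is illustrative):

objdump -T myapp | grep -E 'GLIBC_|GLIBCXX_'    # shows the glibc/libstdc++ symbol versions required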