Linker error when enabling Link Time Optimization in NDK - c++

When I add the flag -flto to my NDK C++ project the linker emits the following error: "Optimization level must be between 0 and 3", even though my optimization level is explicitly set to 3 via -O3.
Does anyone know how to solve this?
The compiler flags are passed via Gradle, which, as I understand it, should pass the flags to both the Clang compiler and the linker. When I remove the -flto flag everything works fine.
Notes:
I'm using NDK 19.2 (the latest version at the time of writing).
I also get the warning "clang++.exe: warning: argument unused during compilation: '-Wa,--noexecstack' [-Wunused-command-line-argument]", which does not appear when I compile without link-time optimization.

Two parts to the answer:
The error is caused by https://github.com/android-ndk/ndk/issues/721. Clang's LTO plugin just doesn't accept -Os or -Oz. This is a bug.
Okay, I might be really stupid; I suppose "between 0 and 3" means 1 or 2 :)
It's actually because you can't use the generic cppFlags to set optimization levels. cppFlags corresponds to CMAKE_CXX_FLAGS; optimization levels need to go in CMAKE_CXX_FLAGS_DEBUG and CMAKE_CXX_FLAGS_RELEASE (and/or the C flavors of those). CMake has its own defaults in those variables, and the command line is built as ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_RELEASE}, so your -O3 is being overridden by the per-configuration default.
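If the project goes through CMake (via externalNativeBuild, which is an assumption here), a minimal sketch of the fix in CMakeLists.txt could look like the following; the exact flag set is illustrative, the point is only that the per-configuration variables are the ones being appended to:

# Sketch: append to the per-configuration variables so CMake's own Release
# defaults (the -O level plus -DNDEBUG) no longer override your flags.
set(CMAKE_C_FLAGS_RELEASE   "${CMAKE_C_FLAGS_RELEASE} -O3 -flto")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3 -flto")
# LTO also needs -flto on the link line.
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -flto")

Alternatively, the same variables can be passed from Gradle through externalNativeBuild's cmake arguments block (e.g. "-DCMAKE_CXX_FLAGS_RELEASE=...") instead of editing CMakeLists.txt.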

I also encountered this linking error, but I fixed it in a different way.
I had to cross-compile a library for Android for both arm64 and armv7. Everything was fine for arm64, but I hit the linking error for armv7. I found it could be fixed by commenting out the following statements in my CMakeLists.txt:
if (${CMAKE_MAJOR_VERSION} GREATER_EQUAL 3 AND ${CMAKE_MINOR_VERSION} GREATER_EQUAL 9)
    cmake_policy(SET CMP0069 NEW)
    set(CMAKE_POLICY_DEFAULT_CMP0069 NEW)
    include(CheckIPOSupported)
    check_ipo_supported(RESULT ipo_supported OUTPUT ipo_supported_output)
    if (ipo_supported)
        set(CMAKE_INTERPROCEDURAL_OPTIMIZATION TRUE)
    endif ()
endif ()
The above CMake statements enable IPO, and that appears to cause the linking error when cross-compiling for Android armv7.
I don't know why IPO should not be enabled for Android armv7.
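Rather than dropping IPO everywhere, a possible compromise (just a sketch, assuming the NDK's toolchain file has defined ANDROID_ABI, as it normally does) is to skip it only for the ABI that fails to link:

include(CheckIPOSupported)   # requires CMake >= 3.9
check_ipo_supported(RESULT ipo_supported OUTPUT ipo_supported_output)
# Keep IPO for ABIs where it links cleanly; skip 32-bit ARM, which hit the error above.
if (ipo_supported AND NOT ANDROID_ABI STREQUAL "armeabi-v7a")
    set(CMAKE_INTERPROCEDURAL_OPTIMIZATION TRUE)
endif ()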

Related

Clang: What exactly does the "-stdlib" flag do?

[macOS 10.13.2, Xcode 9.2]
Clang has a flag -stdlib which, according to both clang++ -cc1 --help (same as clang -cc1 --help) and the LLVM documentation page, allows specification of the C++ standard library to use.
1) How does this flag impact compilation? I.e. does it change the order of library include paths etc.?
2) How does this flag impact linking? I.e. is it just a short-cut/alternative for supplying -lc++?
I am really interested in understanding the details of this flag because I can't find any documentation describing its precise behaviour, and it has been causing havoc with our build system since the Xcode 9 upgrade. Inclusion of -stdlib=libc++ in our Makefile causes the compilation to fail due to header problems, yet, when -stdlib=libc++ is omitted, our projects compile fine (presumably because libc++ is the macOS default standard C++ library). The projects link against libc++ via other linker flags, -lc++ and -lsupc++.
Some background info about our use-case
We are using Clang to cross-compile to a -march=i686 -target i686-linux-elf target. Prior to the Xcode 9 update, our build system was working fine. Since the upgrade we're getting compiler errors such as:
/usr/local/our-target/sysroot/usr/local/include/c++/v1/stdlib.h:111:82: error: use of undeclared identifier 'labs'; did you mean 'abs'?
inline _LIBCPP_INLINE_VISIBILITY long abs( long __x) _NOEXCEPT {return labs(__x);}
I've now been able to fix this problem by changing the header include paths. Namely, I removed a path reference to the folder that is the parent of both the libc++ and gcc 4.8.5 includes.
# -I${STAGING.nao}/usr/local/include/c++ \
-I${STAGING.nao}/usr/local/include/c++/v1
I am still very interested in understanding the details of what the flag does.

Cannot use -std=c++11 and -l/-L options at the same time in Eclipse Neon.3

I am trying to work through this tutorial on OpenCL, on a Windows 10 dev system which has integrated Intel HD graphics. I have installed Intel's OpenCL SDK, and I have added the include directory from the SDK install into Properties > C/C++ General > Paths and Symbols > Includes. I am using MinGW as my compiler in Eclipse.
In response to a number of linker errors that popped up when I first tried to compile the project, I set up the linker in eclipse to point to opencl.lib as outlined in this answer.
That took care of the linker errors, but there's an offending line from the tutorial which makes it impossible for the tutorial boiler-plate to compile:
cl_int result = program.build({ device }, "");
Set up as I am, this gives me the following warning and error:
..\src\main.cpp:93:32: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
..\src\main.cpp:93:45: error: no matching function for call to 'cl::Program::build(<brace-enclosed initializer list>, const char [1])'
If I'm reading this correctly (I haven't used C++ since before C++11 was a thing), the compiler is first warning me that it doesn't properly recognize what {device} is supposed to be (a vector of devices which has only one entry in it, initialized earlier in the code). Then, since it doesn't recognize {device}, the compiler errors out because it can't find a signature for cl::Program::build with arguments that match whatever-the-heck it's interpreting {device} to be.
Following the warning's advice, I followed the instructions given in this answer to add the -std=c++11 option for the compiler. However, when I do that the linker errors come back. Trying to compile with these options results in about thirty errors which all basically say they can't find any reference for the CL calls in the library files. For example:
C:/Program Files (x86)/Intel/OpenCL SDK/6.3/include/CL/cl.hpp:1753: undefined reference to `clGetPlatformInfo@20'
How do I make the compiler behave? I think I remember reading somewhere that the order of compiler options in the command line matters with regard to linking; could that be messing up my compile since I added the -std=c++11 option?
I (sort of) figured out why the compiler was unhappy: the library I was linking was the x64 OpenCL library installed in [base Intel dir]\OpenCL SDK\6.3\lib\x64, but (I think?) my compiler is not set up to create x64 apps. When I linked against the .lib file in OpenCL SDK\6.3\lib\x86 instead, the linker errors disappeared.

-mimplicit-it compiler flag not recognized

I am attempting to compile a C++ library for a Tegra TK1. The library links to TBB, which I pulled using the package manager. During compilation I got the following error:
/tmp/cc4iLbKz.s: Assembler messages:
/tmp/cc4iLbKz.s:9541: Error: thumb conditional instruction should be in IT block -- `strexeq r2,r3,[r4]'
A bit of googling and this question led me to try adding -mimplicit-it=thumb to CMAKE_CXX_FLAGS, but the compiler doesn't recognize it.
I am compiling on the Tegra with kernel 3.10.40-grinch-21.3.4, using the gcc 4.8.4 compiler (that's what comes back when I type c++ -v).
I'm not sure what the initial error message means, though I think it has something to do with the linked TBB library rather than the source I'm compiling. The problem with the fix is also mysterious. Can anyone shed some light on this?
-mimplicit-it is an option to the assembler, not to the compiler. Thus, in the absence of specific assembler flags in your makefile (which you probably don't have, given that you don't appear to be using a separate assembler step), you'll need to use the -Wa option to the compiler to pass it through, i.e. -Wa,-mimplicit-it=thumb.
The source of the issue is almost certainly some inline assembly - possibly from a static inline in a header file if you're really only linking pre-built libraries - which contains conditionally-executed instructions (I'm going to guess it's something like a cmpxchg implementation). Since your toolchain could well be configured to compile to the Thumb instruction set - which requires a preceding it (If-Then) instruction to set up conditional instructions - by default, another alternative might be to just compile with -marm (and/or remove -mthumb if appropriate) and sidestep the issue by not using Thumb at all.
Adding the compiler option -Wa,-mimplicit-it=thumb should solve the problem.
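For a CMake build like the one in the question, a minimal sketch of how to forward the flag (the -Wa, prefix makes the compiler driver hand it to the assembler) might look like this; the exact variables you append to are an assumption:

# Sketch: pass the assembler option through the compiler driver.
set(CMAKE_C_FLAGS   "${CMAKE_C_FLAGS} -Wa,-mimplicit-it=thumb")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wa,-mimplicit-it=thumb")
# Alternative from the answer above: avoid Thumb entirely instead.
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -marm")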

How to use nvcc as compiler in ns3

I'm trying to use CUDA in ns3, but when I run CXX="nvcc" ./waf configure, it shows the following message on the screen:
Checking for 'g++' (C++ compiler) : not found
Checking for 'clang++' (C++ compiler) : not found
Checking for 'icpc' (C++ compiler) : not found
could not configure a C++ compiler!
(complete log in /home/kelu/workspace/ns-3.24/build/config.log)
I checked config.log, and it says the following:
Checking for 'g++' (C++ compiler)
find program='nvcc' paths=['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/local/cuda/bin'] var='CXX' -> ['nvcc']
from /home/kelu/workspace/ns-3.24: Could not determine the compiler type
not found
----------------------------------------
Checking for 'clang++' (C++ compiler)
find program='nvcc' paths=['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/local/cuda/bin'] var='CXX' -> ['nvcc']
from /home/kelu/workspace/ns-3.24: Not clang/clang++
not found
----------------------------------------
Checking for 'icpc' (C++ compiler)
find program='nvcc' paths=['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/local/cuda/bin'] var='CXX' -> ['nvcc']
from /home/kelu/workspace/ns-3.24: Not icc/icpc
not found
from /home/kelu/workspace/ns-3.24: could not configure a C++ compiler!
nvcc is located in /usr/local/cuda/bin, which is in the PATH. But it seems to me that the ns3 build script does not recognize nvcc as a compiler.
Could anybody please tell me the right way to make nvcc the CXX compiler in ns3?
Thanks.
Your problem probably is that Waf checks the compiler's built-in #defines to determine whether a compiler invoked as "gcc" really is gcc. As a concrete example, it will error out if it detects that the compiler is Intel's icc (because it #defines __INTEL_COMPILER) but was invoked with a "gcc" command line!
The code that does the identification is at https://waf.io/apidocs/_modules/waflib/Tools/c_config.html#get_cc_version .
So, if you don't have a compiler which tries hard to look like one of the supported ones, it looks like you are supposed to write your own Waf tool.
However, you can try to hack your way through. For example, let's say that your compiler is compatible enough with gcc but it still doesn't get past Waf's absurdly stringent test. A fix is to run "waf configure" using the real gcc, and then edit the file where Waf stores the detection results, so that at the build step Waf will actually run your compiler instead of gcc. You can do this by editing build/c4che/_cache.py: change the CXX (or CC) definition to your compiler's full path.
I haven't found a way to use nvcc as the compiler in ns-3, but I did find a workaround for this problem. I'm happy to share my solution here to help others:
Make your CUDA code a static library:
ar rcs libcudacode.a a.o b.o c.o (you need to build the *.o files first, using g++, nvcc, or anything else you want)
Put libcudacode.a in /your/lib/folder/ and put your CUDA sources in /your/src/folder.
Add the lib folder and src folder to the waf configuration:
CXXFLAGS_EXTRA="-I/your/src/folder -I/your/cuda/dir/include" LINKFLAGS_EXTRA="-L/your/lib/folder -L/your/cuda/dir/lib64 -lcudacode -lcudart" ./waf configure
./waf
Your code should now compile. You can access any public function in your CUDA code by #including "corresponding_header.h"
A bit of explanation:
CXXFLAGS_EXTRA and LINKFLAGS_EXTRA add extra compile and link flags to ns-3's build system. You need to add both your own CUDA code and NVIDIA's CUDA libraries to use the functions.
If you use any other libraries, also put them in CXXFLAGS_EXTRA and LINKFLAGS_EXTRA.
Check the CUDA library directory name on your system; it may not be lib64 on your machine.

MacPorts clang not using its own headers

I'm trying to get emscripten to work on OS X 10.8, see this post for some related issues there. Apparently the clang++ version shipped with Xcode is too old, so I got a recent clang 3.7.0 using MacPorts. I even told CMake to use that compiler (passing -DCMAKE_CXX_COMPILER=clang++-mp-3.7 on the command line), but it still fails:
[ 33%] Building CXX object CMakeFiles/optimizer.dir/parser.cpp.o
/opt/local/bin/clang++-mp-3.7 -std=c++11 -fno-exceptions -fno-rtti -O3 -DNDEBUG
-o CMakeFiles/optimizer.dir/parser.cpp.o
-c …/emsdk/emscripten/master/tools/optimizer/parser.cpp
In file included from …/emsdk/emscripten/master/tools/optimizer/parser.cpp:2:
In file included from …/emsdk/emscripten/master/tools/optimizer/parser.h:12:
…/emsdk/emscripten/master/tools/optimizer/istring.h:3:10: fatal error:
'unordered_set' file not found
#include <unordered_set>
^
1 error generated.
I can reproduce that issue by launching the compiler from the command line. In parallel build mode, it sometimes complains about <cstdint> for optimizer.cpp instead. Both headers exist in /opt/local/libexec/llvm-3.7/include/c++/v1/.
What's the canonical way to use the macports-installed version of clang++ including its headers? Do I have to use -I and work out the full path, or is there something shorter?
Can I safely do so without also switching the runtime library to the one shipped with MacPorts' clang? If not, can I somehow encode the full path of the runtime library into the created binary, either for that single library or using the -rpath argument to ld or some equivalent alternative?
Update: I get unresolved symbols when I try to link stuff after specifying the include directory manually, and I don't know how to solve that. The libcxx package from MacPorts is empty except for a readme file.
I've solved the original problem by adding CXXFLAGS=--stdlib=libc++ to the environment. With that, even the system version of clang does everything I need. The flag works its magic for MacPorts' version of clang as well: with it specified I get a successful build, and I can even verify (using the -E compiler switch) that it's using the headers I mentioned before. I'm still not certain whether there is anything to ensure that the headers match the system's version of libc++, though.
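If you prefer to set this through CMake rather than the environment, a rough equivalent (a sketch, not the emscripten project's own build code) is to append the flag to the usual variables so it reaches both the compile and the link lines:

# Sketch: make every compile and link step use clang's own libc++ headers/runtime.
set(CMAKE_CXX_FLAGS        "${CMAKE_CXX_FLAGS} -stdlib=libc++")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -stdlib=libc++")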