I want to build GCC from source as a cross compiler for ARM bare-metal targets. Specifically, I need support for the architectures armv4t and armv5te with softfp for both of them since they lack a proper FPU.
The (relevant) flags I used are
--with-cpu=arm946e-s --with-mode=arm --with-float=soft --enable-interwork --enable-multilib --with-multilib-list=armv4t,armv5te
In this way I tried to make the compiler default to armv5te via the --with-cpu option while still retaining the ability to build for armv4t.
The binutils build worked fine; however, when building gcc's subdirectory gcc, the multilib check failed with:
For arm946e-s real value is arm946e-s
Error: --with-multilib-list=armv4t,armv5te not supported.
make: *** [Makefile:4356: configure-gcc] Error 1
I looked for how to enable armv5te support, since armv4t with the arm7tdmi seems to be a default multilib target, but found no results. There doesn't even seem to be a proper list of valid multilib targets. Removing the multilib list yielded a build with armv4t and armv7-a multilibs, which I don't need to support.
How can I build both targets successfully with softfp?
TL;DR: There is no need. Use a wrapper and link against the desired standard libraries.
For soft and hard float, both the gcc libraries and the generated code are affected. For a system with armv4 and armv5, the same compiler will always be able to generate the same code. Moreover, the generated objects use the same ABI; i.e., they pass parameters using the same mechanism. So the assembler instructions within an object may target armv4 or armv5. On an armv5 system you may even link and run the armv4 objects. There is no issue except that the code is sub-optimal.
You may build the gcc libraries twice, with the armv4 and the armv5 options, and save the generated libraries. The headers will be identical. When you build for armv5, use the armv5 libraries: pass -mcpu, -isystem (if you like) and -L to get the armv5-optimized libraries. For this reason I would use the armv4 build as the default. This can certainly be done with -ffreestanding and a gcc-armv5 wrapper script.
The multilib configuration is special in that the actual compiler binary can generate two sets of prologue and epilogue. Even before multilib, it was always possible to generate either armv4 or armv5 code with the same compiler using -mcpu, or -march and -mtune; the code just shared the same prologue and epilogue. The instruction-generation backend in gcc has always supported multiple CPUs. Multilib would probably be better named multi-ABI.
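The wrapper approach above can be sketched as a small shell function (a sketch only; the toolchain name arm-none-eabi-gcc and the library path /opt/arm/armv5te/lib are assumptions, not taken from the question's toolchain):

```shell
# Hypothetical gcc-armv5 wrapper, written as a function for clarity.
# REAL_CC    - the cross compiler built with the armv4t defaults
# ARMV5_LIBS - directory holding the armv5te-built copies of the
#              libraries saved from the second library build
gcc_armv5() {
  "${REAL_CC:-arm-none-eabi-gcc}" \
      -march=armv5te -mfloat-abi=soft \
      -L"${ARMV5_LIBS:-/opt/arm/armv5te/lib}" \
      "$@"
}
```

Installed as an executable script named gcc-armv5 on PATH, this lets builds pick up the armv5te-optimized libraries without a second compiler.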
Related
The Intel Classic Compiler had a compilation flag, -ax, which I used to generate additional code paths for multiple instruction sets, such as AVX-512. This was very convenient, as I only had to build and ship a single binary.
With the next-generation Intel compiler, the documentation omits this flag from the full list of compiler flags but still references it in other sections (including the porting guide). When I attempt to use it with versions 2022.0.0 or 2022.0.1, icpx tells me the flag is unrecognized:
$ icpx -axCORE-AVX512
icpx: command line warning #10430: Unsupported command line options encountered
These options as listed are not supported.
For more information, use '-qnextgen-diag'.
option list:
-axCORE-AVX512
Is there any way to recover the old behaviour with the new compiler? Otherwise it looks like I will have to quintuple the CI time spent building and deploying in order to ship five binaries instead of one, not to mention that users will now also need to know which instructions their CPU supports.
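One common workaround for the last point is a small launcher that inspects the CPU's flag list and picks the matching build, so users never have to know what their CPU supports. A sketch (the binary names such as myapp.avx512 are made up):

```shell
# Hypothetical dispatcher: choose a per-ISA build from a CPU flag list.
# On Linux the flag list comes from /proc/cpuinfo.
pick_binary() {
  flags="$1"   # space-separated CPU feature flags
  case " $flags " in
    *" avx512f "*) echo "myapp.avx512" ;;  # AVX-512 foundation present
    *" avx2 "*)    echo "myapp.avx2"   ;;  # fall back to AVX2
    *)             echo "myapp.sse2"   ;;  # baseline build
  esac
}
# Real usage: exec "$(pick_binary "$(grep -m1 '^flags' /proc/cpuinfo)")" "$@"
```

This keeps the per-ISA builds, but hides the choice from the user behind one entry point.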
Several static analysis tools designed for C/C++ exist, but they are not particularly useful for testing CUDA sources.
Since clang is able to compile CUDA as of version 6, I wanted to check what my options are for using clang-tidy, which does not seem to have an option for switching architectures.
Is there a way to make it work? For example, a compile-time switch that turns on the CUDA parser, an extension in the form of a custom check, or is it perhaps a planned feature?
One of the problems with clang-based tools is that they do not parse the files in exactly the same way clang itself does.
The first problem is that, unlike C/C++ compilation, CUDA compilation compiles the source multiple times. By default, clang creates multiple compilation jobs when you give it a CUDA file, and that trips up many tools that expect only one compilation. To work around that, you need to pass the --cuda-host-only option to clang-tidy.
You may also need to pass --cuda-path=/path/to/your/CUDA/install/root so clang can find CUDA headers.
Another problem you may run into is related to include paths. Clang-derived tools do not have the same default include paths that clang itself uses, and that occasionally causes weird problems. At the very least, clang-tidy needs to find __clang_cuda_runtime_wrapper.h, which is installed along with clang. If you run clang-tidy your-file.c -- -v, it will print clang's arguments and the include search paths it uses. Compare that to what clang -x c /dev/null -fsyntax-only -v prints. You may need to give clang-tidy extra include paths to match those used by clang itself. Note that you should not explicitly add the path to the CUDA includes here; it will be added in the right place automatically by --cuda-path=....
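To compare the two lists side by side, the include-search section of the -v output can be extracted with a small helper (a sketch; the delimiter lines are the ones clang prints in its verbose output):

```shell
# Filter a compiler's verbose (-v) output down to just the include
# search list, i.e. the lines between the two delimiters clang prints.
include_paths() {
  sed -n '/#include <...> search starts here:/,/End of search list\./p'
}
# e.g.: clang -x c /dev/null -fsyntax-only -v 2>&1 | include_paths
#       clang-tidy your-file.c -- -v 2>&1 | include_paths
```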
Once you have it all in place, clang-tidy should work on CUDA files.
Something like this:
clang-tidy your-file.cu -- --cuda-host-only --cuda-path=... -isystem /clang/includes -isystem /extra/system/includes
Is there any way to compile GCC's libstdc++ with the SYSV hash style instead of the GNU one? I have a toolchain (via crosstool-ng) that I use to compile our company library to work with a very wide range of Linux systems.
One of these systems is a very old Red Hat that supports only the SYSV hash style. When I compile a C-only library/program with the toolchain, it works great, since the generated binary uses SYSV.
But when I link against libstdc++, the binary automatically switches to the GNU style, because libstdc++ itself was built with the GNU style; hence the question.
Running the binary in this system gives me the error
ELF file OS ABI invalid
Just for completeness, I have already tried -Wl,--hash-style=sysv, without success.
Also, I have another toolchain, for an ARM system, with the same versions of GCC, glibc, etc., but in that toolchain libstdc++ uses SYSV; I don't know why.
Thanks in advance!
Try rebuilding your GCC with the --disable-gnu-unique-object configure option. According to the documentation on GCC configure options:
--enable-gnu-unique-object
--disable-gnu-unique-object
Tells GCC to use the gnu_unique_object relocation for C++ template static data members and inline function local statics. Enabled by default for a toolchain with an assembler that accepts it and GLIBC 2.11 or above, otherwise disabled.
Using gnu_unique_object may lead to the GNU ABI in your final executable, which is not supported on old Red Hat.
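A hedged sketch of what the rebuild might look like (the source path, target triple, prefix, and the other options are placeholders; only --disable-gnu-unique-object is the point here). If the toolchain comes from crosstool-ng, the flag can likely be injected through its extra-configure-arguments setting for gcc instead of configuring by hand:

```shell
# Placeholder rebuild of GCC with gnu_unique_object disabled.
../gcc-src/configure \
    --target=x86_64-unknown-linux-gnu \
    --prefix=/opt/x-tools \
    --disable-gnu-unique-object \
    --enable-languages=c,c++
make && make install
```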
I found this really interesting article: C/C++ tip: How to detect the compiler name and version using compiler predefined macros
Is it possible to detect, using a macro, whether the current compiler is Cross GCC (the one used by default with Eclipse CDT)?
It is easy to detect if you are compiling e.g. for ARM, but it is not possible to detect via a macro whether you are compiling for ARM on ARM or cross-compiling on x86.
You need support for this in the build system, which then passes a variable to the compiler:
gcc -DIS_CROSSCOMPILING=1
Using this GCC dump of preprocessor defines, check the output of your cross compiler and your system compiler yourself. There are a lot of defines, but nothing about cross compilation.
According to http://www.gnu.org/software/automake/manual/html_node/Cross_002dCompilation.html, the autotools perform a check:
checking whether we are cross compiling... yes
and I hope that this result can be made visible to gcc.
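That autoconf result can indeed be forwarded to the compiler from the shell code in configure.ac, since autoconf sets cross_compiling to "yes" when --host differs from --build. A sketch, where IS_CROSSCOMPILING is our own made-up macro name (wrapped in a function here only so it is easy to exercise; in a real configure.ac you would inline the if):

```shell
# Append -DIS_CROSSCOMPILING=1 to CPPFLAGS when autoconf has
# detected a cross build (cross_compiling=yes).
add_cross_define() {
  if test "x$cross_compiling" = "xyes"; then
    CPPFLAGS="$CPPFLAGS -DIS_CROSSCOMPILING=1"
  fi
}
```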
You can also run a somewhat ugly command to check for one kind of cross compilation
gcc -march=native -E - < /dev/null
This command will fail if the compiler is a cross compiler for a different architecture, but it will not fail if it is merely a cross compiler for a different operating system.
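Wrapped into a reusable check, that trick might look like this (a sketch; note it also reports "cross" when the compiler simply does not understand -march=native at all):

```shell
# Probe whether a compiler appears to target the build machine:
# -march=native only works when the compiler recognizes the host CPU,
# so a cross compiler for another architecture rejects it.
probe_native() {
  cc="${1:-gcc}"
  if "$cc" -march=native -E - </dev/null >/dev/null 2>&1; then
    echo "native"
  else
    echo "cross (or -march=native unsupported)"
  fi
}
```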
Yes, it is. #if defined(__GNUC__) is probably what you are looking for to identify the compiler. If you want to see whether a particular target is used, there may be another macro for the hardware.
https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html
I would like to compile software using the autotools build system to LLVM bitcode; that is, I would like the executables obtained at the end to be LLVM bitcode, not actual machine code.
(The goal is to be able to run LLVM bitcode analysis tools on the whole program.)
I've tried specifying CC="clang -emit-llvm -use-gold-plugins" and variants to the configure script, to no avail. There is always something going wrong (e.g. the package builds .a static libraries, which are refused by the linker).
It seems to me that the correct way to do this would be for LLVM bitcode to be a cross-compilation target, to be set with --host=, but there is no such standard target (even though there is one for Knuth's MMIX).
So far I've used kludges, such as compiling with CC="clang -emit-llvm -use-gold-plugins" and running linking lines (using llvm-ld or llvm-link) manually. This works for simple packages such as grep.
I would like a method that's robust and works with most, if not all, configure scripts, including when there are intermediate .a files, or intermediate targets.
There are some methods like this, but for simple builds where intermediate static libraries are not used, you can do something simpler. The things you will need are:
llvm, configured with gold plugin support. Refer to this
clang
dragonegg, if you need a front end for Fortran, Go, etc.
The key is to enable -flto for either clang or dragonegg (the front end), both at compile time and link time. For clang it is straightforward:
CC = clang
CLINKER = clang
CFLAGS = -flto -c
CLINKFLAGS = -flto -Wl,-plugin-opt=also-emit-llvm
If needed, add further -plugin-opt options to specify LLVM-specific codegen options:
-Wl,-plugin-opt=also-emit-llvm,-plugin-opt=-disable-fp-elim
The dumped whole-program bitcode will sit alongside your final executable.
Two additional things are needed when using dragonegg.
First, dragonegg is not aware of the location of the LLVM gold plugin; it has to be specified in the linker flags, like this: -Wl,-plugin=/path/to/LLVMgold.so,-plugin-opt=...
Second, dragonegg is only able to dump IR rather than bitcode; you need a wrapper script for that purpose. I created one here. It works fine for me.
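Since the linked script is not reproduced here, a minimal sketch of the idea: run llvm-as over each dumped textual IR (.ll) file to assemble it into bitcode (.bc). The LLVM_AS override is an addition of this sketch, not part of the original script:

```shell
# Assemble each textual LLVM IR file (.ll) into bitcode (.bc).
# LLVM_AS may override the assembler binary (defaults to llvm-as).
ll_to_bc() {
  as_tool="${LLVM_AS:-llvm-as}"
  for f in "$@"; do
    "$as_tool" "$f" -o "${f%.ll}.bc" || return 1
  done
}
# e.g.: ll_to_bc prog.ll   produces prog.bc
```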