I found this really interesting article: "C/C++ tip: How to detect the compiler name and version using compiler predefined macros".
Is it possible to detect, using a macro, whether the current compiler is Cross GCC (the one used by default with Eclipse CDT)?
It is easy to detect whether you are compiling for ARM, for example, but no macro can tell you whether you are compiling for ARM on ARM or cross-compiling on x86.
You need support from your build system for this: define a variable there and pass it to the compiler, e.g.
gcc -DIS_CROSSCOMPILING=1
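A minimal sketch of how that flag could then be consumed in source; the macro name IS_CROSSCOMPILING is just the convention from the command above, not something the compiler provides:
#include <stdio.h>

int main(void)
{
    /* IS_CROSSCOMPILING comes from the build system, not from gcc itself. */
#if defined(IS_CROSSCOMPILING) && IS_CROSSCOMPILING
    printf("built by a cross compiler\n");
#else
    printf("built by the native compiler\n");
#endif
    return 0;
}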
Using "GCC dump preprocessor defines" (gcc -dM -E - < /dev/null), check the output of your cross compiler and your system compiler yourself. There are a lot of defines, but nothing about cross compilation.
According to http://www.gnu.org/software/automake/manual/html_node/Cross_002dCompilation.html, the autotools perform a check:
checking whether we are cross compiling... yes
and I hope that this result can be made visible to gcc.
You can also run a rather ugly command to check for one kind of cross compilation:
gcc -march=native -E - < /dev/null
This command will fail if the compiler targets a different architecture, but it will not fail if it merely targets a different operating system.
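As a sketch, the check can be scripted by testing the exit status; per the caveat above, this only detects an architecture mismatch:
gcc -march=native -E - < /dev/null > /dev/null 2>&1 && echo "same architecture" || echo "likely a cross compiler"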
Yes, it is. #if defined(__GNUC__) is probably what you are looking for to identify the compiler. If you want to see whether a particular target is used, there are other macros for the hardware:
https://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html
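For example, a small sketch combining the compiler check with a few of the documented target macros (all of these are standard GCC predefined macros):
#include <stdio.h>

int main(void)
{
#if defined(__GNUC__)
    printf("GCC (or compatible) version %d.%d\n", __GNUC__, __GNUC_MINOR__);
#endif
#if defined(__arm__)
    printf("target: 32-bit ARM\n");
#elif defined(__aarch64__)
    printf("target: 64-bit ARM\n");
#elif defined(__x86_64__)
    printf("target: x86-64\n");
#endif
    return 0;
}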
Related
I want to build GCC from source as a cross compiler for ARM bare-metal targets. Specifically, I need support for the architectures armv4t and armv5te with softfp for both of them since they lack a proper FPU.
The (relevant) flags I used are
--with-cpu=arm946e-s --with-mode=arm --with-float=soft --enable-interwork --enable-multilib --with-multilib-list=armv4t,armv5te
In this way I tried to make the compiler default to armv5te via the --with-cpu option while still keeping the ability to build for armv4t.
The binutils build worked fine; however, when building gcc's subdirectory gcc, the multilib check failed with:
For arm946e-s real value is arm946e-s
Error: --with-multilib-list=armv4t,armv5te not supported.
make: *** [Makefile:4356: configure-gcc] Error 1
I looked up how to enable armv5te support, since armv4t with the arm7tdmi seems to be a default multilib target, but found no results. There does not even seem to be a proper list of valid multilib targets. Removing the multilib list yielded a build with armv4t and armv7-a multilibs, which I don't need to support.
How can I build both targets successfully with softfp?
TL;DR: There is no need. Use a wrapper and link against the desired standard libraries.
For soft and hard float, both the gcc libraries and the generated code are affected. For a system with armv4 and armv5, the same compiler will always be able to generate the same code. Moreover, the generated objects share the same ABI, i.e., they pass parameters using the same mechanism. So the assembler instructions within an object may target armv4 or armv5. If you are on the armv5 architecture, you may even link and run the armv4 objects. There is no issue except that the code is sub-optimal.
You may build the gcc libraries twice, with the armv4 and armv5 options, and save the generated libraries; the headers will be identical. When you build for armv5, use the armv5 libraries: use -mcpu, -isystem (if you like) and -L to get the armv5-optimized libraries. For this reason I would use the armv4 build as the default. This can certainly be done with -ffreestanding and a gcc-armv5 wrapper script.
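As a sketch, consuming the second set of libraries could look like this; the target triple, paths, and file names here are hypothetical:
arm-none-eabi-gcc -mcpu=arm946e-s -L/opt/gcc-arm/lib/armv5te -o app.elf main.o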
The multilib configuration is special in that the actual compiler binary can generate two sets of prologue and epilogue code. Even before multilib, it was always possible to generate either armv4 or armv5 code with the same compiler using -mcpu, or -march and -mtune; it is just that they shared the same prologue and epilogue. The instruction-generation backend in gcc has always supported multiple CPUs. Multilib would probably be better named multi-ABI.
How can you tell bazel to use a different C++ compiler on OS X?
bazel build --action_env CC=/path/to/compiler //:target
works on Linux.
But -s shows that bazel always uses external/local_config_cc/wrapped_clang (clang) on OS X, regardless of what CC is set to.
CC works only when you use the C++-only toolchain. If you have Xcode installed, bazel will detect this and automatically pick a different toolchain, the one that supports both C++ and ObjC. That toolchain can only use the Xcode-provided clang.
This is unfortunate and I propose two solutions:
File a feature request for bazel to make it possible to select which toolchain is used. This will allow you to tell bazel that even though you have Xcode installed, you want to use C++ only toolchain with a custom compiler. This is quite simple and doable in a short time.
File a feature request for bazel to make it possible to select which compiler is used with the C++/ObjC toolchain. I cannot comment on the viability of this; I know next to nothing about OS X, and I have no idea whether it makes any sense to compile ObjC with a compiler that is not provided with Xcode...
Actually, with the latest version of bazel, setting
BAZEL_USE_CPP_ONLY_TOOLCHAIN=1
in the environment and then running
bazel build --action_env CC=/path/to/compiler [...]
does work, in the sense that the specified compiler is used. However, there is still a problem with the compiler flags: if the flags set up for the old compiler are incompatible with the new one, the build breaks. I still have to find out how to change the compiler flags.
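Put together as a single shell invocation, it might look like this; the compiler path and target label are placeholders, not from the original answer:
BAZEL_USE_CPP_ONLY_TOOLCHAIN=1 bazel build --action_env CC=/usr/local/bin/gcc-9 //:target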
Use --crosstool_top.
See also --host_crosstool_top and --apple_crosstool_top.
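For example, pointing bazel at a custom toolchain definition might look like this; the label is a hypothetical placeholder for your own cc_toolchain_suite:
bazel build --crosstool_top=//tools/my_toolchain:toolchain_suite //:target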
Several static analysis tools designed for C/C++ exist, but they are not particularly useful for testing CUDA sources.
Since clang version 6 is able to compile CUDA, I wanted to check what my options are for using clang-tidy, which does not seem to have an option for switching architectures.
Is there a way to make it work? For example, a compile-time switch for turning on the CUDA parser, an extension in the form of a custom check, or is it perhaps a planned feature?
One of the problems with clang-based tools is that they do not parse the files in exactly the same way clang itself does.
The first problem is that, unlike C/C++ compilation, CUDA compilation compiles the source multiple times. By default, clang creates multiple compilation jobs when you give it a CUDA file, and that trips up many tools that expect only one compilation. To work around this, you need to pass the --cuda-host-only option to clang-tidy.
You may also need to pass --cuda-path=/path/to/your/CUDA/install/root so that clang can find the CUDA headers.
Another problem you may run into is related to include paths. Clang-derived tools do not have the same default include paths that clang itself uses, and that occasionally causes weird problems. At the very least, clang-tidy needs to find __clang_cuda_runtime_wrapper.h, which is installed along with clang. If you run clang-tidy your-file.c -- -v, it will print clang's arguments and the include search paths it uses. Compare that to what clang -x c /dev/null -fsyntax-only -v prints. You may need to give clang-tidy extra include paths to match those used by clang itself. Note that you should not explicitly add the path to the CUDA includes here; it will be added in the right place automatically by --cuda-path=....
Once you have it all in place, clang-tidy should work on CUDA files.
Something like this:
clang-tidy your-file.cu -- --cuda-host-only --cuda-path=... -isystem /clang/includes -isystem /extra/system/includes
MSVC defines _DEBUG in debug mode, gcc defines NDEBUG in release mode. What macro can I use in clang to detect whether the code is being compiled for release or debug?
If you look at the project settings of your IDE, you will see that those macros are actually defined manually there; they are not automatically defined by the compiler. In fact, there is no way for the compiler to know whether it is building a "debug" or a "release" build; it just builds according to the flags provided by the user (or the IDE).
You have to make your own macros and define them manually, just like the IDE does for you when creating the projects.
Compilers don't define those macros. Your IDE/Makefile/<insert build system here> does. This doesn't depend on the compiler, but on the environment/build helper program you use.
The convention is to define the DEBUG macro in debug mode and the NDEBUG macro in release mode.
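A short sketch of that convention; note that NDEBUG is the one macro with standard meaning, because <assert.h> checks it, while DEBUG is purely conventional:
#include <assert.h>
#include <stdio.h>

int main(void)
{
#ifdef DEBUG
    printf("debug build\n");
#endif
#ifdef NDEBUG
    printf("release build: assert() is compiled out\n");
#endif
    assert(1 + 1 == 2); /* becomes a no-op when NDEBUG is defined */
    return 0;
}
You would build with something like clang -DDEBUG -g -O0 for debug and clang -DNDEBUG -O2 for release.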
You can check the __OPTIMIZE__ macro to determine whether optimization is taking place. That generally means it is not a debug build, since optimizations often rearrange the code sequence, and trying to step through optimized code can be confusing.
This is probably what those most interested in this question are really trying to figure out.
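A minimal sketch of that check; GCC and clang define __OPTIMIZE__ when compiling with optimization enabled (e.g. -O1 or higher), but not at -O0:
#include <stdio.h>

int main(void)
{
#ifdef __OPTIMIZE__
    printf("optimizations are on, probably not a debug build\n");
#else
    printf("no optimizations, probably a debug build\n");
#endif
    return 0;
}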
There is no such thing as a debug mode in a command-line compiler. That is an IDE thing: it just sets up some options to be passed to the compiler.
If you use clang from the command line, you can use whatever you want. The same is true for gcc, so if you use NDEBUG with gcc, you can use exactly the same with clang.
My definition of powerful is the ability to customize.
I'm familiar with gcc and wanted to try MSVC, so I was searching for the gcc-equivalent options in MSVC, but I'm unable to find many of them.
Controlling the kind of output:
Stop after the preprocessing stage; do not run the compiler proper.
gcc: -E
msvc: ???
Stop after the stage of compilation proper; do not assemble.
gcc: -S
msvc: ???
Compile or assemble the source files, but do not link.
gcc: -c
msvc: /c
Useful for debugging:
Print (on standard error output) the commands executed to run the stages of compilation.
gcc: -v
msvc: ???
Store the usual “temporary” intermediate files permanently.
gcc: -save-temps
msvc: ???
Is there some kind of gcc <--> msvc compiler option mapping guide?
The gcc Option Summary lists more options in each section than MSVC's Compiler Options Listed by Category. There seem to be a hell of a lot of important and interesting things missing in MSVC. Am I missing something, or is MSVC really less powerful than gcc?
MSVC is an IDE; gcc is just a compiler. CL (the MSVC compiler) can do most of the steps that you are describing from gcc's point of view. CL /? gives help.
E.g.
Pre-process to stdout:
CL /E
Compile without linking:
CL /c
Generate assembly (unlike gcc, though, this doesn't prevent compiling):
CL /Fa
CL is really just a compiler; if you want to see what commands the IDE generates for compiling and linking, the easiest thing is to look at the command-line section of the property pages for an item in the IDE. CL doesn't call a separate preprocessor or assembler, though, so there are no separate commands to see.
For -save-temps: the IDE performs separate compile and link steps, so object files are preserved anyway. To preserve preprocessor output and assembler output, you can enable the /P and /Fa options through the IDE.
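On the command line, the same outputs can be requested directly; for example (file.cpp is a placeholder):
cl /P file.cpp writes the preprocessor output to file.i instead of compiling.
cl /c /Fa file.cpp compiles to file.obj and also writes an assembly listing to file.asm.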
gcc and CL are different, but I wouldn't say that MSVC lacks "a hell of a lot" of things, certainly not the outputs you are looking for.
For the equivalent of -E, cl.exe has /P (it doesn't "stop after the preprocessing stage", but it writes the preprocessor output to a file, which is largely the same thing).
For -S, it's a little murkier, since the "compilation" and "assembling" steps happen in multiple places depending on which other options you have specified (for example, if whole-program optimization is turned on, machine code is not generated until the link stage).
For -v, Visual C++ is not the same as GCC: it executes all stages of compilation directly inside cl.exe (and link.exe), so there are no "commands executed" to display. Similarly for -save-temps: because everything happens inside cl.exe and link.exe directly, the only "temporary" files are the .obj files that cl.exe produces, and they are always saved anyway.
At the end of the day, though, GCC is an open source project. That means anybody with an itch to scratch can add whatever command-line options they like with relatively little resistance. For Visual C++, a commercial closed-source product, every option needs to have a business case, design meetings, test plans and so on. Every new feature starts with minus 100 points.
Both compilers have a plethora of options for modifying... everything. I suspect that any option not present in either is an option for something not worth doing in the first place. Most "normal" users don't find a use for most of those options anyway.
If you're looking purely at the number of available options as a measure of "power" or "flexibility" then you'll probably find gcc to be the winner, simply because gcc handles many platforms other than Windows and has specific options for many of those platforms that you obviously won't find in MSVC. gcc (well, the gcc toolchain) also compiles a whole lot of languages beyond C and C++; I recently used it for Objective-C, for example.
EDIT: I'm with Dean in questioning the validity of your question. Yes, MSVC (cl) has options for the equivalent of many of gcc's options, but no, the number of options doesn't really mean much.
In short: Unless you're doing something very special, you'll find MSVC easily "powerful enough" on the Windows platform that you will likely not be missing any gcc options.