Is Visual C++ as powerful as gcc?

My definition of powerful is the ability to customize.
I'm familiar with gcc and wanted to try MSVC, so I was searching for the gcc-equivalent options in MSVC, but I'm unable to find many of them.
Controlling the kind of output
Stop after the preprocessing stage; do not run the compiler proper.
gcc: -E
msvc: ???
Stop after the stage of compilation proper; do not assemble.
gcc: -S
msvc: ???
Compile or assemble the source files, but do not link.
gcc: -c
msvc:/c
Useful for debugging
Print (on standard error output) the commands executed to run the stages of compilation.
gcc: -v
msvc: ???
Store the usual “temporary” intermediate files permanently;
gcc: -save-temps
msvc: ???
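For reference, the gcc side of the list above looks roughly like this on the command line (main.cpp is just a placeholder file name):
g++ -E main.cpp -o main.ii     (stop after preprocessing)
g++ -S main.cpp                (stop after compilation proper, produces main.s)
g++ -c main.cpp                (compile and assemble but do not link, produces main.o)
g++ -v -save-temps -c main.cpp (print the driver's sub-commands and keep main.ii and main.s)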
Is there some kind of gcc <--> msvc compiler option mapping guide?
The gcc Option Summary lists more options in each section than MSVC's Compiler Options Listed by Category does. A hell of a lot of important and interesting things seem to be missing in msvc. Am I missing something, or is msvc really less powerful than gcc?

MSVC is an IDE; gcc is just a compiler. CL (the MSVC compiler) can do most of the steps that you are describing for gcc. CL /? gives help.
E.g.
Pre-process to stdout:
CL /E
Compile without linking:
CL /c
Generate assembly (unlike gcc, though, this doesn't prevent compiling):
CL /Fa
CL is really just a compiler; if you want to see what commands the IDE generates for compiling and linking, the easiest thing is to look at the command-line section of the property pages for an item in the IDE. CL doesn't call a separate preprocessor or assembler, though, so there are no separate commands to see.
For -save-temps, the IDE performs separate compile and link steps, so object files are preserved anyway. To preserve preprocessor output and assembler output you can enable /P and /Fa through the IDE.
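Combining those on the command line might look like this (main.cpp is just a placeholder file name; the output names are cl's defaults):
Preprocess to a file (main.i):
CL /P main.cpp
Compile without linking and also write an assembly listing (main.obj and main.asm):
CL /c /Fa main.cpp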
gcc and CL are different, but I wouldn't say that MSVC lacks "a hell of a lot" of things, certainly not the outputs that you are looking for.

For the equivalent of -E, cl.exe has /P (it doesn't "stop after preprocessing stage" but it outputs the preprocessor output to a file, which is largely the same thing).
For -S, it's a little murkier, since the "compilation" and "assembling" steps happen in multiple places depending on what other options you have specified (for example, if you have whole program optimization turned on, then machine code is not generated until the link stage).
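To illustrate the link-time case: with whole program optimization, compilation only produces an intermediate form and the machine code is generated during linking, so there is no point at which a -S-style assembly stop would apply. A sketch using the standard switches (file names are placeholders):
cl /c /GL main.cpp
link /LTCG main.obj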
For -v, Visual C++ is not the same as GCC. It executes all stages of compilation directly in cl.exe (and link.exe) so there are no "commands executed" to display. Similarly for -save-temps: because everything happens inside cl.exe and link.exe directly, the only "temporary" files are the .obj files that cl.exe produces and they're always saved anyway.
At the end of the day, though, GCC is an open source project. That means anybody with an itch to scratch can add whatever command-line options they like with relatively little resistance. For Visual C++, a commercial closed-source product, every option needs to have a business case, design meetings, test plans and so on. Every new feature starts with minus 100 points.

Both compilers have a plethora of options for modifying... everything. I suspect that any option not present in either is an option for something not worth doing in the first place. Most "normal" users don't find a use for most of those options anyway.
If you're looking purely at the number of available options as a measure of "power" or "flexibility" then you'll probably find gcc to be the winner, simply because gcc handles many platforms other than Windows and has specific options for many of those platforms that you obviously won't find in MSVC. gcc (well, the gcc toolchain) also compiles a whole lot of languages beyond C and C++; I recently used it for Objective-C, for example.
EDIT: I'm with Dean in questioning the validity of your question. Yes, MSVC (cl) has options for the equivalent of many of gcc's options, but no, the number of options doesn't really mean much.
In short: Unless you're doing something very special, you'll find MSVC easily "powerful enough" on the Windows platform that you will likely not be missing any gcc options.

Related

Are there any downsides to compiling with -g flag?

GDB documentation tells me that in order to compile for debugging, I need to ask my compiler to generate debugging symbols. This is done by specifying a '-g' flag.
Furthermore, the GDB documentation recommends that I always compile with the '-g' flag. This sounds good, and I'd like to do that.
But first, I'd like to find out about downsides. Are there any penalties involved with compiling-for-debugging in production code?
I am mostly interested in:
GCC as the compiler of choice
Red Hat Linux as the target OS
C and C++ languages
(Although information about other environments is welcome as well)
Many thanks!
If you use -g (which on recent GCC or Clang can be used with optimization flags like -O2):
compilation time is slower (and linking will use a lot more memory)
the executable is a bigger file (see elf(5) and use readelf(1)...)
the executable carries a lot of information about your source code.
you can use GDB easily
some interesting libraries, like Ian Taylor's libbacktrace, require DWARF information (e.g. -g)
If you don't use -g it is harder (but still possible) to use the GDB debugger.
So if you transmit the binary executable to a partner who should not be able to understand how your source code was written, you need to avoid -g.
See also the strip(1) and strace(1) commands.
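A quick way to see the effect on a Linux machine (a sketch; file names are placeholders):
g++ -g -O2 main.cpp -o main
readelf -S main | grep debug    # the .debug_* sections carry the DWARF data
strip main                      # removes symbol and debug sections, shrinking the file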
Notice that using the -g flag for debugging information also applies to OCaml and Rust.
PS. Recent GCC (e.g. GCC 10 or GCC 11 in 2021) accepts several debug flags. With -g3 your executable carries more debug information (e.g. descriptions of C++ macros and their expansion) than with -g or -g1. Of course, compilation time increases, and so does executable size. In principle, your GCC plugin (perhaps Bismon in 2021, or those inside the source code of the Linux kernel) could add even more debug information. In practice, you won't do that unless you can improve your debugger. However, a GCC plugin (or some #pragmas) can remove some debug information (e.g. remove debug information for a selected set of functions).
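For instance, with -g3 GDB is able to inspect and expand preprocessor macros, which it cannot do with plain -g (a minimal sketch; the macro name is just an example):
g++ -g3 -O0 main.cpp -o main
gdb ./main
(gdb) info macro MY_MACRO
(gdb) macro expand MY_MACRO(42)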
Generally, adding debug information increases the size of the binary files (or creates extra files for the debug information). That's nowadays usually not a problem, unless you're distributing it over slow networks. And of course this debug information may help others in analyzing your code, if they want to do that. Typically, the -g flag is used together with -O0 (the default), which disables compiler optimization and generates code that is as close as possible to the source, so that debugging is easier. While you can use debug information together with optimizations enabled, this is really tricky, because variables may not exist, or the sequence of instructions may be different than in the source. This is generally only done if an error needs to be analyzed that only happens after the optimizations are enabled. Of course, the downside of -O0 is poorer performance.
So as a conclusion: Typically one uses -g -O0 during development, and for distribution or production code just -O3.

What are some examples of non-determinism in the C++ compiler?

I'm looking for examples of code that triggers non-determinism in GCC or Clang's compilation process.
One prominent example is the usage of the __DATE__ macro.
GCC and Clang have a plethora of compiler flags to control the outcome of non-deterministic actions within the compiler, e.g. -frandom-seed and -fno-guess-branch-probability.
Are there any small examples that are affected by these flags?
To be more precise:
$ c++ main.cpp -o main && shasum main
aabbccddee
$ c++ main.cpp -o main && shasum main
eeddccbbaa
I'm looking for macro-free code examples where multiple runs of the compiler lead to different outputs, but can be fixed by e.g. -frandom-seed
EDIT:
related: from the gcc docs:
-fno-guess-branch-probability:
Sometimes gcc will opt to use a randomized model to guess branch probabilities,
when none are available from either profiling feedback (-fprofile-arcs)
or __builtin_expect.
This means that different runs of the compiler on the same program
may produce different object code.
The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.
While old, this question is interesting for reproducible builds.
As you've stated, there are multiple sources of non-determinism when compiling C/C++ source.
Non-determinism in the preprocessor
The preprocessor implements numerous predefined macros whose values can change between runs. There are the obvious __DATE__ and __TIME__, but also the less obvious __cplusplus, __STDC_VERSION__ or __GNUC_PATCHLEVEL__, which can change when the OS updates.
There's also __FILE__, which will contain the path of the build environment (different from machine to machine).
Please note that for the date and time macros, GCC observes the environment variable SOURCE_DATE_EPOCH to override them. Other compilers might have some other behavior.
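For example, assuming main.cpp uses __DATE__ or __TIME__ and a GCC recent enough to honor SOURCE_DATE_EPOCH, pinning the value makes repeated builds hash identically:
$ SOURCE_DATE_EPOCH=0 c++ main.cpp -o main && shasum main
$ SOURCE_DATE_EPOCH=0 c++ main.cpp -o main && shasum main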
Non-determinism in the compiler
The compiler might have different optimization strategies based on non-deterministic approaches. You've cited one in GCC, but others might exist.
For MSVC, you might be interested in the /Brepro compiler flag.
You'll have to RTFM for your compiler to know more.
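As a sketch, pinning GCC's internal randomness (and using the MSVC flag mentioned above) would look something like this; -frandom-seed accepts an arbitrary string:
g++ -O2 -frandom-seed=myseed -c main.cpp
cl /Brepro /c main.cpp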
Non-determinism in the linker
On some platforms, the linked object and/or library will contain a timestamp; macOS is one of them. So for the same set of .o files, you'll get a different resulting executable.
Also, if you use Link Time Optimization, many compilers will create different versions of the .o files, named randomly. Again for GCC, you'll use -frandom-seed=31415 to "fix" this randomness, but YMMV.
Non-determinism in the build-process
Sometimes repositories contain additional operations that are performed outside of the compilation stage, like generating header files based on some configuration flags (or other steps).
In that case, these project-specific operations might not be deterministic either.
For a good overview of the deterministic builds, please refer to this post

Use clang-tidy on CUDA source files

Several static analysis tools designed for C/C++ exist, but they are not particularly useful for testing CUDA sources.
Since clang version 6 is able to compile CUDA, I wanted to check what my options are for using clang-tidy, which does not seem to have an option for switching architectures.
Is there a way to make it work? For example, a compile-time switch for turning on the CUDA parser, an extension in the form of a custom check, or is it maybe a planned feature?
One of the problems with clang-based tools is that they do not parse the files in exactly the same way clang does.
The first problem is that, unlike C/C++ compilation, CUDA compilation compiles the source multiple times. By default clang creates multiple compilation jobs when you give it a CUDA file, and that trips up many tools that expect only one compilation. In order to work around that, you need to pass the --cuda-host-only option to clang-tidy.
You may also need to pass --cuda-path=/path/to/your/CUDA/install/root so clang can find CUDA headers.
Another problem you may run into is related to include paths. Clang-derived tools do not have the same default include paths that clang itself uses, and that occasionally causes weird problems. At the very least clang-tidy needs to find __clang_cuda_runtime_wrapper.h, which is installed along with clang. If you run clang-tidy your-file.c -- -v it will print clang's arguments and the include search paths it uses. Compare that to what clang -x c /dev/null -fsyntax-only -v prints. You may need to give clang-tidy extra include paths to match those used by clang itself. Note that you should not explicitly add the path to the CUDA includes here. It will be added in the right place automatically by --cuda-path=....
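In other words, run both of these and diff the "search starts here" lists that they print (the file name is a placeholder):
clang-tidy your-file.cu -- --cuda-host-only -v
clang -x c /dev/null -fsyntax-only -v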
Once you have it all in place, clang-tidy should work on CUDA files.
Something like this:
clang-tidy your-file.cu -- --cuda-host-only --cuda-path=... -isystem /clang/includes -isystem /extra/system/includes

Run time Debugging

We have recently downloaded, installed and compiled the gcc-3.0.4 code. The gcc compiler has built successfully and we were able to compile some sample test cpp files. I would like to know how we can modify the gcc source code so that we add additional run-time debugging statements, so that a binary compiled by my gcc prints the statement below in a log file:
filename.cpp::FunctionName#linenumber-statement
or any additional information that I can insert via this tailored compiler code.
Have you looked at the macros __FILE__ and __LINE__? They do that for you without modifying the compiler. See here for more information.
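A minimal sketch of that macro-based approach, producing roughly the filename::FunctionName#linenumber-statement format asked for (the TRACE name and logging to std::cerr are just illustrative choices):
#include <iostream>

// Hypothetical tracing macro: logs file, enclosing function, line number and a message.
#define TRACE(msg) \
    std::cerr << __FILE__ << "::" << __func__ << "#" << __LINE__ \
              << "-" << (msg) << std::endl

void compute()
{
    TRACE("entering compute");
    // ... actual work ...
}

int main()
{
    TRACE("starting");
    compute();
}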
My general understanding of the GCC architecture is that it is divided into a front-end (parser), a middle (optimization in a special intermediate language), and a back-end (generating platform-dependent output). So, for your purposes you would have to look into the back-end part.
Don't do that with an ancient compiler like GCC 3.0.
With a recent GCC 4.9 (at the end of 2014 or January 2015) you could customize the compiler, e.g. with a MELT extension, which would add a new optimization pass working on Gimple. That pass would insert a Gimple statement (hopefully a call to some debugging print) before each Gimple call statement.
This is non-trivial work (perhaps weeks of it). You would need to understand all of Gimple.

What is the difference between bcc32 and bcc32ide in Borland?

We find a slight difference in how bcc32 and bcc32ide behave on a file.
bcc32ide works and bcc32 crashes.
Would there be any detrimental side effects to switching to bcc32ide for our automated builds?
Also, what is the difference between these two compilers (other than one crashes and one doesn't)?
See Re: BCB 6 Compiler slowness and how to speed it up dramatically for an explanation.
Quote:
bcc32ide.exe is a bcc32.exe, but it uses the IDE compiler. For example, some compiler assertions are not raised with the IDE compiler, but are with bcc32.exe. So replacing bcc32.exe in makefiles with bcc32ide.exe will allow you to compile projects that compile within the IDE, but not at the console.
bcc32ide.exe also adds some additional parameters:
-automake Compile only modified files
-verbose Output current compiled line number during compilation
-pch=filename_pch.h Use advanced precompiled headers
-FIfilename Force include (multiple, usage like -Iincludedir)
And it supports multi-process shared access to the advanced PCH file
(required for mtbcc32.exe).
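Assuming the switches behave as described in the quote above, switching an automated build over might look something like this (the file names are hypothetical):
bcc32ide.exe -automake -verbose -pch=myproject_pch.h myunit.cpp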