What are some examples of non-determinism in the C++ compiler?

I'm looking for examples of code that triggers non-determinism in GCC or Clang's compilation process.
One prominent example is the usage of the __DATE__ macro.
GCC and Clang have a plethora of compiler flags to control the outcome of non-deterministic actions within the compiler, e.g. -frandom-seed and -fno-guess-branch-probability.
Are there any small examples that are affected by these flags?
To be more precise:
$ c++ main.cpp -o main && shasum main
aabbccddee
$ c++ main.cpp -o main && shasum main
eeddccbbaa
I'm looking for macro-free code examples where multiple runs of the compiler lead to different outputs, but which can be fixed by e.g. -frandom-seed.
EDIT:
Related, from the gcc docs:
-fno-guess-branch-probability:
Sometimes gcc will opt to use a randomized model to guess branch probabilities,
when none are available from either profiling feedback (-fprofile-arcs)
or __builtin_expect.
This means that different runs of the compiler on the same program
may produce different object code.
The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.

While old, this question is interesting for reproducible builds.
As you've stated, there are multiple sources of non-determinism when compiling C/C++ source.
Non-determinism in preprocessor
The preprocessor exposes numerous predefined macros whose values can change between runs. There are the obvious __DATE__ and __TIME__, but also the less obvious __cplusplus, __STDC_VERSION__, or __GNUC_PATCHLEVEL__, which can change when the OS updates the compiler.
There's also __FILE__, which will contain the path of the build environment (different from machine to machine).
Please note that, for the date and time macros, GCC observes the environment variable SOURCE_DATE_EPOCH to override them. Other compilers might behave differently.
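As a minimal sketch of both points, assuming GCC (the file name is just an example, and other compilers may ignore SOURCE_DATE_EPOCH):
// repro.cpp -- embeds the compilation timestamp into the binary
#include <cstdio>
int main() {
    // __DATE__/__TIME__ expand to the moment of compilation, so two
    // compiler runs a second apart produce different object code.
    std::printf("built %s %s\n", __DATE__, __TIME__);
}
$ g++ repro.cpp -o a && sleep 1 && g++ repro.cpp -o b && cmp a b  # binaries differ
$ export SOURCE_DATE_EPOCH=0  # pin the timestamp; both builds now match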
Non-determinism in the compiler
The compiler might use different optimization strategies based on a non-deterministic approach. You've cited one in GCC, but others might exist.
For MSVC, you might be interested in the /Brepro compiler flag.
You'll have to RTFM for your compiler to know more.
Non-determinism in the linker
On some platforms, the linked object and/or library will contain a timestamp; macOS is one of them. So for the same set of .o files, you'll get a different resulting executable.
Also, if you use Link Time Optimization, many compilers will create multiple randomly named intermediate .o files. Again, for GCC, you can use -frandom-seed=31415 to "fix" this randomness, but YMMV.
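A sketch of what that looks like in practice (the seed values are arbitrary; note that the GCC docs recommend a different seed per file, e.g. a hash of the file name):
$ g++ -flto -frandom-seed=31415 -c a.cpp
$ g++ -flto -frandom-seed=31416 -c b.cpp
$ g++ -flto a.o b.o -o app
With the seeds pinned, the otherwise random internal symbol names stay stable across compiler runs.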
Non-determinism in the build-process
Sometimes repositories contain additional operations that are performed outside of the compilation stage, like generating header files based on some configuration flags (or other steps).
In that case, these project-specific operations might not be deterministic either.
For a good overview of deterministic builds, please refer to this post.

Related

Is the preprocessor, assembler and linker a part of the compiler?

So I've been taught, as many of us have, that the compiler is a program that translates your human-readable code into machine-readable code. The more you look into it, however, you learn that the "compilation process" is actually broken up into 4 different parts: the preprocessor, compiler, assembler and linker. I think not understanding where all these parts fit into place has confused me a bit.
Are all the steps described in a typical compilation process part of the compiler program?
Or are things like the assembler and linker separate programs built into IDEs along with compilers to generate code?
Does it depend on the compiler or programming language?
If separate, is the compiler responsible for just the assembly code creation as well as optimizing the assembly code?
Are all the steps described in a typical compilation process part of the compiler program?
All the steps are required by the translation process. The process includes preprocessing, compilation, assembly/machine-code generation, and producing an executable (i.e. linking).
A translator program, a.k.a. compiler, does not need to put all steps into one compiler executable.
For example, a program may be composed of more than one translation unit, so the units can be compiled separately and then the pieces linked together. Often, separating compilation from linking is beneficial.
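A minimal sketch of that split, assuming GCC (file names are just examples):
$ g++ -c a.cpp          # translation unit -> a.o
$ g++ -c b.cpp          # translation unit -> b.o
$ g++ a.o b.o -o app    # link the pieces into an executable
Only the affected compile step and the link step need to be repeated when a single .cpp changes, which is exactly why the split pays off.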
Or are things like the assembler and linker separate programs built into IDE's along with compilers to generate code?
Some IDEs, like Eclipse, do not have built-in compilers or linkers. The Eclipse IDE is designed to work with various compilers and linkers, and it needs to be configured as to what tools it will use when building a program.
Does it depend on the compiler or programming language?
IDEs are usually independent of compilers and languages. The NetBeans IDE can be used with Java or C++ (similarly with Eclipse).
Some IDEs may have features that work better with one language than another, such as keyword highlighting.
If separate, is the compiler responsible for just the assembly code creation as well as optimizing the assembly code?
Assembly language creation is not a required part of the process.
Typically, compilers have an option you can supply in order to print an assembly language listing.
Some compilers emit executable code without going through the generation of assembly language.
The meaning of the term “compiler” depends on the context.
For the beginner, the compiler is the tool you use to create an executable program from your source code.
Delving a little deeper, one learns that with practical toolchains there is at least a division into compiler and linker.
And while the above two views have been based solely on tool usage, when one learns more about C++ one appreciates the division into preprocessing and compilation "proper": a preprocessor, a compiler, and a linker, where the preprocessor produces text, the compiler produces object code, and the linker produces executables or libraries.
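Those stages can be made visible with GCC's driver flags; a sketch (the driver normally runs all of them in one invocation):
$ g++ -E main.cpp -o main.ii   # preprocess: textual output
$ g++ -S main.ii -o main.s     # compile proper: assembly
$ g++ -c main.s -o main.o      # assemble: object code
$ g++ main.o -o main           # link: executable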
Delving even deeper into things, one may start to differentiate between different internal phases of the compiler (in the trio above). Some compilers utilize an assembler, some generate code directly from an abstract syntax tree, and some go as far as using a whole C compiler at the end, just translating the language-X source code to C source code. E.g. Eiffel compilers used to do this, and probably still do. And C++ started out that way, as a front end to a C compiler.
And especially with the idea of just translating to C, one may call that part the real compiler, with the C compiler at the end as just one of the tools invoked by the compiler proper.
So, it depends very much on the context.

Is there a set of standard compiler options?

I am making a project using qmake and I want it to be easy for many users to compile. So far I have been developing only for Linux and only with the gcc compiler. I would like my project to be compilable on other platforms too.
So far I passed the compiler options (which I found in the gcc documentation) to qmake like this:
QMAKE_CXXFLAGS += -std=c++14 \
                  -ffloat-store \
                  -O3
But then I realized that these options may not be valid on other compilers and tried to find equivalent options for other popular compilers, such as clang or Intel. To my surprise I found out that:
The optimization options -O0, -O1, -O2, -O3 are common to all three compilers.
The -std=c++11 and -std=c++14 options are common to gcc and clang.
As far as I know, -ffloat-store and some other options are present only in gcc.
I wonder, is there some set of options, that is either formally or informally standard?
POSIX defines something about the c99 command (but AFAIK nothing about C++).
However, the qmake utility will usually be able to find out (or at least to expect) what the C++ compiler is and how to invoke it. Notice that it is generating a Makefile.
Outside of Qt you might consider cmake or autoconf. They both generate Makefiles.
See also this answer (on Programmers).
No.
The C++ standard doesn't cover any compiler option. This is something that can vary wildly between different implementations (or even different versions of the same implementation).
Off the top of my head, there's almost nothing in common between the major compilers.
The truth is, any non-trivial project requires some fine-tuning, both in-code and in the build process, when you're aiming for multiple platforms. Your best bet, in my experience (and this is largely opinion-based), is to craft the build process for each compiler to be as simple as possible, and to fix most of the 'quirks' via pragmas in a platform/compiler-specific include file that you "auto-include" everywhere.
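A hypothetical sketch of such an auto-included header (FORCE_INLINE is a made-up name; _MSC_VER, __GNUC__, and __clang__ are the usual detection macros):
// compat.h -- hypothetical per-compiler fix-ups
#if defined(_MSC_VER)
  #define FORCE_INLINE __forceinline
#elif defined(__GNUC__) || defined(__clang__)
  #define FORCE_INLINE inline __attribute__((always_inline))
#else
  #define FORCE_INLINE inline
#endif
It can then be pulled in everywhere without touching the sources via the force-include flags: -include compat.h on gcc/clang, /FI compat.h on MSVC.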

Intermediate code from C++

I want to compile a C++ program to an intermediate code. Then, I want to compile the intermediate code for the current processor with all of its resources.
The first step is to compile the C++ program with optimizations (-O2), run the linker and do most of the compilation procedure. This step must be independent of operating system and architecture.
The second step is to compile the result of the first step, without the original source code, for the operating system and processor of the current computer, with optimizations and special instructions of the processor (-march=native). The second step should be fast and with minimal software requirements.
Can I do it? How to do it?
Edit:
I want to do it because I want to distribute a platform-independent program that can use all resources of the processor, without the original source code, instead of distributing a compilation for each platform and operating system. It would be good if the second step were fast and easy.
Processors of the same architecture may have different features. X86 processors may have SSE1, SSE2 or others, and they can be 32- or 64-bit. If I compile for a generic X86, the build will lack SSE optimizations. After many years, processors will have new features, and the program will need to be compiled for new processors.
Just a suggestion - google clang and LLVM.
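For instance, a minimal sketch of that two-step flow with Clang (with the caveat that LLVM bitcode still bakes in a target triple and ABI details, so it is not truly platform-independent):
$ clang++ -O2 -emit-llvm -c main.cpp -o main.bc   # step 1: optimized bitcode
$ clang++ -O3 -march=native main.bc -o main       # step 2: native code for this CPU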
How much do you know about compilers? You seem to treat "-O2" as some magical flag.
For instance, register assignment is a typical optimization. You definitely need to know how many registers are available. There is no point in assigning foo to register 16, only to discover in phase 2 that you're targeting an x86.
And those architecture-dependent optimizations can be quite complex. Inlining depends critically on call cost, and that in turn depends on architecture.
Once you get to "processor-specific" optimizations, things get really tricky. It's really tough for a platform-specific compiler to be truly "generic" in its generation of object or "intermediate" code at an appropriate "level": unless it's something like IL code (like C#'s IL, or Java bytecode), it's really tough for a given compiler to know "where to stop", since optimizations occur all over the place, at different levels of the compilation, when target-platform knowledge exists.
Another thought: What about compiling to "preprocessed" source code, typically with a "*.i" extension, and then compile in a distributed manner on different architectures?
For example, most (if not all) C and C++ compilers support something like:
cc /P MyFile.cpp
gcc -E MyFile.cpp
...each generates MyFile.i, which is the preprocessed file. Now that the file has included ALL the headers and other #defines, you can compile that *.i file to the target object file (or executable) after distributing it to other systems. (You might need to get clever if your preprocessor macros are specific to the target platform, but it should be quite straightforward with your build system, which should generate the command line to do this pre-processing.)
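A sketch of that distribution flow with g++ (for C++ the conventional extension is .ii, which g++ recognizes as already-preprocessed C++; .i is treated as preprocessed C):
$ g++ -E MyFile.cpp -o MyFile.ii              # on the configuration machine
$ g++ -O2 -march=native MyFile.ii -o MyFile   # on each target machine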
This is the approach used by distcc to preprocess the file locally, so remote "build farms" need not have any headers or other packages installed. (You are guaranteed to get the same build product, no matter how the machines in the build farm are configured.)
Thus, it would similarly have the effect of centralizing the "configuration/pre-processing" for a single machine, but provide cross-compiling, platform-specific compiling, or build-farm support in a distributed manner.
FYI -- I really like the distcc concept, but the last update for that particular project was in 2008. So, I'd be interested in other similar tools/products if you find them. (In the meantime, I'm writing a similar tool.)

Is Visual C++ as powerful as gcc?

My definition of powerful is ability to customize.
I'm familiar with gcc, and I wanted to try MSVC. So I was searching for gcc-equivalent options in msvc, but I'm unable to find many of them.
Controlling the kind of output
Stop after the preprocessing stage; do not run the compiler proper.
gcc: -E
msvc: ???
Stop after the stage of compilation proper; do not assemble.
gcc: -S
msvc: ???
Compile or assemble the source files, but do not link.
gcc: -c
msvc:/c
Useful for debugging
Print (on standard error output) the commands executed to run the stages of compilation.
gcc: -v
msvc: ???
Store the usual “temporary” intermediate files permanently;
gcc: -save-temps
msvc: ???
Is there some kind of gcc <--> msvc compiler option mapping guide?
gcc's Option Summary lists more options in each section than msvc's Compiler Options Listed by Category. There are a hell of a lot of important and interesting things missing in msvc. Am I missing something, or is msvc really less powerful than gcc?
MSVC is an IDE; gcc is just a compiler. CL (the MSVC compiler) can do most of the steps that you are describing from gcc's point of view. CL /? gives help.
E.g.
Pre-process to stdout:
CL /E
Compile without linking:
CL /c
Generate assembly (unlike gcc, though, this doesn't prevent compiling):
CL /Fa
CL is really just a compiler. If you want to see what commands the IDE generates for compiling and linking, the easiest thing is to look at the command line section of the property pages for an item in the IDE. CL doesn't call a separate preprocessor or assembler, though, so there are no separate commands to see.
For -save-temps: the IDE performs separate compiling and linking, so object files are preserved anyway. To preserve preprocessor output and assembler output, you can enable /P and /Fa through the IDE.
gcc and CL are different but I wouldn't say that the MSVC lacks "a hell lot" of things, certainly not the outputs that you are looking for.
For the equivalent of -E, cl.exe has /P (it doesn't "stop after preprocessing stage" but it outputs the preprocessor output to a file, which is largely the same thing).
For -S, it's a little murkier, since the "compilation" and "assembling" steps happen in multiple places depending on what other options you have specified (for example, if you have whole program optimization turned on, then machine code is not generated until the link stage).
For -v, Visual C++ is not the same as GCC. It executes all stages of compilation directly in cl.exe (and link.exe) so there are no "commands executed" to display. Similarly for -save-temps: because everything happens inside cl.exe and link.exe directly, the only "temporary" files are the .obj files that cl.exe produces and they're always saved anyway.
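A rough side-by-side of the closest equivalents discussed above (a sketch; note that cl's /P writes the .i file next to the source rather than stopping at stdout, and /Fa emits an .asm listing in addition to compiling):
$ g++ -E main.cpp -o main.i    # gcc: preprocess only
> cl /P main.cpp               # cl: preprocess to main.i
$ g++ -S main.cpp              # gcc: stop at assembly (main.s)
> cl /c /Fa main.cpp           # cl: compile without linking, keep main.asm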
At the end of the day, though, GCC is an open source project. That means anybody with an itch to scratch can add whatever command-line options they like with relatively little resistance. For Visual C++, a commercial closed-source product, every option needs to have a business case, design meetings, test plans and so on. Every new feature starts with minus 100 points.
Both compilers have a plethora of options for modifying... everything. I suspect that any option not present in either is an option for something not worth doing in the first place. Most "normal" users don't find a use for most of those options anyway.
If you're looking purely at the number of available options as a measure of "power" or "flexibility" then you'll probably find gcc to be the winner, simply because gcc handles many platforms other than Windows and has specific options for many of those platforms that you obviously won't find in MSVC. gcc (well, the gcc toolchain) also compiles a whole lot of languages beyond C and C++; I recently used it for Objective-C, for example.
EDIT: I'm with Dean in questioning the validity of your question. Yes, MSVC (cl) has options for the equivalent of many of gcc's options, but no, the number of options doesn't really mean much.
In short: Unless you're doing something very special, you'll find MSVC easily "powerful enough" on the Windows platform that you will likely not be missing any gcc options.

Optimization and flags for making a static library with g++

I am just starting with the g++ compiler on Linux and have some questions on the compiler flags. Here they are.
Optimizations
I read about the optimization flags -O1, -O2 and -O3 in the g++ manual page, but I didn't understand when to use these flags. What optimization level do you usually use? The g++ manual says the following for -O2:
Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or function inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.
If it is not doing inlining and loop unrolling, how are the said performance benefits achieved, and is this option recommended?
Static Library
How do I create a static library using g++? In Visual Studio, I can choose a class library project and it will be compiled into "lib" file. What is the equivalent in g++?
The rule of thumb:
When you need to debug, use -O0 (and -g to generate debugging symbols.)
When you are preparing to ship it, use -O2.
When you use gentoo, use -O3...!
When you need to put it on an embedded system, use -Os (optimize for size, not for efficiency.)
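For illustration, the two invocations those rules usually boil down to (a sketch; -g and -DNDEBUG are just the common companions of each level):
$ g++ -O0 -g main.cpp -o main_debug   # development: debuggable, with symbols
$ g++ -O2 -DNDEBUG main.cpp -o main   # release: optimized, asserts disabled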
The gcc manual lists all the options implied by every optimization level. At -O2, you get things like constant folding, branch prediction and co, which can significantly change the speed of your application, depending on your code. The exact options are version-dependent, but they are documented in great detail.
To build a static library, you use ar as follows:
ar rc libfoo.a foo.o foo2.o ....
ranlib libfoo.a
ranlib is not always necessary, but there is no reason not to use it.
Regarding when to use what optimization option - there is no single correct answer.
Certain optimization levels may, at times, decrease performance. It depends on the kind of code you are writing and the execution pattern it has, and depends on the specific CPU you are running on.
(To give a simple canonical example - the compiler may decide to use an optimization that makes your code slightly larger than before. This may cause a certain part of the code to no longer fit into the instruction cache, at which point many more accesses to memory would be required - in a loop, for example).
It is best to measure and optimize for whatever you need. Try, measure and decide.
One important rule of thumb: the more optimizations are performed on your code, the harder it is to debug using a debugger (or to read its disassembly), because the C/C++ source view gets further away from the generated binary. It is a good rule of thumb to work with fewer optimizations when developing/debugging, for this reason.
There are many optimizations that a compiler can perform, other than loop unrolling and inlining. Loop unrolling and inlining are specifically mentioned there since, although they make the code faster, they also make it larger.
To make a static library, use 'g++ -c' to generate the .o files and 'ar' to archive them into a library.
Regarding the static library question, the answer given by David Cournapeau is correct, but you can alternatively use the 's' flag with 'ar' rather than running ranlib on your static library file. The 'ar' manual page states that
Running ar s on an archive is equivalent to running ranlib on it.
Whichever method you use is just a matter of personal preference.
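Putting it all together, a minimal end-to-end sketch (file names are just examples; the s in rcs builds the index, replacing the separate ranlib step):
$ g++ -c foo.cpp foo2.cpp           # compile to foo.o, foo2.o
$ ar rcs libfoo.a foo.o foo2.o      # archive and index in one step
$ g++ main.cpp -L. -lfoo -o main    # link against the static library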