According to this post, the Google C++ Testing Framework considers "make install" to be bad practice.
http://groups.google.com/group/googletestframework/browse_thread/thread/668eff1cebf5309d
The reason given is that an installed (precompiled) copy of this library can end up violating the "One Definition Rule":
http://en.wikipedia.org/wiki/One_Definition_Rule
Somewhere further in the thread it says: "If you pass different -DGTEST_HAS_FOO=1 flags to different translation units, you'll violate the ODR. Or sometimes people use -D to select which malloc library to use (debug vs release), and you have to use the same malloc library across the board."
My questions:
What exactly is this project doing wrong?
What can we learn from this? How can we write code more defensively to prevent violating the ODR?
The straight answer to the question would be: do not write code that depends on compiler parameters. In this case, the whole discussion stems from the fact that the code is different based on compiler flags (most probably by means of #ifdef or other preprocessor directives). That in turn means that while the codebase is the same, by changing compiler flags the processed code will be different, and a binary compiled with one set of flags will not be compatible with a binary compiled with a different set of flags.
Depending on the actual project, it might be impossible to decouple the code from compiler flags, and you will have to live with it, but I would recommend avoiding, as much as possible, code that can be configured from the compiler command line, in the same way that you should avoid DEBUG-only code with side effects. And when you cannot, document the effect that different compiler flags have.
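To make the failure mode concrete, consider a minimal sketch (the flag and type names here are invented for illustration): a shared header whose contents depend on a -D flag. Two translation units compiled with different flags see two different definitions of the same class, an ODR violation that neither the compiler nor the linker will diagnose.

// config.h -- a shared header whose contents depend on a compiler flag
struct Settings {
    int id;
#ifdef WITH_TRACING         // hypothetical flag, passed as -DWITH_TRACING
    const char* trace_tag;  // present only in "tracing" builds
#endif
};

// a.cpp is compiled with:  g++ -DWITH_TRACING -c a.cpp
// b.cpp is compiled with:  g++                -c b.cpp
// Both include config.h, so the final program contains two incompatible
// definitions of Settings (different size, different member offsets); any
// Settings object passed between the two translation units corrupts memory.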
Related
I'm trying to emulate C asserts in Fortran in order to enforce pre- and post-conditions of all my procedures. This way, I get to provide the user with more detailed information about run-time errors than I could reasonably be expected to maintain otherwise.
In order to implement this, I used the predefined preprocessor macros __FILE__ and __LINE__ and defined an assert macro which expands to a Fortran subroutine call (in essence, the classic C idiom; see the sketch below). Rather than try to describe it in full here, I made a git repo with a little bit of example code in it. If you build it with
make test
./test
the function hangs, because you called a function that expects a positive argument with a negative one. However, if you build with
make test DEBUG=1
./test
the error is caught by an assertion.
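In essence, the macro mimics the classic C idiom, roughly like the following sketch (illustrative names, not the repo's actual code):

#include <cstdio>
#include <cstdlib>

/* Reports the failed condition and its location, then aborts. */
static void assert_fail(const char* cond, const char* file, int line)
{
    std::fprintf(stderr, "%s:%d: assertion failed: %s\n", file, line, cond);
    std::abort();
}

/* Compiled out entirely unless DEBUG is defined, mirroring `make test DEBUG=1`. */
#ifdef DEBUG
#define ASSERT(cond) \
    do { if (!(cond)) assert_fail(#cond, __FILE__, __LINE__); } while (0)
#else
#define ASSERT(cond) ((void)0)
#endif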
This works with both gfortran and the Intel Fortran compiler. I don't have access to other Fortran compilers. Can I reasonably expect other compilers to do the necessary source pre-processing if the file extension is .F90? Or should I be relying on the -cpp flag? What's the most portable way to do this? Should I even do this at all?
It is reasonable to expect that a suitable "C-like" preprocessor is available, though undoubtedly there will be exceptions for some compilers or tools.
The definition of portability can depend on your perspective, but given that a large number of systems have case-insensitive file systems, it is not reasonable to rely on file-name case alone to specify that the preprocessor needs to be run. Most Fortran-related build systems will have some way of making that specification explicit.
Whether this is a good idea is a bit more subjective. Perhaps the impact is only nominal, but requiring the preprocessor still represents a reduction in portability and an increase in build complexity. Depending on the compiler, use of the preprocessor may also hinder things like standard-conformance diagnostics.
Consequently, my preference for relatively simple use cases like this is to have the assertion coded as normal Fortran source - an if statement testing a named constant from a debug module or similar, invoking [ERROR] STOP (not exit) with a descriptive message if the assertion expression fails.
USE DebuggingFlags
IF (debug_flag) THEN
  IF (x <= 0) ERROR STOP 'negative or zero x in sqrrt!'
END IF
This won't give you file and line information, but as long as you are somewhat selective with the STOP message the relevant source shouldn't be too hard to locate.
"Release" builds are made with the debugging flag constant defined as false (or your chosen equivalent) and with any sort of reasonable compiler optimisation active the object code associated with the assertion should be identified and eliminated as dead.
But there are pros and cons.
Fortran 95+ has conditional compilation (COCO) defined in a standard, but only a few compilers support it. The de facto standard (since Fortran 90, I believe) is still cpp and fpp, which most compilers support. The two are highly compatible, but not 100%. Given that, relying on a cpp/fpp-style preprocessor should be safe for most cases.
Adding to the answer of @LeleDumbo:
As far as I can tell, the Fortran 2008 standard does not specify any form of preprocessing. This also means that there is no standard way of specifying how to invoke the preprocessor. However, it is common practice to use the .F90 extension to indicate the need for preprocessing.
Concerning COCO: the third part of the Fortran standard, ISO 1539-3, which was to specify conditional compilation, was withdrawn.
I want for my program to use C++11 standard and O3 optimization.
Normally I would just use the compilation flags -std=c++11 and -O3, but I have to send the source file to a remote server where it is compiled automatically. So basically the only thing that comes to mind is to use some neat macro. I am pretty sure it is possible to request optimization that way, because I saw it somewhere, but I cannot remember what it looked like.
What you're asking for is outside the C++ spec.
So if it were supported at all--anywhere--it would be supported via pragmas. I don't know of any pragma to say "compile as C++11", so I'd call that a lost cause. There do seem to be some optimization pragmas out there:
http://gcc.gnu.org/onlinedocs/gcc/Function-Specific-Option-Pragmas.html
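For instance, GCC accepts something along these lines (GCC-specific and entirely non-standard; a minimal sketch):

#pragma GCC push_options
#pragma GCC optimize ("O3")  // request -O3-style optimization for the code below

long sum(const long* v, long n)
{
    long s = 0;
    for (long i = 0; i < n; ++i)
        s += v[i];           // a loop the optimizer may now vectorize
    return s;
}

#pragma GCC pop_options      // restore the command-line settings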
But bear in mind that using a #pragma is entirely outside of the spec. If the compiler hits one, it can do anything it likes and still be standard. That includes launching video games (and this has been done).
So long story short: if this is a requirement for you, then you need to tell whoever is providing your compilation service that you need certain compiler settings for your program.
TL;DR
How do you protect against binary incompatibility caused by compiler-argument typos, where preprocessor directives in shared (possibly templated) headers control conditional compilation differently in different compilation units?
Ex.
g++ ... -DYOUR_NORMAl_FLAG ... -o libA.so
/**Another compilation unit, or even project. **/
g++ ... -DYOUR_NORMA1_FLAG ... -o libB.so
/**Another compilation unit, or even project. **/
g++ ... -DYOUR_NORMAI_FLAG ... main.cpp libA.so //The possibilities!
The Basic Story
Recently, I ran into a strange bug: the symptom was a single SIGSEGV, which always seemed to occur at the same location after recompiling. This led me to believe there was some kind of memory corruption going on, and that the underlying pointer was not a pointer at all, but pointed into some data section.
I'll spare you the long and strenuous journey, which took almost two otherwise perfectly good work days, to track down the problem. Suffice it to say, Valgrind, GDB, nm, readelf, Electric Fence, GCC's stack-smashing protection, and then some more measures/methods/approaches all failed.
In utter devastation, my attention turned to the finest details in the build process, which was analogous to:
Build one small library.
Build one large library, which uses the small one.
Build the test suite of the large library.
The problem occurred only when the large library was used as a static or dynamic library dependency (i.e. the dynamic linker loaded it automatically; no dlopen). In the test case where all the code of the library was simply compiled into the tests, everything worked: this was the most important clue.
The "Solution"
In the end, it turned out to be the simplest thing: a single (!) typo.
Turns out, the compilation flags of the test suite and of the large library differed by a single character: a define controlling the behavior of the small library was misspelled.
Critical morsel of information: the small library had some templates, which were used directly in every case, without explicit instantiation in advance. The contents of one of the templated classes changed when the flag was toggled: some data fields were simply not present when the flag was defined!
The linker noticed nothing of this. (Since the class was templated, the resultant symbols were weak.)
The code used dynamic casts, and the class affected by this problem inherited from the mangled class, so things went south.
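In essence, the failure mode can be reconstructed like this (a hypothetical sketch, not the actual code):

template <typename T>
struct Node {
    T value;
#ifdef YOUR_NORMAL_FLAG  // the other build said -DYOUR_NORMAl_FLAG instead...
    T shadow;            // ...so this field silently vanished there
#endif
};
// Node<int> is instantiated implicitly in both binaries; the resulting symbols
// are weak, so the linker silently keeps just one definition. The two binaries
// disagree on sizeof(Node<int>) and on member offsets, and every class
// inheriting from Node<int> has a mangled layout -- hence the SIGSEGV.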
My question is as follows: how would you protect against this kind of problem? Are there any tools or solutions which address this specific issue?
Future Proofing
I've thought of two things, and believe no protection can be built on the object file level:
1: Save the options implemented as preprocessor symbols in some well-defined place, preferably extracted by a separate build step. Provide a check script which uses this to verify all compiler defines, and the defines in user code, and integrate this check into the build process. Possibly use Levenshtein distance or similar to check for misspellings (see the sketch after this list). Expensive, and the script/solution can get complicated. Possible problem with legitimately similar flags (but why have them?), and additional files must accompany the compiled library code. (Well, maybe with DWARF 2 this is untrue, but let's assume we don't want that.)
2: Centralize build options: cheap, and it leaves a customization option open (think makefile.local), but it makes for monolithic monstrosities and strong couplings between projects.
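A minimal sketch of the checking idea from option 1 (all names hypothetical): compute the edit distance between every define a build actually used and a canonical list, and warn about near-misses.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Classic dynamic-programming edit distance.
static std::size_t levenshtein(const std::string& a, const std::string& b)
{
    std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j)
            cur[j] = std::min({prev[j] + 1,                           // deletion
                               cur[j - 1] + 1,                        // insertion
                               prev[j - 1] + (a[i - 1] != b[j - 1])}); // substitution
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

int main()
{
    const std::vector<std::string> canonical = {"YOUR_NORMAL_FLAG", "YOUR_SPECIAL_FLAG"};
    const std::vector<std::string> used = {"YOUR_NORMA1_FLAG"};  // e.g. scraped from a build log
    for (const auto& u : used)
        for (const auto& c : canonical) {
            const std::size_t d = levenshtein(u, c);
            if (d > 0 && d <= 2)  // suspiciously close: probably a typo
                std::cerr << "warning: '" << u << "' looks like a typo of '" << c << "'\n";
        }
}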
I'd like to go ahead and quench a few likely flame inducing embers possibly flaring up in some readers: "do not use preprocessor symbols" is not an option here.
Conditional compilation does have its place in high-performance code, and doing everything with templates and enable_if-s would needlessly overcomplicate things. While the above situation is usually not desirable, it can arise from the development process.
Please assume you have no control over the situation, assume you have legacy code, assume everything you can to force yourself to avoid side-stepping.
If those won't do, generalize into ABI incompatibility detection, though this might escalate the scope of the question too much for SO.
I'm aware of:
http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
DT_SONAME is not applicable.
Other version schemes therein are not applicable either - they were designed to protect a package which is in itself not faulty.
Mixing C++ ABIs to build against legacy libraries
Static analysis tool to detect ABI breaks in C++
If it matters, don't have a default case.
#ifdef YOUR_NORMAL_FLAG
// some code
#elif defined(YOUR_SPECIAL_FLAG)
// some other code
#else
// in case of a typo, this is a compilation error
#error "No flag specified"
#endif
This may lead to a large list of compiler options if conditional compilation is overused, but there are ways around that, such as defining config files
flag=normal
flag2=special
which are parsed by the build scripts to generate the options (and can be checked for typos along the way), or which are read directly by the Makefile.
Consider a situation. We have some specific C++ compiler, a specific set of compiler settings and a specific C++ program.
We compile that specific program with that compiler and those settings two times, doing a "clean compile" each time.
Should the machine code emitted be the same (I don't mean timestamps and other bells and whistles, I mean only real code that will be executed) or is it allowed to vary from one compilation to another?
The C++ standard certainly doesn't say anything to prevent this from happening. In reality, however, a compiler is normally deterministic, so given identical inputs it will produce identical output.
The real question is mostly what parts of the environment it considers as its inputs -- there are a few that seem to assume characteristics of the build machine reflect characteristics of the target, and vary their output based on "inputs" that are implicit in the build environment instead of explicitly stated, such as via compiler flags. That said, even that is relatively unusual. The norm is for the output to depend on explicit inputs (input files, command line flags, etc.)
Offhand, I can only think of one fairly obvious thing that changes "spontaneously": some compilers and/or linkers embed a timestamp into their output file, so a few bytes of the output file will change from one build to the next--but this will only be in the metadata embedded in the file, not a change to the actual code that's generated.
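One exception worth knowing about on the source side: if the program itself uses the standard __DATE__ or __TIME__ macros, two clean builds made at different times really will embed different string literals (a trivial sketch):

#include <cstdio>

int main()
{
    // These expand to string literals fixed at compile time, so rebuilding
    // later produces different bytes in the binary's data.
    std::printf("Built on %s at %s\n", __DATE__, __TIME__);
    return 0;
}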
According to the as-if rule in the standard, as long as a conforming program (e.g., no undefined behavior) cannot tell the difference, the compiler is allowed to do whatever it wants. In other words, as long as the program produces the same output, there is no restriction in the standard prohibiting this.
From a practical point of view, I wouldn't use a compiler that does this to build production software. I want to be able to recompile a release made two years ago (with the same compiler, etc) and produce the same machine code. I don't want to worry that the reason I can't reproduce a bug is that the compiler decided to do something slightly different today.
There is no guarantee that they will be the same. Also, according to http://www.mingw.org/wiki/My_executable_is_sometimes_different:
"My executable is sometimes different, when I compile and recompile the same source. Is this normal?
Yes; by default, and by design, MinGW's GCC does not produce consistent output, unless you patch it."
EDIT: Found this post that seems to explain how to make them the same.
I'd bet it would vary every time due to some metadata the compiler writes (for instance, compiled C# DLLs always vary in a few bytes even if I do "build" twice in a row without changing anything). In any case, I would never rely on the output not varying.
I'm just starting to explore C++, so forgive the newbiness of this question. I also beg your indulgence on how open ended this question is. I think it could be broken down, but I think that this information belongs in the same place.
(FYI -- I am working predominantly with the QT SDK and mingw32-make right now and I seem to have configured them correctly for my machine.)
I knew that there was a lot in the language which is compiler-driven -- I've heard about pre-processor directives, but it seems like someone could write books about the different C++ compilers and their respective parameters. In addition, there are commands which apparently precede make (qmake, for example; is this something only in Qt?).
I would like to know if there is any place which gives me an overview of what compilers are out there, and what their different options are. I'd also like to know how each of them views Makefiles (it seems that there is a difference in syntax between them?).
If there is no website covering "Everything you need to know about C++ compilers but were afraid to ask", what would be the best way to go about learning the answers to these questions?
Concerning the "numerous options of the various compilers"
A piece of good news: you needn't worry about the detail of most of these options. You will, in due time, delve into this, only for the very compiler you use, and maybe only for the options that pertain to a particular set of features. But as a novice, generally trust the default options or the ones supplied with the make files.
The broad categories of these features (and I may be missing a few) are as follows; an example command line follows the list:
pre-processor defines (now, you may need a few of these)
code generation (target CPU, FPU usage...)
optimization (hints for the compiler to favor speed over size and such)
inclusion of debug info (which is extra data left in the object/binary and which enables the debugger to know where each line of code starts, what the variables names are etc.)
directives for the linker
output type (exe, library, memory maps...)
C/C++ language compliance and warnings (compatibility with previous version of the compiler, compliance to current and past C Standards, warning about common possible bug-indicative patterns...)
compile-time verbosity and help
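For example, a single invocation touching most of these categories might look like this (g++ flags shown purely for illustration): -DNDEBUG is a pre-processor define, -march=native controls code generation, -O2 is optimization, -g includes debug info, -Wall -Wextra control warnings, and -std=c++17 selects language compliance.

g++ -DNDEBUG -march=native -O2 -g -Wall -Wextra -std=c++17 -c main.cpp -o main.o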
Concerning an inventory of compilers with their options and features
I know of no such list, but I'm sure something like it exists on the web. However, I suggest that, as a novice, you worry little about these "details" and use whatever free compiler you can find (gcc is certainly a great choice), building experience with the language and the build process. C professionals may likely argue, with good reason and at length, on the merits of various compilers and associated runtimes etc., but for generic purposes -and then some- the free stuff is all that is needed.
Concerning the build process
The most trivial applications, such as those made of a single unit of compilation (read: a single C/C++ source file), can be built with a simple batch file where the various compiler and linker options are hardcoded and the name of the file is specified on the command line.
For all other cases, it is very important to codify the build process so that it can be done
a) automatically and
b) reliably, i.e. with repeatability.
The "recipe" associated with this build process is often encapsulated in a make file or as the complexity grows, possibly several make files, possibly "bundled together in a script/bat file.
This (make file syntax) you need to get familiar with, even if you use alternatives to make/nmake, such as Apache Ant; the reason is that many (most?) source code packages include a make file.
In a nutshell, make files are text files that let you define targets and the associated commands to build each target. Each target is associated with its dependencies, which allows the make logic to decide which targets are out of date and should be rebuilt, and, before rebuilding them, which of their dependencies should also be rebuilt. That way, when you modify, say, an include file (and if the make file is properly configured), any C file that uses this header will be recompiled, and any binary which links with the corresponding obj file will be rebuilt as well. make also includes options to force all targets to be rebuilt, which is sometimes handy to be sure that you truly have a current build (for example, in case some dependencies of a given object are not declared in the make file).
On the Pre-processor:
The pre-processor is the first step toward compiling, although it is technically not part of the compilation. The purposes of this step are:
to remove any comments and extraneous whitespace
to substitute any macro reference with the relevant C/C++ syntax. Some macros, for example, are used to define constant values, such as an email address used in the program; during pre-processing, any reference to this constant (by convention, such constants are named with ALL_CAPS_AND_UNDERSCORES) is replaced by the actual C string literal containing the email address (see the sketch after this list)
to exclude all conditional-compilation branches that are not relevant (#ifdef and the like)
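A trivial example of the macro substitution just described (names invented):

#include <cstdio>

#define ADMIN_EMAIL "admin@example.com"  // the ALL_CAPS_AND_UNDERSCORES convention

int main()
{
    // After pre-processing, the next line is literally:
    //     std::printf("Contact: %s\n", "admin@example.com");
    std::printf("Contact: %s\n", ADMIN_EMAIL);
    return 0;
}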
What's important to know about the pre-processor is that its directives are NOT part of the C language proper; they serve several important functions, such as the conditional compilation mentioned earlier (used, for example, to produce multiple versions of the program, say for different operating systems, or indeed for different compilers).
Taking it from there...
After this manifesto of mine... I encourage you to read just a little more, and then to dive into programming and building binaries. It is a very good idea to try to get a broad picture of the framework etc., but this can be overdone, a bit akin to the exchange student who stays in his/her room reading the Webster dictionary to be "prepared" for meeting native speakers, rather than just "doing it!".
Ideally you shouldn't need to care which C++ compiler you are using. Conformance to the standard has become much better in recent years (even from Microsoft).
Compiler flags obviously differ, but the same features are generally available; it's just a differently named option to, e.g., set the warning level on GCC and MS cl.
The build system is independent of the compiler; you can use any make with any compiler.
That is a lot of questions in one.
C++ compilers are a lot like hammers: They come in all sizes and shapes, with different abilities and features, intended for different types of users, and at different price points; ultimately they all are for doing the same basic task as the others.
Some are intended for highly specialized applications, like high-performance graphics, and have numerous extensions and libraries to assist the engineer with those types of problems. Others are meant for general purpose use, and aren't necessarily always the greatest for extreme work.
The technique for using each type of hammer varies from model to model—and version to version—but they all have a lot in common. The macro preprocessor is a standard part of C and C++ compilers.
A brief comparison of many C++ compilers is here. Also check out the list of C compilers, since many programs don't use any C++-specific features and can be compiled as ordinary C.
C++ compilers don't "view" makefiles. The rules of a makefile may invoke a C++ compiler, but may also "compile" assembly-language modules (assembling), process other languages, build libraries, link modules, and/or post-process object modules. Makefiles often contain rules for cleaning up intermediate files, establishing debug environments, obtaining source code, etc. Compilation is one link in a long chain of steps to develop software.
Also, many development environments abstract the makefile into a "project file" which is used by an integrated development environment (IDE) in an attempt to simplify or automate many programming tasks. See a comparison here.
As for learning: choose a specific problem to solve and dive in. The target platform (Linux/Windows/etc.) and problem space will narrow the choices pretty well. Which you choose is often linked to other considerations, such as working for a particular company, or being part of a team. C++ has something like 95% commonality among all its flavors. Learn any one of them well, and learning the next is a piece of cake.