So at work I'm working on a C++ application that runs without the C++ run-time library. We're using Visual Studio 2005, and have the /NODEFAULTLIB switch specified.
The solution is organized such that there are various static library projects, and then a single executable project which uses those libraries. The libraries are mostly common libraries tracked in a separate repository. They can be changed, but it's best for us not to, if we can avoid it.
One of those common libraries uses floating-point math. Since we don't have the C++ run-time, we have defined these routines ourselves (e.g. _ftol2_sse for converting float to int).
From my (rather limited) understanding of the low-level details, the compiler emits the symbol _fltused to signal that floating-point math routines are needed.
For some reason, one of the other common libraries decides to define this symbol manually, as
extern "C" { unsigned short _fltused = 0; };
When I enable Whole Program Optimization and Link-time code generation, I get
warning C4743: '_fltused' has different size ...
when linking. I don't know why we have it declared as an unsigned short instead of int, but that's how it is.
When I don't enable Whole Program Optimization or LTCG, the warning goes away.
I guess I have two questions.
Can I safely ignore this warning?
What optimization is being made that causes the warning to occur? I'm not sure why it's not a warning without Whole Program Optimization enabled.
UPDATE
I was able to track down the original author of the code, who admits that it is a bug that occurred when rewriting the code from assembly language. He agrees with me that the warning is harmless, since _fltused is never actually used directly.
I think I can answer question 2, at least. It's not any specific optimization. Whole program optimization forces the compiler to keep some representation of the whole program available to optimize from, and from that intermediate representation it's able to determine that the same variable is defined with different sizes. When whole program optimization is not enabled, the compiler only looks at each source file separately and never sees that two different files define the symbol with different types.
All that said, I'm 99% sure your program violates the One Definition Rule ("undefined behavior, no diagnostic required"). If you have any chance to fix it, you should, before something as simple as a compiler patch breaks your code.
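If the common library can be changed after all, the straightforward fix is to make the manual definition match the size the compiler expects. To my understanding the Microsoft CRT defines _fltused as an int (the value is irrelevant; only the symbol's presence matters), so a minimal sketch of the corrected definition would be:
extern "C" { int _fltused = 0; } // sketch: int-sized to match the CRT's definition; the value is never read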
Related
TL;DR
How do you protect against binary incompatibility caused by typos in the compiler arguments that drive conditional compilation (preprocessor directives in shared, possibly templated, headers) across different compilation units?
Ex.
g++ ... -DYOUR_NORMAl_FLAG ... -o libA.so
/**Another compilation unit, or even project. **/
g++ ... -DYOUR_NORMA1_FLAG ... -o libB.so
/**Another compilation unit, or even project. **/
g++ ... -DYOUR_NORMAI_FLAG ... main.cpp libA.so //The possibilities!
The Basic Story
Recently, I ran into a strange bug: the symptom was a single SIGSEGV, which always seemed to occur at the same location after recompiling. This led me to believe there was some kind of memory corruption going on, and that the actual underlying pointer was not a pointer at all, but some data section.
I'll spare you the long and strenuous journey, which ate almost two otherwise perfectly good work days, of tracking down the problem. Suffice it to say, Valgrind, GDB, nm, readelf, Electric Fence, GCC's stack smashing protection, and several more measures/methods/approaches all failed.
In utter devastation, my attention turned to the finest details in the build process, which was analogous to:
Build one small library.
Build one large library, which uses the small one.
Build the test suite of the large library.
There was only a problem when the large library was used as a static or dynamic library dependency (i.e. the dynamic linker loaded it automatically, no dlopen). In the test case where all the code of the library was simply compiled into the tests, everything worked: this was the most important clue.
The "Solution"
In the end, it turned out to be the simplest thing: a single (!) typo.
Turns out, the compilation flags of the test suite and of the large library differed by a single character: a define which controlled the behavior of the small library was misspelled.
Critical morsel of information: the small library had some templates. These were used directly in every case, without explicit instantiation in advance. The contents of one of the templated classes changed depending on the flag: some data fields were simply not present when the flag was defined!
The linker noticed nothing of this. (Since the class was templated, the resultant symbols were weak.)
The code used dynamic casts, and the class affected by this problem inherited from the mangled class, so things went south.
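To make the failure mode concrete, here is a minimal sketch of the kind of header involved (names are hypothetical; YOUR_NORMAL_FLAG stands in for the real define):
// shared_header.h -- both libraries include this, but only one was built
// with -DYOUR_NORMAL_FLAG. Template instantiations are emitted as weak
// symbols in every translation unit that uses them, and the linker silently
// keeps one copy, layout mismatch and all.
template <typename T>
class Cache {
public:
    T* lookup(int key);
private:
#ifdef YOUR_NORMAL_FLAG
    int stats[16]; // present in one TU's idea of the class, absent in the other's
#endif
    T* slots[64];
};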
My question is as follows: how would you protect against this kind of problem? Are there any tools or solutions which address this specific issue?
Future Proofing
I've thought of two things, and believe no protection can be built on the object file level:
1: Save the options implemented as preprocessor symbols in some well-defined place, preferably extracted by a separate build step. Provide a check script which uses this to check all compiler defines and the defines in user code, and integrate this check into the build process. Possibly use Levenshtein distance or similar to catch misspellings (a minimal sketch of such a check follows after this list). Expensive, and the script/solution can get complicated. Possible problems with legitimately similar flags (but why have them?), and additional files must accompany the compiled library code. (Well, maybe with DWARF 2 this is untrue, but let's assume we don't want that.)
2: Centralize build options: cheap, with customization still possible (think makefile.local), but it makes for monolithic monstrosities and strong coupling between projects.
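A minimal sketch of the distance check from option 1, assuming a prior build step has dumped each build's flags into string lists (all names here are hypothetical):
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Classic dynamic-programming Levenshtein edit distance.
static std::size_t edit_distance(const std::string& a, const std::string& b) {
    std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t subst = prev[j - 1] + (a[i - 1] != b[j - 1] ? 1 : 0);
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, subst});
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

// Flags that are close but not equal are probably typos:
// YOUR_NORMAl_FLAG vs YOUR_NORMA1_FLAG has distance 1.
void report_near_misses(const std::vector<std::string>& build_a,
                        const std::vector<std::string>& build_b) {
    for (const auto& fa : build_a)
        for (const auto& fb : build_b)
            if (fa != fb && edit_distance(fa, fb) <= 2)
                std::cout << "possible typo: '" << fa << "' vs '" << fb << "'\n";
}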
I'd like to go ahead and quench a few likely flame inducing embers possibly flaring up in some readers: "do not use preprocessor symbols" is not an option here.
Conditional compilation does have its place in high-performance code, and doing everything with templates and enable_if would needlessly overcomplicate things. While the above situation is usually not desirable, it can arise from the development process.
Please assume you have no control over the situation, assume you have legacy code, and assume everything else you need to keep yourself from side-stepping the question.
If those won't do, generalize into ABI incompatibility detection, though this might escalate the scope of the question too much for SO.
I'm aware of:
http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
DT_SONAME is not applicable.
Other version schemes therein are not applicable either - they were designed to protect a package which is in itself not faulty.
Mixing C++ ABIs to build against legacy libraries
Static analysis tool to detect ABI breaks in C++
If it matters, don't have a default case.
#ifdef YOUR_NORMAL_FLAG
// some code
#elif defined(YOUR_SPECIAL_FLAG)
// some other code
#else
// in case of a typo, this is a compilation error
#error "No flag specified"
#endif
This may lead to a large list of compiler options if conditional compilation is overused, but there are ways around this, like defining config files
flag=normal
flag2=special
which get parsed by build scripts that generate the options and can check for typos, or which could be parsed directly by the Makefile.
According to this post, the Google C++ Testing Framework considers "make install" a bad practice:
http://groups.google.com/group/googletestframework/browse_thread/thread/668eff1cebf5309d
The reason given is that the library violates the "One Definition Rule":
http://en.wikipedia.org/wiki/One_Definition_Rule
Somewhere further in the thread it says: "If you pass different -DGTEST_HAS_FOO=1 flags to different translation units, you'll violate the ODR. Or sometimes people use -D to select which malloc library to use (debug vs release), and you have to use the same malloc library across the board."
My questions:
What is it exactly this project is doing wrong?
What can we learn from this? How can we write code more defensive to prevent violating the ODR?
The straight answer to the question would be: do not write code that depends on compiler parameters. In this case, the whole discussion stems from the fact that the code is different based on compiler flags (most probably by means of #ifdef or other preprocessor directives). That in turn means that while the codebase is the same, by changing compiler flags the processed code will be different, and a binary compiled with one set of flags will not be compatible with a binary compiled with a different set of flags.
Depending on the actual project, it might be impossible to decouple the code from the compiler flags, and you will have to live with it, but I would recommend avoiding, as much as possible, code that can be configured from the compiler command line, in the same way that you should avoid DEBUG-only code with side effects. And when you cannot, document the effect that different compiler flags have.
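To illustrate the malloc case from the quoted thread with a minimal sketch (USE_DEBUG_MALLOC, debug_malloc, and debug_free are hypothetical stand-ins):
// alloc_shim.h -- every translation unit in the program must agree on this
// flag: if one TU allocates with the debug allocator and another frees the
// same pointer with the plain one, the two heaps disagree and memory is
// silently corrupted.
#ifdef USE_DEBUG_MALLOC            // hypothetical flag
void* debug_malloc(unsigned long); // hypothetical debug allocator
void  debug_free(void*);
#define MY_MALLOC(n) debug_malloc(n)
#define MY_FREE(p)   debug_free(p)
#else
#include <cstdlib>
#define MY_MALLOC(n) std::malloc(n)
#define MY_FREE(p)   std::free(p)
#endif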
I am wondering whether there is any difference between inlining functions on a linker level or compiler level in terms of execution speed?
e.g. if I have all my functions in .cpp files and rely on the linker to do the inlining, will this inlining potentially be less efficient than, say, defining some functions in the headers for selective inlining at the compiler level, or using unity builds without any linking where all inlining is done by the compiler?
If the linker is just as efficient, why would one then still bother inlining functions explicitly at the compiler level? Is that just for convenience, say when there is just a one-line constructor and one can't be bothered with a .cpp file?
I suppose this might depend on the compiler, in which case I would be most interested in Visual C++ (Windows) and gcc (Linux).
Thanks
The general rule is: all else being equal, the closer to execution (compiling -> linking -> (maybe) JIT -> execution) an optimization is performed, the more data the optimizer has and the better the optimization it can do. So unless the optimizer is dumb, you should expect better results when inlining is done by the linker: the linker will know more about the invocation context and can perform better optimization.
Generally, by the time the linker is run, your source has already been compiled into machine code. The linker's job is to take all the code fragments and link them together (possibly fixing up addresses along the way). In such a case, there is no room for performing inlining.
But all is not lost. GCC provides a mechanism for link-time optimization (via the -flto option, used when compiling and linking). This causes GCC to produce a bytecode representation that can then be compiled and linked by the linker into a single executable. Since the bytecode contains more information than optimized machine code, the linker can perform radical optimization on the whole codebase, something the compiler alone cannot do.
See here for more details on GCC. Not too sure about VC++, though.
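As a concrete sketch of what -flto enables (hypothetical file and function names):
// lib.cpp -- the callee lives in its own translation unit.
int square(int x) { return x * x; }

// main.cpp -- only the declaration is visible here, so without LTO the
// compiler must emit a real call; with -flto the object files carry GIMPLE
// bytecode and the link step can inline square() into main().
int square(int x);
int main() { return square(21); }

// Build:
//   g++ -O2 -flto -c lib.cpp main.cpp
//   g++ -O2 -flto lib.o main.o -o app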
Inlining is normally performed within a single translation unit (.cpp file). When you call functions in another file, they’re never inlined.
Link Time Optimization (LTO) changes this, allowing inlining to work across translation units. In terms of how efficient the generated code is, it should always be equal to or better than regular linking, sometimes very significantly so.
The reason both options are still available is that LTO can take a large amount of RAM and CPU; I've had VC++ take several minutes to link a large C++ project. Sometimes it's not worth enabling until you ship. You can also run out of address space with a large enough project, as the linker has to load all that bytecode into RAM.
For writing efficient code, nothing changes – all the same rules apply with LTO. It is potentially more efficient to explicitly define an inline function in a header file versus depending on LTO to inline it. The inline keyword only provides a hint so there’s no guarantee, but it might nudge it into being inlined where normally (with or without LTO) it wouldn’t be.
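For comparison, a minimal sketch of the traditional header-defined form (hypothetical function):
// util.h -- the definition is in the header, so every including translation
// unit sees the body and the compiler can inline calls with no LTO at all.
// "inline" here mainly grants the ODR exemption that allows the same
// definition to appear in multiple TUs; the inlining decision itself is
// still the compiler's.
inline int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }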
If the function is inlined, there would be no difference.
I believe the main reason for having inline functions defined in headers is history. Another is portability. Until recently most compilers did not do link-time code generation, so having the functions in the headers was a necessity. That of course affects code bases started more than a couple of years ago.
Also, if you still target some compilers that don't support link-time code generation, you don't have a choice.
As an aside, I have in one case been forced to add a pragma to ask one specific compiler not to inline an init() function defined in one .cpp file, but potentially called from many places.
Is there a good reason why this code compiles without warning (and crashes when run) with Visual C++ 2010:
int a = *((int*)nullptr);
Static analysis should conclude that it will crash, right?
Should this use of nullptr produce a compiler error?
No.
Dereferencing a null pointer results in undefined behavior, but no diagnostic is required.
Static analysis should conclude that it will crash, right?
It might. It doesn't have to. It would certainly be nice if a warning was issued. A dedicated static analysis tool (Klocwork, for example) would probably issue a warning.
Yes, static analysis would show this to always crash. However, this would require the compiler to actually perform this static analysis. Most compilers do not do this (at least none I know of).
So the question becomes: why don't C/C++ compilers do more static analysis?
The reason is mostly tradition, and a philosophy of keeping the compiler as simple as possible.
C (and to a lesser degree C++) was created in an environment where computing power was fairly expensive and ease of writing a compiler was important (because there were many different HW architectures).
Since static analysis makes a compiler both harder to write and slower to run, it was not felt to be a priority at the time. Thus most compilers don't have it.
Other languages (e.g. Java) make different tradeoffs, and thus in Java many things are illegal that are allowed in C (e.g. unreachable code is a compile-time error in Java; in C most compilers don't even warn). This really boils down to philosophy.
BTW, note that you can get static analysis for C if you want it: there are several tools available, e.g. lint (ancient), or see What open source C++ static analysis tools are available?.
Consider a situation. We have some specific C++ compiler, a specific set of compiler settings and a specific C++ program.
We compile that specific program with that compiler and those settings two times, doing a "clean compile" each time.
Should the machine code emitted be the same (I don't mean timestamps and other bells and whistles, I mean only real code that will be executed) or is it allowed to vary from one compilation to another?
The C++ standard certainly doesn't say anything to prevent this from happening. In reality, however, a compiler is normally deterministic, so given identical inputs it will produce identical output.
The real question is mostly what parts of the environment it considers as its inputs -- there are a few that seem to assume characteristics of the build machine reflect characteristics of the target, and vary their output based on "inputs" that are implicit in the build environment instead of explicitly stated, such as via compiler flags. That said, even that is relatively unusual. The norm is for the output to depend on explicit inputs (input files, command line flags, etc.)
Offhand, I can only think of one fairly obvious thing that changes "spontaneously": some compilers and/or linkers embed a timestamp into their output file, so a few bytes of the output file will change from one build to the next--but this will only be in the metadata embedded in the file, not a change to the actual code that's generated.
According to the as-if rule in the standard, as long as a conforming program (i.e., one with no undefined behavior) cannot tell the difference, the compiler is allowed to do whatever it wants. In other words, as long as the program produces the same output, there is no restriction in the standard prohibiting this.
From a practical point of view, I wouldn't use a compiler that does this to build production software. I want to be able to recompile a release made two years ago (with the same compiler, etc) and produce the same machine code. I don't want to worry that the reason I can't reproduce a bug is that the compiler decided to do something slightly different today.
There is no guarantee that they will be the same. Also, according to http://www.mingw.org/wiki/My_executable_is_sometimes_different :
My executable is sometimes different, when I compile and recompile the same source. Is this normal?
Yes, by default, and by design, MinGW's GCC does not produce ConsistentOutput, unless you patch it.
EDIT: Found this post that seems to explain how to make them the same.
I'd bet it would vary every time due to some metadata the compiler writes (for instance, compiled C# DLLs always vary in some bytes even if I "build" twice in a row without changing anything). But anyway, I would never rely on it not varying.