I have two header files. One #includes the other after #define-ing a symbol that the other one tests with #ifdef [symbol] ... lots of code ... #endif.
In VS2017, the code between the #ifdef & #endif shows as 'active' (not greyed out), so it thinks [symbol] is defined and includes the code in the active code set.
However, if I comment out the #ifdef [symbol] and #endif lines, the result of the compile changes drastically - how can this be? I thought #ifdef & #endif were pre-processor directives and don't exist at all as far as the compiler is concerned. If #ifdef [symbol] evaluates as true, I thought they just disappeared. What am I missing?
First header file:
#ifndef _MPU6050_6AXIS_MOTIONAPPS20_H_
#define _MPU6050_6AXIS_MOTIONAPPS20_H_
#include "I2Cdev.h"
#include "helper_3dmath.h"
// MotionApps 2.0 DMP implementation, built using the MPU-6050EVB evaluation board
#define MPU6050_INCLUDE_DMP_MOTIONAPPS20
#include "MPU6050.h"
In the second header file (MPU6050.h), there is a long definition for 'class MPU6050', and this definition has a block guarded by
#ifdef MPU6050_INCLUDE_DMP_MOTIONAPPS20
....
....
....
#endif
This (I believe) is the source of some 'one definition rule' violations I've been trying to track down, so as an experiment I commented out the #ifdef & #endif lines, thinking there would now be only one possible definition for 'class MPU6050' and life would be good. What actually happened is that the compiler threw a whole lot of 'undefined symbol' errors, as if the 'MPU6050_INCLUDE_DMP_MOTIONAPPS20' symbol hadn't been defined after all and the guard lines had been preventing the guarded code from being compiled, even though VS2017's IntelliSense shows it as 'active'.
Since the symbol 'MPU6050_INCLUDE_DMP_MOTIONAPPS20' is defined, the guarded code in MPU6050.H should be compiled whether or not the #ifdef & #endif lines are actually there, right??
What am I missing?
The actual files are MPU6050_6Axis_MotionApps20.h and MPU6050.h from the Arduino\MPU6050 folder at Jeff Rowberg's i2cDev GitHub account
TIA,
Frank
02/06/19 Addition: As a test, I placed a 'known-bad' line in the #ifdef/#endif guarded block, and the compiler flagged the line as an error. I believe this proves that the guarded block is indeed 'active' from the point of view of the compiler.
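Roughly what the test looked like (a sketch, not the actual contents of MPU6050.h):
#ifdef MPU6050_INCLUDE_DMP_MOTIONAPPS20
    this_line_is_deliberately_bad   // the compiler flags this, so the block must be active
    // ... rest of the guarded code ...
#endif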
In addition, I commented out the line in MPU6050_6Axis_MotionApps20.h that #defined the 'MPU6050_INCLUDE_DMP_MOTIONAPPS20' symbol. Now the editor shows the guarded code as 'inactive' (grayed out), and the compiler acts consistently with that (it complains that it is missing the function declarations from the guarded code section).
So, I have what appears to be a paradox: the compiler believes the guarded code is active, but behaves as if that code disappears when I comment out the #ifdef & #endif lines - lines which should have been removed by the pre-processor before the compiler ever saw the guarded code anyway.
Any ideas?
Related
As I understand, when compiling a compilation unit, the compiler's preprocessor translates #include directives by expanding the contents of the header file [1] specified between the < and > (or ") tokens into the current compilation unit.
It is also my understanding that most compilers support the #pragma once directive, which guards against multiply defined symbols as a result of multiple inclusion of the same header. The same effect can be produced by following the include guard idiom.
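For reference, the two guarding styles I mean look something like this (a minimal sketch; the file and macro names are just placeholders):
// widget.h - the include guard idiom
#ifndef WIDGET_H
#define WIDGET_H
void widget_api();   // header contents go here
#endif

// gadget.h - the #pragma once form
#pragma once
void gadget_api();   // header contents go here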
My question is two-fold:
Is it legal for a compiler to completely ignore an #include directive if it has previously encountered a #pragma once directive or include guard pattern in this header?
Specifically with Microsoft's compiler, is there any difference in this regard whether a header contains a #pragma once directive or an include guard pattern? The documentation suggests that they are handled the same, though one user feels very strongly that I am wrong, so I am confused and would like clarification.
[1] I'm glossing over the fact that headers need not necessarily be files at all.
If the compiled program cannot tell whether the compiler has ignored a header file or not, it is legal under the as-if rule either to ignore it or not to ignore it.
If ignoring a file results in a program that has observable behaviour different from the program produced by processing all files normally, or ignoring a file results in an invalid program whereas processing it normally does not, then it is not legal to ignore such a file. Doing so is a compiler bug.
Compiler writers seem to be confident that ignoring a once-seen file that has proper include guards in place can have no effect on the resulting program, otherwise compilers would not be doing this optimisation. It is possible that they are all wrong though, and there is a counterexample that no one has found to date. It is also possible that non-existence of such counterexample is a theorem that no one has bothered to prove, as it seems intuitively obvious.
I think you can treat #pragma once as a compiler language extension, like, for instance, #pragma omp parallel, which can make a loop execute in parallel and cause all kinds of UB if it is not written correctly.
The standard says it is OK for a pragma directive to cause an implementation-defined, non-conforming result:
Pragma directive [cpp.pragma]
...
causes the implementation to behave in an implementation-defined manner. The behavior might cause
translation to fail or cause the translator or the resulting program to behave in a non-conforming manner.
Any pragma that is not recognized by the implementation is ignored.
Regarding MSVC behavior, you can think of it as skipping the header based on its normalized path. For instance, you can trick the compiler with symlinks:
test/test.h
#pragma once
static int x = 2;
Create a symlink "test-link" to the "test" directory:
mklink /d test-link test
Then in main.cpp:
#include "test/test.h"
#include "test/test.h"
#include "test/../test/test.h"
is OK, but
#include "test/test.h"
#include "test-link/test.h"
causes
error C2374: 'x': redefinition; multiple initialization
which would not happen with include guards.
Is it legal for a compiler to completely ignore an #include directive if it has previously encountered a #pragma once directive or include guard pattern in this header?
That depends on how #pragma once is defined and implemented by the compilers. It is, after all, a non-standard feature.
But all compilers I know of that support #pragma once treat it like a non-mutable, unique include guard that wraps around the complete file.
After the preprocessor has resolved the include path for an include, it can check whether that file was already included and whether #pragma once exists for that file. If both conditions are true, it is safe not to include the file again. This follows the as-if rule, because the compiler vendor is in full control of how #pragma once is implemented and can ensure that the guard is unique, non-mutable, and wraps the whole file, so a repeated inclusion of that same file would result in empty content being included.
So in that respect, if they didn't make an implementation error, it is safe to ignore the include.
There is an argument against the use of #pragma once that says the compiler might treat the same file as different files due to symlinks and hard links. That would result in accidentally including the same file multiple times, but it doesn't affect whether it is safe to ignore the include once the compiler has identified it as the same file.
Specifically with Microsoft's compiler, is there any difference in this regard whether a header contains a #pragma once directive or an include guard pattern? The documentation suggests that they are handled the same, though one user feels very strongly that I am wrong, so I am confused and would like clarification.
If no #pragma once is used, it becomes more complicated. The preprocessor first needs to check whether the include guard wraps around the entire contents:
#ifndef SOME_GUARD_NAME_H
#define SOME_GUARD_NAME_H
// all content of the file
#endif
Or if it is something like this:
// some content before the guard
#ifndef SOME_GUARD_NAME_H
#define SOME_GUARD_NAME_H
// some content
#else
// some more content
#endif
// some other content after the guard
And it needs to keep track of whether SOME_GUARD_NAME_H was already defined in another file, or whether #undef was called on it by another file.
So in that case it can only ignore the contents of the file if it can ensure that all relevant defines are the same and/or that the evaluation of the macros results in an empty file.
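A minimal sketch of the kind of interference it has to account for (file names invented for illustration):
// guarded.h
#ifndef SOME_GUARD_NAME_H
#define SOME_GUARD_NAME_H
int g_guarded = 0;
#endif

// reset.h - another header that tampers with the guard macro
#undef SOME_GUARD_NAME_H

// main.cpp
#include "guarded.h"
#include "reset.h"
#include "guarded.h"   // must be expanded again (its guard was #undef'd) and now redefines g_guarded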
Is it legal for a compiler to completely ignore an #include directive if it has previously encountered a #pragma once directive or include guard pattern in this header?
Of course it is! It is even legal for the compiler to ignore all your source files and header files, so long as the behavior of the generated code is the same as if it had processed everything. That's how pre-compiled headers and object files work - anything that hasn't changed can be safely ignored. Similarly, if the compiler can prove that including and not including the file are going to have exactly the same behavior, the compiler may ignore the file, regardless of the pre-processor directives.
Specifically with Microsoft's compiler, is there any difference in this regard whether a header contains a #pragma once directive or an include guard pattern?
The documentation is pretty clear on that. They are identical assuming the compiler manages to identify the idiom and you haven't #undefed the macro. I've never experienced any bugs related to that either. #pragma once is safer though. I have had an instance where two headers had the same include guard and debugging that wasn't a nice experience.
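The kind of clash I mean looks roughly like this (the names are invented for illustration):
// audio/config.h
#ifndef CONFIG_H
#define CONFIG_H
void init_audio();
#endif

// video/config.h - accidentally reuses the same guard macro
#ifndef CONFIG_H
#define CONFIG_H
void init_video();   // silently skipped whenever audio/config.h has already been included
#endif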
#pragma once obviously refers to the file as a whole:
The use of #pragma once can reduce build times, as the compiler won't
open and read the file again after the first #include of the file in
the translation unit.
Really - if it does not relate to the file, what else could it relate to?
Conditional compilation - the so-called include guard idiom - relates not to the file but to a block of code. Where is it stated that such a condition relates to the file?!
It relates to the block beginning with #if* and ending with #endif; the compiler still needs to include the file again.
Let's do some tests. The cl (MSVC) compiler option /showIncludes will also be very useful here.
Let's create header.h:
// header.h
#ifndef HEADER_H_
#define HEADER_H_
int g_a = 0;
#endif
and then
#include "header.h"
#include "header.h"
Only one line appears in the log:
1>Note: including file: .\header.h
So header.h really is included only once here.
But if we do this:
// header.h
#if !defined HEADER_H_
#define HEADER_H_
int g_a = 0;
#endif
or this
#if !defined(HEADER_H_)
#define HEADER_H_
int g_a = 0;
#endif
and
#include "header.h"
#include "header.h"
there are already 2 lines in the log - header.h is included 2 times:
1>Note: including file: .\header.h
1>Note: including file: .\header.h
So #ifndef HEADER_H_ has a different effect compared to #if !defined(HEADER_H_).
Or if we do this:
// header.h
#ifndef HEADER_H_
#define HEADER_H_
int g_a = 0;
#endif
#define XYZ
or
// header.h
#if __LINE__ // any non-empty statement
#endif
#ifndef HEADER_H_
#define HEADER_H_
int g_a = 0;
#endif
and
#include "header.h"
#include "header.h"
we already get
1>Note: including file: .\header.h
1>Note: including file: .\header.h
Again, 2 lines in the log - header.h is included 2 times.
So if anything other than comments or sequences of whitespace characters (space, tab, new-line) exists outside the first conditional block, the file gets included multiple times.
Of course, it is also possible to do the following in the file that includes header.h:
#include "header.h"
#undef HEADER_H_
#include "header.h"
In this case:
1>Note: including file: .\header.h
1>Note: including file: .\header.h
1>.\header.h(4): error C2374: 'g_a': redefinition; multiple initialization
1>.\header.h(4): note: see declaration of 'g_a'
And of course, in the case of:
// header.h
#pragma once
int g_b = 0;
and
#include "header.h"
#include "header.h"
there is only a single line:
1>Note: including file: .\header.h
So, based on these tests, we can draw the following conclusion: if cl (MSVC) sees that a file has the pattern
#ifndef macro // but not #if !defined macro
#define macro
// all code only here
#endif
then the macro is associated with the file, and as long as it is not #undef'd, the file will not be included again. This is an implicit optimization by this specific compiler, and it is very fragile: any statement other than whitespace or a comment outside the guard breaks it. Even though it is documented that #ifndef HEADER_H_ is equivalent to #if !defined HEADER_H_, in practice (for this optimization) that is not true.
I have a situation in which #ifdef is telling me that something is not defined, and yet it proceeds to compile a line as if it is defined. I can't figure out how that can be.
The wider context is that I am banging my head against a brick wall trying to resolve this question. I am trying to determine the point at which SERVICE_STATUS is defined in both a project that compiles, and one that doesn't. As far as Visual Studio 2015 is concerned, if I right click on the word SERVICE_STATUS and go to its definition, in both cases I am taken to the same file: winsvc.h. So I don't think that paths are an issue.
All of which has led me to introducing the following five-line check in various places to see if I can understand where SERVICE_STATUS is included/defined:
#if defined(SERVICE_STATUS) // or #ifdef SERVICE_STATUS
#pragma message( "SS is defined" )
#else
#pragma message( "SS is NOT defined" )
#endif
In all cases, including just prior to the point where SERVICE_STATUS is used in both the compiling and the non-compiling cases, the only message ever printed is SS is NOT defined. This is the line that does or does not compile, and I can prove it is being parsed by causing deliberate errors immediately before it:
static SERVICE_STATUS _serviceStatus;
Has my head been banged too many times to notice the obvious schoolboy error in my usage of #ifdef or #if defined() (I've tried both)? Alternatively, how could this possibly compile if SERVICE_STATUS is undefined?
Here's a sketch of the context in which this occurs (in the file serverapplication.h from the Poco distribution at pocoproject.org):
... // lots of irrelevant lines omitted...
[5LC] // apply my 5-line check
#include <ThisFile>
#include <ThatFile>
[5LC] // check again and find no change due to header inclusion
... // many more lines skipped...
[5LC] // one last check prior to using SERVICE_STATUS - still no change
static SERVICE_STATUS _serviceStatus; // variable results!?!?
If you want the full context, see the problem I'm trying to fix for links to a zipped project in which I have been able to reproduce this problem independent of my own project... but please respond to that issue separately.
SERVICE_STATUS is a type definition that was defined with typedef; the answer in your other question contains a link to the documentation of it.
#ifdef is only used to determine whether a name was defined as a macro with #define (or via compiler options); you can't use it to test whether a type, variable, or function has been defined. It's part of the preprocessor, not the compiler, so it doesn't know anything about those elements of the code.
Those two preprocessor directives tell you only, whether SERVICE_STATUS is defined as a preprocessor macro, i.e. if there is a line saying #define SERVICE_STATUS something somewhere before the #ifdef. That is not the same as "defined" in the non-preprocessor sense.
SERVICE_STATUS is a struct, not a macro (despite the horribly misleading name). So it can't be detected by the preprocessor.
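A minimal sketch of the distinction (the names here are made up, not the real Windows definitions):
typedef struct _MY_STATUS { int state; } MY_STATUS;   // a type: invisible to the preprocessor

#ifdef MY_STATUS
#pragma message( "never printed - MY_STATUS is not a macro" )
#endif

#define MY_FLAG 1                                      // a macro: visible to the preprocessor
#ifdef MY_FLAG
#pragma message( "printed - MY_FLAG is a macro" )
#endif

static MY_STATUS s;   // still compiles: the compiler (not the preprocessor) knows the typedef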
I am trying to use a library in Visual Studio and have been modifying the macros in its preprocessor directives in different ways. However, a logic block inside an #if directive is shown as inactive, as if it were a comment. Here is the code:
#if defined EBML_DLL
#if defined EBML_DLL_EXPORT
#define EBML_DLL_API __declspec(dllexport)
#else // EBML_DLL_EXPORT
#define EBML_DLL_API __declspec(dllimport)
#endif // EBML_DLL_EXPORT
#else // EBML_DLL
#define EBML_DLL_API
#endif // EBML_DLL
The problem is that Visual Studio shows the code within the #if defined EBML_DLL block as inactive (as if commented out). As a result, the DLL doesn't show the functions in the Object Browser of VS.
A hint: if a backslash is added at the end of the #if defined EBML_DLL line, only the else block becomes active.
There was a bug in older versions of VS about this, but it was just a display issue. VS was not reading the defines correctly (in your case EBML_DLL, etc).
It could also be that the constants you are using in your preprocessor statements are not correct and they are missing characters (usually the ones the compiler uses have underscores at the beginning and end).
To really know for sure which one it is, you can add a random string inside the branch the preprocessor is expected to take and see if the code compiles.
#if defined EBML_DLL
this_should_not_compile //you should get an error on this line
#endif
Hope this helps...
Normally, for classes I don't intend to include in production code, I use conditional compilation directives such as the usual:
#ifdef DEBUG_VERSION
This could also be around certain chunks of code that perform additional steps in development mode.
I've just thought (after many years of using the above): What happens if a typo is introduced in the above? It could have serious consequences - pieces of code included (or not included) when the opposite was intended.
So I'm now wondering about alternatives, and thought about creating 2 macro's:
INCLUDE_IN_DEBUG_BUILD
END_INCLUDE_IN_DEBUG_BUILD
If a typo is ever created in these, an error message is generated at compile time, forcing the user to correct it. The first would evaluate to "if (1){" in the debug build and "if (0){" in the production build, so any compiler worth using should optimise those lines out, and even if it doesn't, at least the code inside will never be called.
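Roughly what I have in mind (a sketch - the exact expansion is open to suggestions):
#ifdef DEBUG_VERSION
#define INCLUDE_IN_DEBUG_BUILD      if (1) {
#else
#define INCLUDE_IN_DEBUG_BUILD      if (0) {
#endif
#define END_INCLUDE_IN_DEBUG_BUILD  }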
Now I'm wondering: Is there something I'm missing here? Why does no-one else use something like this?
Update: I replaced the header-based approach with a build-system based approach.
You want to be able to disable not just part of the code inside a function, but maybe also in other areas like inside a class or namespace:
struct my_struct {
#ifdef DEBUG_VERSION
std::string trace_prefix;
#endif
};
So the real question seems to be: How to prevent typos in your #ifdefs? Here's something which does not limit you and which should work well.
Modify your build system to either define DEBUG_VERSION or RELEASE_VERSION. It should be easy to ensure this. Define those to nothing, e.g. -DDEBUG_VERSION or -DRELEASE_VERSION for GCC/Clang.
With this, you can protect your code like this:
#ifdef DEBUG_VERSION
DEBUG_VERSION
// ...
#endif
or
#ifndef DEBUG_VERSION
DEBUG_VERSION
// ...
#else
RELEASE_VERSION
// ...
#endif
And voila, in the second example above I already introduced a small typo: #ifndef instead of #ifdef - but the compiler would now complain, because in the branch that actually gets compiled the bare DEBUG_VERSION or RELEASE_VERSION token is not defined as a macro (and so is not "defined away" to nothing), leaving an identifier the compiler doesn't know.
To make it as safe as possible, you should always have both branches with the two defines, so the first example I gave should be improved to:
#ifdef DEBUG_VERSION
DEBUG_VERSION
// ...
#else
RELEASE_VERSION
#endif
even if the release branch contains no other code/statements. That way you can catch most errors, and I think it is quite descriptive. Since DEBUG_VERSION is replaced with nothing only in the debug build, all typos will lead to a compile-time error. The same goes for RELEASE_VERSION.
I occasionally write code something like this:
// file1.cpp
#define DO_THIS 1
#if DO_THIS
// stuff
#endif
During the code development I may switch the definition of DO_THIS between 0 and 1.
Recently I had to rearrange my source code and copy some code from one file to another. But I found that I had made a mistake and the two parts had become separated like so:
// file1.cpp
#define DO_THIS 1
and
// file2.cpp
#if DO_THIS
// stuff
#endif
Obviously I fixed the error, but then thought to myself, why didn't the compiler warn me? I have the warning level set to 4. Why isn't #if X suspicious when X is not defined?
One more question: is there any systematic way I could find out if I've made the same mistake elsewhere? The project is huge.
EDIT: I can understand having no warning with #ifdef - that makes perfect sense. But surely #if is different.
GCC can generate a warning for this, but it's probably not required by the standard:
-Wundef
Warn if an undefined identifier is evaluated in an `#if' directive.
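For example (a sketch), compiling a file like this with -Wundef enabled makes GCC warn on the #if line, because DO_THIS is evaluated while undefined:
// file2.cpp - DO_THIS is never #defined in this translation unit
#if DO_THIS        // g++ -Wundef -c file2.cpp warns here; without -Wundef it silently evaluates to 0
// stuff
#endif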
Again, as often happens, the answer to the "why" question is just: it was done that way because some time ago it was decided to do it this way. When you use an undefined macro in an #if, it is substituted with 0. If you want to know whether it is actually defined, use the defined operator.
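For example (a sketch), this makes the "is it defined at all" test explicit instead of relying on the silent substitution of 0:
#if defined(DO_THIS) && DO_THIS
// stuff compiled only when DO_THIS is both defined and non-zero
#endif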
There are some interesting benefits to that "default to 0" approach though, especially when you are using macros that might be defined by the platform rather than your own macros.
For example, some platforms offer macros __BYTE_ORDER, __LITTLE_ENDIAN and __BIG_ENDIAN to determine their endianness. You could write preprocessor directive like
#if __BYTE_ORDER == __LITTLE_ENDIAN
/* whatever */
#else
/* whatever */
#endif
But if you try to compile this code on a platform that does not define these non-standard macros at all (i.e. knows nothing about them), the above code will be translated by the preprocessor into
#if 0 == 0
...
and the little-endian version of the code will be compiled "by default". If you wrote the original #if as
#if __BYTE_ORDER == __BIG_ENDIAN
...
then the big-endian version of the code would be compiled "by default".
I can't say that #if was defined the way it is specifically to allow tricks like the above, but it comes in useful at times.
When you can't use a compiler that has a warning message (like -Wundef in gcc), I've found one somewhat useful way to generate compiler errors.
You could of course always write:
#ifndef DO_THIS
error
#endif
#if DO_THIS
But that is really annoying
A slightly less annoying method is:
#if (1/defined(DO_THIS) && DO_THIS)
This will generate a divide by zero error if DO_THIS is undefined. This method is not ideal because the identifier is spelled out twice and a misspelling in the second would put us back where we started. It looks weird too. It seems like there should be a cleaner way to accomplish this, like:
#define PREDEFINED(x) ((1/defined(x)) * x)
#if PREDEFINED(DO_THIS)
but that doesn't actually work: the standard leaves the behaviour undefined when the defined operator is produced by macro expansion.
If you're desperate to prevent this kind of error, try the following which uses preprocessor token-pasting magic and expression evaluation to enforce that a macro is defined (and 0 in this example):
#define DEFINED_VALUE(x,y) (defined (y##x) ? x : 1/x)
#if DEFINED_VALUE(FEATURE1,) == 0
There is also an issue with macros defined in terms of other (possibly undefined) macros. In case you have
#define MODEL MODEL_A
#if (MODEL == MODEL_B)
// Surprise, this is compiled!
#endif
where the definitions of MODEL_A and MODEL_B are missing, it will compile.
#ifndef MODEL
#error Sorry, MODEL Not Defined
// Surprise, this error is never reached (MODEL was defined by undefined symbol!)
#endif
#ifndef MODEL_B
#error Sorry, MODEL_B Not Defined
// This error is reached
#endif
If DO_THIS is your own definition, then a simple and working solution seems to be to use a function-like macro:
#define DO_THIS() 1
#if DO_THIS()
//stuff
#endif
I tested this under Visual Studio 2008, 2015 and GCC v7.1.1.
If DO_THIS() is undefined, VS generates:
warning C4067: unexpected tokens following preprocessor directive - expected a newline
and GCC generates
error: missing binary operator before token "("
The compiler didn't generate a warning because this is a preprocessor directive. It's evaluated and resolved before the compiler sees it.
If I'm thinking about this correctly.
Preprocessor directives are handled before any source code is compiled. During the phase(s) of translation in which this occurs, all preprocessor directives, macros, etc. are handled, and then the actual source code is compiled.
Since #if is used to determine whether X has been defined and to carry out some action whether it has or has not been defined, the #if in the code snippet compiles without any errors - there aren't any errors as far as the compiler is concerned. You could always create a header file with the specific #defines that your application needs and then include that header.
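For instance, a minimal sketch of such a shared header (names invented for illustration):
// config.h
#ifndef CONFIG_H
#define CONFIG_H
#define DO_THIS 1      // flip to 0 to disable the optional code everywhere
#endif

// file1.cpp and file2.cpp both do:
#include "config.h"
#if DO_THIS
// stuff
#endif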