Consider the following code:
#define foo 38
#define F(foo) G(foo)
F(42);
I expect this code to fail to compile, because after applying the first line the second line should transform into #define F(38) G(38), which does not make any sense. But g++ successfully compiles it into G(42), as if the first line were not there at all. I was unable to find any mention of this behaviour in either the g++ docs or the C standard. I know the code is ugly and should not be used in the first place, but I wonder whether it comes with any portability guarantees.
C 2018 6.10 7 and C++ 2017 (draft n4659) 19 [cpp] 6 both say:
The preprocessing tokens within a preprocessing directive are not subject to macro expansion unless otherwise stated.
For parameters of function-like macros, nothing is otherwise stated. Therefore, the parameter names in the definition of a function-like macro are not replaced due to prior macro definitions.
(Examples of where macro expansion is otherwise stated include #if directives and #include directives.)
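A minimal illustration of the difference (the names N and DOUBLE are mine, not from the question):

#define N 2
#define DOUBLE(N) ((N) * 2)  /* the parameter name N is NOT replaced by the
                                object-like macro N defined above */
#if N == 2                   /* here N IS macro-expanded ("otherwise stated") */
int x = DOUBLE(21);          /* expands to ((21) * 2) */
#endif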
#include <iostream>
#define Abc likely
# if __has_cpp_attribute(Abc)
#define Pn 0
#endif
#if __has_cpp_attribute(likely)
#ifndef Pn
#define Pn 1
#endif
#endif
int main() {
    std::cout << Pn;
}
For this example, GCC prints 0 while Clang prints 1. According to [cpp.cond] p5:
Each has-attribute-expression is replaced by a non-zero pp-number matching the form of an integer-literal if the implementation supports an attribute with the name specified by interpreting the pp-tokens, after macro expansion, as an attribute-token, and by 0 otherwise. The program is ill-formed if the pp-tokens do not match the form of an attribute-token.
So, the directive # if __has_cpp_attribute(Abc) should behave the same as #if __has_cpp_attribute(likely). GCC has the right behavior here. Again, consider this example:
#include <iostream>
#define Head <iostream>
# if __has_include(Head)
#define Pn 0
#endif
#ifndef Pn
#define Pn 1
#endif
int main() {
    std::cout << Pn;
}
In this example, both compilers print 0. However, according to [cpp.cond] p4:
The header or source file identified by the parenthesized preprocessing token sequence in each contained has-include-expression is searched for as if that preprocessing token sequence were the pp-tokens in a #include directive, except that no further macro expansion is performed. If such a directive would not satisfy the syntactic requirements of a #include directive, the program is ill-formed. The has-include-expression evaluates to 1 if the search for the source file succeeds, and to 0 if the search fails.
Note the quoted wording "except that no further macro expansion is performed", which means Head won't be replaced by <iostream>, and there is no source file named Head. Hence, Pn should be 1 instead. Could this be considered a bug in GCC and Clang?
I am not sure that the answer below is correct. I will leave it up for reference for now.
I think the second example does not fit the has-include-expression grammar. If you look at [cpp.cond] there are two forms mentioned, which are further subdivided into multiple cases, referring also to [lex.header].
Collecting the possible forms and combining them here for presentation, we get:
__has_include(<...>)
__has_include("...")
__has_include(string-literal)
with ... as a placeholder and string-literal as any string literal. Your form __has_include(Head) matches none of these, since Head starts with neither " nor <, nor is it a string literal.
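For comparison, the well-formed usages look like this (local_config.h is a made-up file name):

#if __has_include(<optional>)        // the <...> form
#include <optional>
#endif
#if __has_include("local_config.h")  // the "..." / string-literal form
#include "local_config.h"
#endif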
[cpp.cond]/3 does mention that if the first of the two syntax choices for has-include-expression does not match, the second is considered and the preprocessing tokens are processed like normal text, presumably meaning they are macro-expanded. However, it is not clear to me whether this is supposed to apply to all preprocessing tokens between ( and ) before the above-mentioned grammar rules are applied, or just to the h-pp-tokens in the __has_include(<h-pp-tokens>) form. In the former case, the compilers would be correct in returning 0.
However, the latter case makes more sense to me, especially when comparing e.g. to the grammar rule for #include, which uses similar forms, but instead of #include <h-pp-tokens> the last form is #include pp-tokens. [cpp.include]
[cpp.cond]/7 says that the identifier __has_include shall not appear in any context not mentioned in the subclause. I would think that shall not here means otherwise ill-formed, in which case the program should not compile without diagnostic. If it means otherwise undefined behavior, then all compilers are correct.
For the first example, I think you are right. Clang had a bug report regarding the macro expansion here, which has recently been fixed; if you choose Clang trunk on Compiler Explorer, the result already coincides with GCC.
Recently I came across a piece of code similar to this one here
std::map<size_t,std::string> map{
#define RT_OK 0
    {RT_OK,"No Error"},
#define RT_SIZE_MISMATCH 1
    {RT_SIZE_MISMATCH,"Size Mismatch"}
};
using #define right inside the initializer list.
I was actually surprised that it worked with GCC, and it seems to work with Clang too. Anyway, is it OK to use #define inside an initializer list?
It's "OK"[1] to put macro definitions anywhere[2].
Pre-processor directives are removed by the pre-processor. The compiler sees something like:
std::map<size_t,std::string> map{
    // there was a PP directive here
    {0,"No Error"},
    // there was a PP directive here
    {1,"Size Mismatch"}
};
[1] In the sense that the program is well-formed. It may sometimes be not OK because it may be confusing to other programmers.
[2] Restrictions apply. There must not be any non-whitespace tokens on the same line before the directive, and the directive continues until the end of the line.
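A small sketch of footnote [2]; the first line below is deliberately invalid (the macro names are made up):

int x = 0; #define BAD 1  // not a directive: non-whitespace tokens precede
                          // the #, so this line is ill-formed
   #define OK 1           // fine: only whitespace precedes the #
#define ALSO_OK 2         // fine: the directive starts the line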
I am trying to build freetype2 using my own build system (I do not want to use Jam, and I am prepared to put the time into figuring it out). I found something odd in the headers. Freetype defines macros like this:
#define FT_CID_H <freetype/ftcid.h>
and then uses them later like this:
#include FT_CID_H
I didn't think that this was possible, and indeed Clang 3.9.1 complains:
error: expected "FILENAME" or <FILENAME>
#include FT_CID_H
What is the rationale behind these macros?
Is this valid C/C++?
How can I convince Clang to parse these headers?
This is related to How to use a macro in an #include directive? but different because the question here is about compiling freetype, not writing new code.
I will address your three questions out of order.
Question 2
Is this valid C/C++?
Yes, this is indeed valid. Macro expansion can be used to produce the final version of a #include directive. Quoting C++14 (N4140) [cpp.include] 16.2/4:
A preprocessing directive of the form
# include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text (i.e., each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens). If the directive resulting after all replacements does not match one of the two previous forms, the behavior is undefined.
The "previous forms" mentioned are #include "..." and #include <...>. So yes, it is legal to use a macro which expands to the header/file to include.
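For example, this sketch is a well-formed computed include, assuming the named file exists (CONFIG_HEADER and the header name are made up):

#define CONFIG_HEADER <config/default.h>  // expands to the <...> form
#include CONFIG_HEADER                    // re-parsed as #include <config/default.h>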
Question 1
What is the rationale behind these macros?
I have no idea, as I've never used the freetype2 library. That would be a question best answered by its support channels or community.
Question 3
How can I convince Clang to parse these headers?
Since this is legal C++, you shouldn't have to do anything. Indeed, user @Fanael has demonstrated that Clang is capable of parsing such code. There must be some other problem in your setup, or something else you haven't shown.
Is this valid C/C++?
The usage is valid C, provided that the macro definition is in scope at the point where the #include directive appears. Specifically, paragraph 6.10.2/4 of C11 says
A preprocessing directive of the form
# include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text. (Each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens.) The directive resulting after all replacements *shall match* one of the two previous forms.
(Emphasis added.) Inasmuch as the preprocessor has the same semantics in C++ as in C, to the best of my knowledge, the usage is also valid in C++.
What is the rationale behind these macros?
I presume it is intended to provide for indirection of the header name or location (by providing alternative definitions of the macro).
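For instance, a library could switch between header locations by redefining the macro; a sketch (the guard macro here is hypothetical):

#ifdef USE_BUNDLED_FREETYPE                       /* hypothetical switch */
#define FT_CID_H "third_party/freetype/ftcid.h"
#else
#define FT_CID_H <freetype/ftcid.h>
#endif
#include FT_CID_H   /* resolves to whichever definition was chosen */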
How can I convince Clang to parse these headers?
Provided, again, that the macro definition is in scope at the point where the #include directive appears, you shouldn't have to do anything. If indeed it is, then Clang is buggy in this regard. In that case, after filing a bug report (if this issue is not already known), you probably need to expand the troublesome macro references manually.
But before you do that, be sure that the macro definitions really are in scope. In particular, they may be guarded by conditional compilation directives; in that case, the best course of action would probably be to provide whatever macro definition is needed (via the compiler command line) to satisfy the condition. If you are expected to do this manually, then surely the build documentation discusses it. Read the build instructions.
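In freetype2 specifically, these macros are, as far as I can tell, defined by ftheader.h, which is pulled in by ft2build.h; FreeType's documented usage pattern is therefore to include ft2build.h first:

#include <ft2build.h>   /* defines FT_CID_H and friends */
#include FT_CID_H       /* now expands to <freetype/ftcid.h> */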
I have a macro TYPELIST which takes variadic arguments. I want to have something like this:
typedef TYPELIST(A
,B
,C
,D
#ifdef BLA_
,E
#endif
,F)
This works perfectly with GCC. However, when I try to compile it with MSVC, it parses #ifdef and #endif as macro arguments. I know one way around this would be to put the macro call inside an #ifdef, but with a huge list, and with different classes included depending on which macros are defined, that would become tedious. Is there a particular reason why this works in GCC and not with MSVC?
Using #ifdef inside a macro invocation's arguments isn't legal. I am sort of surprised that GCC allows this. I'm afraid you have to put the #ifdef around the entire definition, i.e.
#ifdef BLA_
typedef TYPELIST(a,b,c,d,e,f)
#else
typedef TYPELIST(a,b,c,d,f)
#endif
According to the standard (§16.3.4/3), "The resulting completely macro-replaced preprocessing token sequence is not processed as a preprocessing directive even if it resembles one, [...]". If g++ processes the #ifdef/#endif here, it's an error in the compiler (at least if you've requested standards conformance, e.g. with -std=...).
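If duplicating the whole list is too tedious, one conforming workaround is to hoist the conditional part into a helper macro. A sketch, assuming TYPELIST merely forwards its variadic arguments (EXTRA_TYPES is a made-up name); note that this breaks if TYPELIST counts its arguments, since E arrives glued to D inside a single argument:

#ifdef BLA_
#define EXTRA_TYPES ,E
#else
#define EXTRA_TYPES
#endif
typedef TYPELIST(A, B, C, D EXTRA_TYPES, F)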
I know that this code is valid both in C and C++:
#define FOO 0
#define FOO 0
ISO/IEC 14882:2011
16.3 Macro replacement [cpp.replace]
2 An identifier currently defined as an object-like macro may be redefined by another #define preprocessing directive provided that the second definition is an object-like macro definition and the two replacement lists are identical, otherwise the program is ill-formed. Likewise, an identifier currently defined as a function-like macro may be redefined by another #define preprocessing directive provided that the second definition is a function-like macro definition that has the same number and spelling of parameters, and the two replacement lists are identical, otherwise the program is ill-formed.
But what about this code?
#define FOO 0
#define FOO FOO
The replacement lists are not identical at the start of preprocessing (they would only coincide after the first replacement had occurred).
This is not allowed in either C or C++. The replacement lists must be identical. What you're talking about (after the first pass) is the result of processing the replacement list[1], not the replacement list itself. Since the replacement list itself is not identical, the code is not allowed.
[1] Or at least what the result would be if the preprocessor worked a particular way that happens to be different from how it actually does.
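"Identical" means the same tokens with the same spelling; the amount of whitespace between tokens does not matter, but its presence or absence does. For illustration, comparing each line below against the first definition:

#define FOO 0
#define FOO 0      /* OK: identical replacement list */
#define FOO 0x0    /* ill-formed: same value, different spelling */
#define FOO FOO    /* ill-formed: the token FOO is not the token 0 */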