My question is: at what phase does the compiler perform name lookup? I think it may be when it performs semantic analysis, but I just want to make sure, since when I was searching about compiler analysis, name lookup wasn't mentioned in any of these phases (lexical, syntax, semantic).
In C++, name lookup generally needs to be done as part of syntax analysis (parsing), since whether a name is a type name or a template name affects the syntax. To the extent that things can be parsed independently of type names and template names, name lookup may be deferred until later, but that's generally an implementation detail.
It's semantic analysis in general, but in C++, the stages are all intertwined, so it's understandable if someone says something else.
In C/C++, you sometimes can't decide the semantic meaning of a syntactic element without knowing its kind (type, variable, etc.), so a compiler that builds its syntax tree with a generated parser (yacc, bison) cannot simply defer name lookup until after parsing.
Consider the following:
A * B;
This is either the declaration of a variable B of type A *, or an expression statement calling operator * on its arguments A and B.
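A minimal sketch of the two readings, with illustrative names:

struct A {};   // A names a type here,
A * B;         // so this line declares a variable B of type A*

// If A and B were variables instead, the same tokens would form an
// expression statement:
//   int A = 6, B = 7;
//   A * B;    // operator* applied to A and B, result discarded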
In C++, a famous parsing ambiguity happens with code like
x<T> a;
If T is a type, it is what it looks like (a declaration of a variable a of type x<T>); otherwise it is (x < T) > a (where < and > are comparison operators, not angle brackets).
In fact, we could make a change to render this unambiguous: make < and > non-associative. Then x < T > a, without parentheses, would not be a valid sentence anyway, even if x, T and a were all variable names.
How could one resolve this conflict in Menhir? At first glance it seems we just can't. Even with the aforementioned modification, we would need to look ahead an indeterminate number of tokens before we see another closing > and conclude that it was a template instantiation, or otherwise conclude that it was an expression. Is there any way in Menhir to implement such arbitrary lookahead?
Different languages (including the ones listed in your title) actually have very different rules for templates/generics (like what type of arguments there can be, where templates/generics can appear, when they are allowed to have an explicit argument list and what the syntax for template/type arguments on generic methods is), which strongly affect the options you have for parsing. In no language that I know is it true that the meaning of x<T> a; depends on whether T is a type.
So let's go through the languages C++, Java, Rust and C#:
In all four of those languages both types and functions/methods can be templates/generic. So we'll not only have to worry about an ambiguity with variable declarations, but also with function/method calls: is f<T>(x) a function/method call with an explicit template/type argument, or is it two relational operators with the last operand parenthesized? In all four languages template/generic functions/methods can be called without explicit template/type arguments when those can be inferred, but that inference isn't always possible, so just disallowing explicit template/type arguments for function/method calls is not an option.
Even if a language does not allow relational operators to be chained, we can get an ambiguity in expressions like this: f(a<b, c, d>(e)). Is this calling f with the three arguments a<b, c and d>(e), or with the single argument a<b, c, d>(e), calling a function/method named a with the type/template arguments b, c, d?
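To see the two readings concretely in C++ (where, as discussed below, the comparison reading wins whenever a is not a template), here is a small sketch with made-up names:

#include <iostream>

// a, b, c, d, e are plain variables, so f receives three arguments:
// (a < b), c, and (d > (e)).
void f(bool x, int y, bool z) {
    std::cout << x << ' ' << y << ' ' << z << '\n';
}

int main() {
    int a = 1, b = 2, c = 3, d = 4, e = 5;
    f(a<b, c, d>(e));   // prints: 1 3 0
}

Had a instead been a function template taking three template arguments, the very same tokens would have been a single-argument call.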
Now beyond this common foundation, most everything else is different between these languages:
Rust
In Rust the syntax for a variable declaration is let variableName: type = expr;, so x<T> a; couldn't possibly be a variable declaration because that doesn't match the syntax at all. In addition it's also not a valid expression statement (anymore) because comparison operators can't be chained (anymore).
So there's no ambiguity here or even a parsing difficulty. But what about function calls? For function calls, Rust avoided the ambiguity by simply choosing a different syntax to provide type arguments: instead of f<T>(x) the syntax is f::<T>(x). Since type arguments for function calls are optional when they can be inferred, this ugliness is thankfully not necessary very often.
So in summary: let a: x<T> = ...; is a variable declaration, f(a<b, c, d>(e)); calls f with three arguments and f(a::<b, c, d>(e)); calls a with three type arguments. Parsing is easy because all of these are sufficiently different to be distinguished with just one token of lookahead.
Java
In Java x<T> a; is in fact a valid variable declaration, but it is not a valid expression statement. The reason for that is that Java's grammar has a dedicated non-terminal for expressions that can appear as an expression statement and applications of relational operators (or any other non-assignment operators) are not matched by that non-terminal. Assignments are, but the left side of assignment expressions is similarly restricted. In fact, an identifier can only be the start of an expression statement if the next token is either a =, ., [ or (. So an identifier followed by a < can only be the start of a variable declaration, meaning we only need one token of lookahead to parse this.
Note that when accessing static members of a generic class, you can and must refer to the class without type arguments (i.e. FooClass.bar(); instead of FooClass<T>.bar()), so even in that case the class name would be followed by a ., not a <.
But what about generic method calls? Something like y = f<T>(x); could still run into the ambiguity because relational operators are of course allowed on the right side of =. Here Java chooses a similar solution as Rust by simply changing the syntax for generic method calls. Instead of object.f<T>(x) the syntax is object.<T>f(x) where the object. part is non-optional even if the object is this. So to call a generic method with an explicit type argument on the current object, you'd have to write this.<T>f(x);, but like in Rust the type argument can often be inferred, allowing you to just write f(x);.
So in summary: x<T> a; is a variable declaration, and there can't be expression statements that start with relational operations; in general expressions, this.<T>f(x) is a generic method call and f<T>(x) is a comparison (well, a type error, actually). Again, parsing is easy.
C#
C# has the same restrictions on expression statements as Java, so variable declarations aren't a problem. Unlike the previous two languages, though, it does allow f<T>(x) as the syntax for function calls. To avoid ambiguities, relational operators need to be parenthesized when used in a way that could also be a valid call of a generic function. So the expression f<T>(x) is a method call, and you'd need to add parentheses, f<(T>(x)) or (f<T)>(x), to make it a comparison (though those would actually be type errors, because you can't compare booleans with < or >, but the parser doesn't care about that). Similarly, f(a<b, c, d>(e)) calls a generic method named a with the type arguments b, c, d, whereas f((a<b), c, (d>e)) would involve two comparisons (and you can in fact leave out one of the two pairs of parentheses).
This leads to a nicer syntax for method calls with explicit type arguments than in the previous two languages, but parsing becomes kind of tricky. Considering that in the above example f(a<b, c, d>(e)) we can actually place an arbitrary number of arguments before d>(e) and a<b is a perfectly valid comparison if not followed by d>(e), we actually need an arbitrary amount of lookahead, backtracking or non-determinism to parse this.
So in summary: x<T> a; is a variable declaration, there is no expression statement that starts with a comparison, f<T>(x) is a method call expression, and (f<T)>(x) or f<(T>(x)) would be (ill-typed) comparisons. It is impossible to parse C# with Menhir.
C++
In C++, a < b; is a valid (albeit useless) expression statement, the syntax for template function calls with explicit template arguments is f<T>(x), and a<b>c can be a perfectly valid (even well-typed) comparison. So statements like a<b>c; and expressions like a<b>(c) are actually ambiguous without additional information. Further, template arguments in C++ don't have to be types. That is, Foo<42> x; or even Foo<c> x; where c is defined as const int c = 42;, for example, could be perfectly valid instantiations of the Foo template if Foo is defined to take an integer as a template argument. So that's a bummer.
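A minimal illustration of the non-type case:

template <int N>
struct Foo {};

const int c = 42;

Foo<42> x;   // integer constant as template argument
Foo<c>  y;   // also valid: c is a constant expression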
To resolve this ambiguity, the C++ grammar refers to the rule template-name instead of identifier in places where the name of a template is expected. So if we treated these as distinct entities, there'd be no ambiguity here. But of course template-name is defined simply as template-name: identifier in the grammar, so that seems pretty useless... except that the standard also says that template-name should only be matched when the given identifier names a template in the current scope. Similarly, it says that identifiers should only be interpreted as variable names when they don't refer to a template (or type name).
Note that, unlike the previous three languages, C++ requires all types and templates to be declared before they can be used. So when we see the statement a<b>c;, we know that it can only be a template instantiation if we've previously parsed a declaration for a template named a and it is currently in scope.
So, if we keep track of scopes while parsing, a hand-written parser can simply use if-statements to check whether the name a refers to a previously parsed template or not. In parser generators that allow semantic predicates, we can do the same thing. Doing this does not even require any lookahead or backtracking.
But what about parser generators like yacc or menhir that don't support semantic predicates? For these we can use something known as the lexer hack, meaning we make the lexer generate different tokens for type names, template names and ordinary identifiers. Then we have a nicely unambiguous grammar that we can feed our parser generator. Of course the trick is getting the lexer to actually do that. In order to accomplish that, we need to keep track of which templates and types are currently in scope using a symbol table and then access that symbol table from the lexer. We'll also need to tell the lexer when we're reading the name of a definition, like the x in int x;, because then we want to generate a regular identifier even if a template named x is currently in scope (the definition int x; would shadow the template until the variable goes out of scope).
This same approach is used to resolve the casting ambiguity (is (T)(x) a cast of x to type T or a function call of a function named T?) in C and C++.
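As a rough illustration, here is a minimal sketch of that feedback loop; the token kinds and table names are hypothetical, not taken from any real compiler:

#include <string>
#include <unordered_set>

// Token kinds the lexer can hand to the parser.
enum class TokenKind { Identifier, TypeName, TemplateName };

// Symbol tables the parser fills in as it reduces declarations.
std::unordered_set<std::string> types_in_scope;      // e.g. after 'struct Foo;'
std::unordered_set<std::string> templates_in_scope;  // e.g. after 'template<...> struct Bar;'

// Set by the parser while reading the name being *declared* (the x in
// 'int x;'), where even a shadowing identifier must stay a plain Identifier.
bool reading_declared_name = false;

// The lexer hack: classify an identifier by consulting the symbol tables.
TokenKind classify(const std::string& name) {
    if (reading_declared_name)           return TokenKind::Identifier;
    if (templates_in_scope.count(name))  return TokenKind::TemplateName;
    if (types_in_scope.count(name))      return TokenKind::TypeName;
    return TokenKind::Identifier;
}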
So in summary, foo<T> a; and foo<T>(x) are template instantiations if and only if foo is a template. Parsing's a bitch, but possible without arbitrary lookahead or backtracking, and even possible with Menhir when applying the lexer hack.
AFAIK C++'s template syntax is a well-known example of a real-world non-LR grammar. Strictly speaking, it is not LR(k) for any finite k... So C++ parsers are usually hand-written with hacks (like clang) or generated from a GLR grammar (LR with branching). So in theory it is impossible to implement a complete C++ parser in Menhir, which is LR.
However, even the same generics syntax can be parsed differently depending on context. If generic types and expressions involving comparison operators never appear in the same context, the grammar may still be LR-compatible. For example, consider the Rust syntax for variable declarations (for this part only):
let x : Vec<T> = ...
The : token indicates that a type, rather than an expression, follows, so in this case the grammar can be LR, or even LL (not verified).
So the final answer is, it depends. But for the C++ case it should be impossible to implement the syntax in Menhir.
In C++, the symbols '<' and '>' are used for comparisons as well as for signifying a template argument. Thus, the code snippet
[...] Foo < Bar > [...]
might be interpreted in either of the following two ways:
An object of type Foo with template argument Bar
Compare Foo to Bar, then compare the result to whatever comes next
How does the parser for a C++ compiler efficiently decide between those two possibilities?
If Foo is known to be a template name (e.g. a template <...> Foo ... declaration is in scope, or the compiler sees a template Foo sequence), then Foo < Bar cannot be a comparison. It must be the beginning of a template instantiation (or whatever Foo < Bar > is called this week).
If Foo is not a template name, then Foo < Bar is a comparison.
In most cases it is known what Foo is, because identifiers generally have to be declared before use, so there's no problem deciding one way or the other. There's one exception though: parsing template code. If Foo<Bar> is inside a template, and the meaning of Foo depends on a template parameter, then it is not known whether Foo is a template or not. The language standard directs the compiler to treat it as a non-template unless it is preceded by the keyword template.
The parser might implement this by feeding context back to the lexer. The lexer recognizes Foo as different types of tokens, depending on the context provided by the parser.
The important point to remember is that the C++ grammar is not context-free. I.e., when the parser sees Foo < Bar, it (in most cases) knows that Foo refers to a template definition (by looking it up in the symbol table), and thus that < cannot be a comparison.
There are difficult cases, when you literally have to guide the parser. For example, suppose you are writing a class template with a template member function, which you want to specialize explicitly. You might have to use syntax like:
a->template foo<int>();
(in some cases; see Calling template function within template class for details)
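To show the full context that forces this, here is a self-contained sketch with made-up names:

struct Holder {
    template <typename U>
    void foo() {}
};

template <typename T>
void call(T* a) {
    // a->foo depends on the template parameter T, so while parsing this
    // template the compiler must assume foo is a non-template; without the
    // 'template' keyword the line would be parsed as the comparison
    // (a->foo) < int, which is a syntax error.
    a->template foo<int>();
}

int main() {
    Holder h;
    call(&h);
}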
Also, comparisons inside non-type template arguments must be surrounded by parentheses, i.e.:
foo<(A > B)>
not
foo<A > B>
Non-static data member initializers bring more fun: http://open-std.org/JTC1/SC22/WG21/docs/cwg_active.html#325
C and C++ parsers are "context sensitive"; in other words, a given token or lexeme is not guaranteed to be distinct and have only one meaning - that depends on the context within which the token is used.
So, the parser part of the compiler will know (by understanding "where in the source it is") whether it is parsing some kind of type or some kind of comparison. (This is NOT simple to know, which is why reading the source of a competent C or C++ compiler is not entirely straightforward - there are lots of conditions and function calls checking "is this one of these? if so, do this; else do something else".)
The keyword template helps the compiler understand what is going on, but in most cases the compiler simply knows, because < doesn't make sense in the other reading. And if it doesn't make sense in EITHER form, then it's an error, and it's just a matter of trying to figure out what the programmer might have wanted. This is one of the reasons that a simple mistake, such as a missing } or template, can sometimes lead the entire parse astray and result in hundreds or thousands of errors (although sane compilers stop after a reasonable number, so as not to fill the entire universe with error messages).
Most of the answers here confuse determining the meaning of the symbol (what I call "name resolution") with parsing (defined narrowly as "can read the syntax of the program").
You can do these tasks separately.
What this means is that you can build a completely context-free parser for C++ (as my company, Semantic Designs, does), and leave the issue of deciding what a symbol means to an explicitly separate, following task.
Now, that task is driven by the possible syntax interpretations of the source code. In our parsers, these are captured as ambiguities in the parse.
What name resolution does is collect information about the declarations of names, and use that information to determine which of the ambiguous parses don't make sense, and simply drop those. What remains is a single valid parse, with a single valid interpretation.
The machinery needed to accomplish name resolution in practice is a big mess, but that's the C++ committee's fault, not the parser's or the name resolver's. The ambiguity removal with our tool is done automatically, which makes that part actually pretty nice; you wouldn't appreciate that without looking inside our tools, but we do, because it means a small engineering team was able to build it.
See an example of resolution of template-vs-less-than on C++'s most vexing parse, done by our parser.
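For readers who haven't met it, the classic shape of the most vexing parse (independent of any particular tool) looks like this:

struct Widget {
    Widget(int) {}
};

int main() {
    Widget w(int());    // vexing: declares a function w taking an int(*)()
    Widget w2((int())); // extra parentheses force the object reading
    Widget w3{int()};   // C++11 braces avoid the ambiguity entirely
}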
I've lately been using templates and macros, but I have to say I have barely found information about these important types. This is my superficial understanding:
typed/expr is something that must exist previously, but you can use .immediate. to overcome that.
untyped/stmt is something that doesn't have to be defined previously / one or more statements.
This is a very vague notion of the types. I'd like to have a better explanation of them, including which types should be used as return types.
The goal of these different parameter types is to give you several increasing levels of precision in specifying what the compiler should accept as a parameter to the macro.
Let's imagine a hypothetical macro that can solve mathematical equations. It will be used like this:
solve(x + 10 = 25) # figures out that the correct value for x is 15
Here, the macro just cares about the structure of the supplied AST tree. It doesn't require that the same tree is a valid expression in the current scope (i.e. that x is defined and so on). The macro just takes advantage of the Nim parser that already can decode most of the mathematical equations to turn them into easier to handle AST trees. That's what untyped parameters are for. They don't get semantically checked and you get the raw AST.
On the next step in the precision ladder are the typed parameters. They allow us to write a generic kind of macro that will accept any expression, as long as it has a proper meaning in the current scope (i.e. its type can be determined). Besides catching errors earlier, this also has the advantage that we can now work with the type of the expression within the macro body (using the macros.getType proc).
We can get even more precise by requiring an expression of a specific type (either a concrete type or a type class/concept). The macro will now be able to participate in overload resolution like a regular proc. It's important to understand that the macro will still receive an AST tree, as it will accept both expressions that can be evaluated at compile-time and expressions that can only be evaluated at run-time.
Finally, we can require that the macro receives a value of a specific type that is supplied at compile-time. The macro can work with this value to parametrise the code generation. This is the realm of static parameters. Within the body of the macro, they are no longer AST trees, but rather ordinary, well-typed values.
So far, we've only talked about expressions, but Nim's macros also accept and produce blocks and this is the second axis, which we can control. expr generally means a single expression, while stmt denotes a list of expressions (historically, its name comes from StatementList, which existed as a separate concept before expressions and statements were unified in Nim).
The distinction is most easily illustrated with the return types of templates. Consider the newException template from the system module:
template newException*(exceptn: typedesc, message: string): expr =
  ## creates an exception object of type ``exceptn`` and sets its ``msg`` field
  ## to `message`. Returns the new exception object.
  var
    e: ref exceptn
  new(e)
  e.msg = message
  e
Even though it takes several steps to construct an exception, by specifying expr as the return type of the template we tell the compiler that only the last expression will be considered the return value of the template. The rest of the statements will be inlined, but cleverly hidden from the calling code.
As another example, let's define a special assignment operator that can emulate the semantics of C/C++, allowing assignments within if statements:
template `:=` (a: untyped, b: typed): bool =
  var a = b
  a != nil

if f := open("foo"):
  ...
Specifying a concrete type has the same semantics as using expr. If we had used the default stmt return type instead, the compiler wouldn't have allowed us to pass a "list of expressions", because the if statement obviously expects a single expression.
.immediate. is a legacy from a long-gone past, when templates and macros didn't participate in overload resolution. When we first made them aware of the type system, plenty of code relied on what are now untyped parameters, but it was too hard to refactor the compiler to introduce them from the start, so instead we added the .immediate. pragma as a way to force the backward-compatible behaviour for the whole macro/template.
With typed/untyped, you have more granular control over the individual parameters of a macro, and the .immediate. pragma will be gradually phased out and deprecated.
I've been looking for a way to get an ordering on types at compile time. This would be useful, for example, for implementing (efficient) compile-time type-sets.
One obvious way to do it would be if there were a way to map every type to a unique integer. An answer to a previous question on that topic succinctly captures why that's difficult, and it seems like it would apply equally to any other way of trying to get an ordering:
the compiler has no way of knowing all compilation units and the linker has no concept of a type
Indeed, the challenge to the compiler would be considerable: it has to make sure that, in any invocation, for any source file, it returns the same integer for a given type / it returns the same ordering between any two given types, but at the same time, the universe of types is open and it has no knowledge of any types outside of the current file. A hard problem.
The idea I had is that types have names. And by the laws of C++, as far as I know, the fully qualified name of a type must be unique across the entire program; otherwise you will get errors or undefined behaviour of some sort or another.
If two types have the same name, then they are the same type.
If two types are the same type, then either they have the same name, or they are typedefs for one another. The compiler has full knowledge of typedefs.
Names are strings, and strings have an ordering. So if I have it right, you could define a globally consistent ordering on types based on their names. More specifically, the ordering between any two types would be the ordering between the names of the types with the typedefs fully resolved. (Having a type behave differently from its typedefs would be problematic.)
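To illustrate the typedef point with a standard trait (a one-line check, nothing compiler-specific):

#include <type_traits>

typedef int Alias;   // Alias is not a new type, just another name for int
static_assert(std::is_same<Alias, int>::value, "a typedef names the same type");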
Of course, standard C++ doesn't have any facilities for retrieving the names of types.
My questions are:
Do I have anything wrong? Are there any reasons this wouldn't, in theory, work?
Are there any compilers which give you access to the names of types (and ideally their typedef-resolved forms) at compile time as a language extension?
Is there any other way it could be done? Are there any compilers which do?
(I recognize that it's not polite to ask more than one question in the same question, but it seemed strange to post three separate questions with the same basic throat-clearing preceding them.)
the fully qualified name of a type must be unique across the entire program
But of course, that's only true if you consider separate anonymous namespaces in different translation units to have different names in some sense, and have some way to figure out what they are.
The only sense in which I'm aware they really do have different names is in mangled linker symbols; you may (depending on the compiler) be able to get that from type_info::name(), but it isn't guaranteed, is limited to types with RTTI, and anyway isn't declared constexpr, so you can't use the value at compile time.
The ordering produced by type_info::before() naturally has the same limitations.
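That said, here is a hedged sketch of a language-extension route: on GCC and Clang, __PRETTY_FUNCTION__ can be used inside a constexpr function (with C++17) and embeds the template argument's spelled-out name, which yields a compiler-specific compile-time name string and hence an ordering. This is not standard C++, and the resulting order differs between compilers:

#include <string_view>

// The returned string looks roughly like
// "std::string_view type_name() [T = int]" (exact form varies by compiler);
// the prefix is identical for every T, so comparing two of these strings
// reduces to comparing the embedded type names.
template <typename T>
constexpr std::string_view type_name() {
    return __PRETTY_FUNCTION__;
}

template <typename A, typename B>
constexpr bool type_less() {
    return type_name<A>() < type_name<B>();
}

static_assert(type_less<double, int>(), "\"double\" sorts before \"int\"");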
Out of interest, what are you trying to achieve with your compile-time type ordering?
Where can I find a list of the rules that a C++-compliant compiler must apply in order to perform name resolution (including overloading)?
I'd like something like a natural-language algorithm or flow chart.
The C++ standard of course has this set of rules, but it is built up as new language constructs are introduced, and the result is pretty hard to remember.
To make a long story short, I'd like to know the complete and detailed answer to the question "What does the compiler do when it sees the name 'A'?"
I know C++ is all "we do this when X, but not Y, if Z holds", so I'm asking whether it is possible to make it more linear.
EDIT: I'm working on a draft of this topic, something that may be improved collectively once posted. However, I'm very busy these days, and it may take time to have something publishable. If someone is interested, I'll promote the "personal notes in a raw txt file" to something better and post it.
Well, in broad strokes:
If the name is preceded by ::, as in ::A or X::A, then use qualified name lookup. First look up X if it exists (otherwise use the global namespace), then look inside it for A. If X is a class and A is not a direct member, then look in all the direct bases of X. If A is found in more than one base, fail.
Otherwise, if the name is used as a function call, such as A( X ), use argument-dependent lookup. This is the hard part. Look for A in the namespace the type of X was declared in, in the friends of X, and, if X is a template instantiation, likewise for all the arguments involved. Scopes associated only by typedef do not apply. Do this in addition to unqualified lookup (see the sketch after this list).
Otherwise, use unqualified lookup. This is the usual way variables are found. Start at the current scope and work outwards until the name is found. Note that this respects using namespace directives, which the other two cases do not.
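To make the qualified and argument-dependent cases concrete, here is a small sketch with made-up names:

#include <iostream>

namespace ns {
    struct Widget {};
    void frob(Widget) { std::cout << "ns::frob\n"; }
}

int main() {
    ns::Widget w;
    frob(w);       // unqualified call: ADL also searches ns, since w's type lives there
    ns::frob(w);   // qualified lookup: look up ns, then look up frob inside it
}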
Simply glancing at the Standard will reveal many exceptions and gotchas. For example, unqualified lookup is used to determine whether the name is used as a function call, as opposed to a cast-expression, before ADL is used to generate a list of potential overloads. Unqualified lookup doesn't look for objects in the enclosing scopes of nested or local classes, because such objects might not exist at the time of reference.
Apply common sense, and ask more specific questions when (as often it does) intuition fails.