What are "superset of valid C++ constructs" from Annex A ?
Also, any guide which will help you read this grammar in Annex A ?
Annex A quote (donot block quote the following as it messes up the angle brackets):
This summary of C++ syntax is intended to be an aid to comprehension. It is not an exact statement
of the language. In particular, the grammar described here accepts a superset of valid C++ constructs.
Disambiguation rules (6.8, 7.1, 10.2) must be applied to distinguish expressions from declarations. Further,
access control, ambiguity, and type rules must be used to weed out syntactically valid but meaningless
constructs.
Here is one short example that is valid according to the grammar, but not according to the full language rules:
int a[];
struct s;
void main(foo bar)
{
return (sizeof a) + sizeof (s);
}
The primary issue is that the grammar is expressed using context-free productions, while parsing C++ is highly context-dependent.
If S is a set of elements, a superset is another set X such that each element s in S is also an element of X, but there may be elements x in X that are not elements of S.
As an example, {1,2,3} is a set of 3 numbers. {1,2,3,4} is a superset of the first set -- it contains the elements in {1,2,3}, but also an extra element 4.
So the grammar listed in Annex A will match C++, but will also match things that are not valid C++.
It then goes on to list some issues you have to solve "outside of the grammar" -- the disambiguation rules, access control, ambiguity, and type rules.
The quote lightly implies that this is a complete set of things you must consider to distinguish valid C++ from things matched by the grammar, but it does not explicitly say so. I am uncertain whether this light implication is actually intended.
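For a flavour of the expression/declaration ambiguity that those disambiguation rules resolve, here is a minimal sketch (the type S and the variable names are purely illustrative):

struct S { S(int) {} };

void f(double a) {
    S w(int(a));    // the grammar allows two parses; the rules pick the declaration:
                    // w is a function taking int and returning S (the "most vexing parse")
    S y((int(a)));  // the extra parentheses rule out the declaration parse,
                    // so y is a variable of type S initialised from int(a)
}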
Related
According to the draft of the C++11 standard, page 421, Table 23 — CopyAssignable, the post-condition of an expression t = v for a CopyAssignable type is
t is equivalent to v, the value of v is unchanged
But I'm not sure what the term 'equivalent' means here. Does it mean t == v? Or something like all bytes being equal 'deeply', in the sense of a 'deep copy'?
As far as I can tell, there is no separate definition of the term "equivalent" that would apply in the standard. It may be interpreted as plain English. Here are a few definitions from dictionaries:
corresponding or virtually identical especially in effect or function
equal to or having the same effect as something
something that is the same amount, price, size, etc. as something else or has the same purpose as something else
Another interpretation is that your quote is the definition for equivalent in this context i.e. the meaning of equality for the type in question is defined by the assignment operator of that type.
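Reading it as plain English, the intended post-condition can be pictured with a small sketch; std::string here is just an arbitrary CopyAssignable type whose operator== happens to agree with the plain-English notion of equivalence:

#include <cassert>
#include <string>

int main() {
    std::string v = "hello";
    std::string t;
    t = v;                 // the expression "t = v" from Table 23
    assert(t == v);        // t is now "equivalent to" v
    assert(v == "hello");  // and the value of v is unchanged
}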
I'll try to explain my question with an example. Consider the following grammar production in the C++ standard:
literal:
integer-literal
character-literal
floating-point-literal
string-literal
boolean-literal
pointer-literal
user-defined-literal
Once the parser identifies a literal as an integer-literal, I always thought that the parser would just stop there. But I was told that this is not true. The parser will continue parsing to verify whether the literal could also be matched with a user-defined-literal, for example.
Is this correct?
Edit
I decided to include this edit as my interpretation of the Standard, in response to @rici's excellent answer below, although with a result that is the opposite of the one advocated by the OP.
One can read the following in [stmt.ambig]/1 and /3 (emphases are mine):
[stmt.ambig]/1
There is an ambiguity in the grammar involving
expression-statements and declarations: An expression-statement with a
function-style explicit type conversion as its leftmost subexpression
can be indistinguishable from a declaration where the first declarator
starts with a (. In those cases the statement is a declaration.
That is, this paragraph states how ambiguities in the grammar should be treated. There are several other ambiguities mentioned in the C++ Standard, but only three that I know of are ambiguities related to the grammar: [stmt.ambig]; [dcl.ambig.res]/1, which is a direct consequence of [stmt.ambig]; and [expr.unary.op]/10, which explicitly uses the term "ambiguity in the grammar".
[stmt.ambig]/3:
The disambiguation is purely syntactic; that is, the meaning of the
names occurring in such a statement, beyond whether they are
type-names or not, is not generally used in or changed by the
disambiguation. Class templates are instantiated as necessary to
determine if a qualified name is a type-name. Disambiguation
precedes parsing, and a statement disambiguated as a declaration may
be an ill-formed declaration. If, during parsing, a name in a template
parameter is bound differently than it would be bound during a trial
parse, the program is ill-formed. No diagnostic is required. [ Note:
This can occur only when the name is declared earlier in the
declaration. — end note ]
Well, if disambiguation precedes parsing, there is nothing that could prevent a decent compiler from optimizing parsing by simply treating the alternatives in each grammar production as if they were ordered. With that in mind, the first sentence in [lex.ext]/1 below could be eliminated.
[lex.ext]/1:
If a token matches both user-defined-literal and another literal kind,
it is treated as the latter. [ Example: 123_km is a
user-defined-literal, but 12LL is an integer-literal. — end example ]
The syntactic non-terminal preceding the ud-suffix in a
user-defined-literal is taken to be the longest sequence of characters
that could match that non-terminal.
Note also that this paragraph doesn't mention ambiguity in the grammar, which for me at least, is an indication that the ambiguity doesn't exist.
There is no implicit ordering of productions in the C++ presentation grammar.
There are ambiguities in that grammar, which are dealt with on a case-by-case basis by text in the standard. Note that the text of the standard is normative; the grammar does not stand alone, and it does not override the text. The two need to be read together.
The standard itself points out that the grammar, as summarised in Appendix A:
… is not an exact statement of the language. In particular, the grammar described here accepts a superset of valid C++ constructs. Disambiguation rules (8.9, 9.2, 11.8) must be applied to distinguish expressions from declarations. Further, access control, ambiguity, and type rules must be used to weed out syntactically valid but meaningless constructs. (Appendix A, paragraph 1)
That's not a complete list of the ambiguities resolved in the text of the standard, because there are also rules about lexical ambiguities. (See below.)
Almost all of these ambiguity resolution clauses are of the form "if both P and Q apply, choose Q", and thus would be unnecessary were there an implicit ordering of grammar alternatives, since the correct parse could be guaranteed simply by putting the alternatives in the correct order. So the fact that the standard feels the need to dedicate a number of clauses to ambiguity resolution is prima facie evidence that alternatives are not implicitly ordered. [Note 1]
The C++ standard does not explicitly name the grammar formalism being used, but it does credit the antecedents which allows us to construct a historical argument. The formalism used by the C++ standard was inherited from the C standard and the description in Kernighan & Ritchie's original book on the (then newly-minted) C language. K&R wrote their grammar using the Yacc parser generator, and the original C grammar is basically a Yacc grammar file. Yacc uses the LALR(1) algorithm to construct a parser from a context-free grammar (CFG), and its grammar files are a concrete representation of that grammar written in what has come to be known as BNF (although there is some historical ambiguity about what the letters in BNF actually stand for). BNF does not have any implicit ordering of rules, and the formalism does not allow any way to write an explicit ordering or any other disambiguation rule. (A BNF grammar must be unambiguous in order to be mechanically parsed; if it is ambiguous, the LALR(1) algorithm will fail to generate a parser.)
Yacc does go a bit outside of the box. It has some automatic disambiguation rules, and one mechanism to provide explicit disambiguation (operator precedence). But Yacc's disambiguation has nothing to do with the ordering of alternatives either.
In short, ordered alternatives were not really a feature of any grammar formalism until 2002 when Bryan Ford proposed packrat parsing, and subsequently formalised a class of grammars which he called "Parsing Expression Grammars" (PEGs). The PEG algorithm does implicitly order alternatives, by insisting that the right-hand alternative in an alternation only be attempted if the left-hand alternative failed to match. For this reason, the PEG alternation operator (or "ordered alternation" operator) is generally written as / instead of |, avoiding confusion with the traditional unordered alternation syntax.
A key feature of the PEG algorithm is that it is always deterministic. Every PEG grammar can be deterministically applied to a source text without ambiguity. (That doesn't mean that the grammar will give you the parse you wanted, of course. It just means that it will never give you a list of parses and let you select the one you want.) So grammars written in PEG cannot be accompanied by textual rules which disambiguate, because there are no ambiguities.
I mention this because the existence and popularity of PEG have to some extent altered the perception of the meaning of the alternation operator. Before PEG, we probably wouldn't be having this kind of discussion at all. But using PEG as a guide to interpreting the C++ grammar formalism is ahistoric and unjustifiable; the roots of the C++ grammar go back to at least 1978, at least a quarter of a century before PEG.
Lexical ambiguities, and the clauses which resolve them
[lex.pptoken] (§5.4) paragraph 3 lays down the fundamental rules for token recognition, which is a little more complicated than the traditional "maximal munch" principle which always recognises the longest possible token starting immediately after the previously recognised token. It includes two exceptions:
The sequence <:: is treated as starting with the token < rather than the longer token <: unless it is the start of <::> (treated as <:, :>) or <::: (treated as <:, ::). That might all make more sense if you mentally replace <: with [ and :> with ], which is the intended syntactic equivalence.
A raw string literal is terminated by the first matching delimiter sequence. This rule could in theory be written in a context-free grammar only because there is an explicit limit on the length of termination sequences, which means that the theoretical CFG would have on the order of 88^16 rules, one for each possible delimiter sequence. In practice, this rule cannot be written as such, and it is described textually, along with the 16-character limit on the length of the d-char-sequence.
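A hedged sketch of both exceptions (the names and the delimiter xyz are arbitrary):

#include <string>
#include <vector>

// Exception 1: without the special case, "<::" would tokenise starting with the digraph
// "<:" (an alternative spelling of '['), and this perfectly reasonable declaration would
// become nonsense; the exception keeps it as '<' followed by '::'.
std::vector<::std::string> names;

// Exception 2: a raw string literal ends only at its matching delimiter, ")xyz" here,
// so the )" in the middle does not terminate it.
std::string s = R"xyz(contains )" without ending the literal)xyz";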
[lex.header] (§5.8) avoids the ambiguity between header-names and string-literals (as well as certain token sequences starting with <) by requiring header-name to only be recognised in certain contexts, including an #include preprocessing directive. (The section does not actually say that the string-literal should not be recognised, but I think that the implication is clear.)
[lex.ext] (§5.13.8) paragraph 1 resolves the ambiguities involved with user-defined-literals, by requiring that:
the user-defined-literal rule is only recognised if the token cannot be recognised as some other kind of literal, and
the decomposition of the user-defined-literal into a literal followed by a ud-suffix follows the longest-token rule, described above.
Note that this rule is not really a tokenisation rule, because it is applied after the source text has been divided into tokens. Tokenisation is done in translation phase 3, after which the tokens are passed through the preprocessing directives (phase 4), rewriting of escape sequences and UCNs (phase 5), and concatenation of string literals (phase 6). Each token which emerges from phase 6 must then be reinterpreted as a token in the syntactic grammar, and it is at that point that literal tokens will be classified. So it's not necessary that §5.13.8 clarify what the extent of the token being categorised is; the extent is already known and the converted token must use exactly all of the characters in the preprocessing token. Thus it's quite different from the other ambiguities in this list, but I left it here because it is so present in the original question and in the various comment threads.
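As a hedged sketch of the rule being discussed (the _km literal operator is invented purely so that the standard's own 123_km example compiles):

// Hypothetical literal operator; the name _km is only for illustration.
constexpr unsigned long long operator"" _km(unsigned long long n) { return n * 1000; }

auto d = 123_km;  // only user-defined-literal matches, so the operator above is called
auto n = 12LL;    // LL is an ordinary integer-suffix, so [lex.ext]/1 treats this as an
                  // integer-literal rather than as 12 followed by a ud-suffix LL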
Notes:
Curiously, in almost all of the ambiguity resolution clauses, the preferred alternative is the one which appears later in the list of alternatives. For example, §8.9 explicitly prefers declarations to expressions, but the grammar for statement lists expression-statement long before declaration-statement. Having said that, correctly parsing C++ requires a more sophisticated algorithm than just "try to parse a declaration and if that fails, then try to parse as an expression," because there are programs which must be parsed as a declaration with a syntax error (see the example at [stmt.ambig]/3).
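A hedged sketch of that kind of program (this is not the standard's own example, but it shows the same effect):

struct D { D(int) {} };  // no default constructor

void k(int x) {
    D(x);  // as an expression this would be fine (construct a temporary D from x),
           // but it can also be parsed as the declaration "D x;" with redundant
           // parentheses, so it is taken as a declaration and is ill-formed:
           // it redeclares the parameter x and leaves a D without an initializer
}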
No ordering is either implied or necessary.
All seven kinds of literal are distinct. No token that meets the definition of any of them can meet the definition of any other. For example, 42 is an integer-literal and cannot be a floating-point-literal.
How a compiler determines what a token is is an implementation detail that the standard doesn't address, and doesn't need to.
If there were an ambiguity, so that for example the same token could be either an integer-literal or a user-defined-literal, either the language would have to have a rule to disambiguate it, or it would be a bug in the grammar.
UPDATE: There is in fact such an ambiguity. As discussed in comments, 42ULL satisfies the syntax of either an integer-literal or a user-defined-literal. This ambiguity is resolved, not by the ordering of the grammar productions, but by an explicit statement:
If a token matches both user-defined-literal and another literal kind, it is treated as the latter.
The section on syntactic notation in the standard only says this about what it means:
In the syntax notation used in this document, syntactic categories are indicated by italic type, and literal words and characters in constant width type. Alternatives are listed on separate lines except in a few cases where a long set of alternatives is marked by the phrase “one of”. If the text of an alternative is too long to fit on a line, the text is continued on subsequent lines indented from the first one. An optional terminal or non-terminal symbol is indicated by the subscript “opt”, so
{ expression_opt }
indicates an optional expression enclosed in braces.
Note that the statement considers the terms in grammars to be "alternatives", rather than a list or even an ordered list. There is no statement about ordering of the "alternatives" at all.
As such, this strongly suggests that there is no ordering at all.
Indeed, the presence throughout the standard of specific rules to disambiguate cases where multiple terms match also suggests that the alternatives are not written as a prioritized list. If the alternatives were some kind of ordered list, this statement would be redundant:
If a token matches both user-defined-literal and another literal kind, it is treated as the latter.
Apart from the obvious answer ("because the guys designed it that way"), why do C/C++ have types which consist of multiple identifiers, e.g.
long long (int)
short int
signed char
I do have some basic knowledge of parsing and have used the flex/bison tools to make a few parsers, and I think that this brings much more complexity to parsing type names. And looking at the C++ grammar in the standard, everything about types really is complicated.
I know that C++ (and C too, I believe) does not specify much about the sizes of the fundamental data types, so types like int_8, uint_8, etc. would not work (although C++11 gave us fixed-width integers).
So, why did the developers of the standard agree on multi-word type identifiers, when they could have made int, uint and similar?
Speaking in terms of C, why did the developers of the standard agree on multi-word identifiers? It's because that was what the language had at the time of standardisation.
The mandate for the original standard was not to create a new language but to codify existing practice. As per the C89 standard itself:
The Committee evaluated many proposals for additions, deletions, and changes to the base documents during its deliberations. A concerted effort was made to codify existing practice wherever unambiguous and consistent practice could be identified. However, where no consistent practice could be identified, the Committee worked to establish clear rules that were consistent with the overall flavor of the language.
And, from the C99 rationale document:
The original X3J11 charter clearly mandated codifying common existing practice, and the C89 Committee held fast to precedent wherever that was clear and unambiguous. The vast majority of the language defined by C89 was precisely the same as defined in Appendix A of the first edition of The C Programming Language by Brian Kernighan and Dennis Ritchie, and as was implemented in almost all C translators of the time.
Beyond that, each iteration of the standard has valued backward compatibility highly so that code doesn't break. From that same rationale document:
Existing code is important, existing implementations are not. A large body of C code exists of considerable commercial value. Every attempt has been made to ensure that the bulk of this code will be acceptable to any implementation conforming to the Standard. The C89 Committee did not want to force most programmers to modify their C programs just to have them accepted by a conforming translator.
So, while later versions of the standard gave us things like stdint.h with its fixed width integral types, taking away the standard ones like int and long would be a gross violation of that guideline.
In terms of C++, it's almost certainly a holdover from the earliest days of that language where it was put forward as "C plus classes". In fact, the very early cfront C++ compiler was so named because it took C++ source code and turned that into C before giving it to a suitable C compiler (i.e., a front end for C, hence cfront).
This would have allowed the original author Bjarne to minimise the workload in delivering C++ since the bulk of it was already provided by the C compiler itself.
In terms of parsing a language, it's certainly more difficult to have to process unsigned long int x (a) than it is to handle ulong x.
But, given that the compiler already has to handle a large number of optional "modifiers/specifiers" for a variable (e.g., const char * const x), handling a few others is par for the course.
(a) Or int long unsigned x or long unsigned x or any of the other type specifiers that end up becoming the singular unsigned long int type. See here for more details.
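A hedged illustration of that footnote; every one of these declarations names the same type:

unsigned long int a = 0;
long unsigned int b = 0;
int long unsigned c = 0;  // the type specifiers may appear in any order
unsigned long     d = 0;  // and the "int" may be omitted entirely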
Adding new reserved words to a language will break any code which happens to use such words as identifiers unless those words are of a form which is reserved for future expansion (e.g. contain two leading underscores, or start with an underscore and a capital letter, etc.)
By contrast, if some particular sequence of reserved words has no defined meaning in any existing implementation, there can be no existing code which uses that sequence of reserved words, and thus no danger of breaking existing code by attaching a new meaning to it.
In previous versions of the standard (C++03) the order of evaluation of parameters to a function call was unspecified.
Has this been changed in subsequent versions of the standard (C++11 or C++14)?
That is, can we rely on a specific ordering (left to right) or not?
No, this has not changed, but there is a very recent proposal to change it: N4228: Refining Expression Evaluation Order for Idiomatic C++. It was part of the Pre-Urbana mailing that came out this October. The introduction says (emphasis mine going forward):
Expression evaluation order is a recurring discussion topic in the
C++ community. In a nutshell, given an expression such as f(a, b, c),
the order in which the sub-expressions f , a , b , c are evaluated
is left unspecified by the standard. If any two of these
sub-expressions happen to modify the same object without intervening
sequence points, the behavior of the program is undefined. For
instance, the expression f(i++, i) where i is an integer variable
leads to undefined behavior
It proposes:
We propose to revise C++ evaluation rules to support decades-old
idiomatic constructs and programming practices. A simple solution
would be to require that every expression has a well-defined
evaluation order. That suggestion has traditionally met resistance for
various reasons. Rather, this proposal suggests a more targeted fix:
Postfix expressions are evaluated from left to right. This includes
function calls and member selection expressions.
Assignment expressions are evaluated from right to left. This includes compound assignments.
Operands to shift operators are evaluated from left to right
Update
Herb Sutter recently put out a poll on order of evaluation looking for some feedback from the community on what result we would expect from the following code:
std::vector<int> v = { 0, 0 };
int i = 0;
v[i++] = i++;
std::cout << v[0] << v[1] << std::endl;
This would seem to indicate the committee is looking at the topic of order of evaluation seriously but as we can see from the discussion this is controversial.
No, it is still unspecified in C++11. Leaving the order unspecified lets compilers perform micro-optimizations that can improve the quality of the generated code, and the order varies from compiler to compiler. Try printf with increment operations on different compilers.
An expression like int i = foo(3) + bar(0); has an unspecified evaluation order: neither function is guaranteed to be called first.
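A small sketch of that point; which line prints first is up to the compiler, and different compilers (or even different optimization levels) may differ:

#include <cstdio>

int foo(int x) { std::puts("foo ran"); return x; }
int bar(int x) { std::puts("bar ran"); return x; }

int main() {
    int i = foo(3) + bar(0);  // foo and bar may be called in either order
    std::printf("%d\n", i);
}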
I've posted this on the D newsgroup some months ago, but for some reason, the answer never really convinced me, so I thought I'd ask it here.
The grammar of D is apparently context-free.
The grammar of C++, however, isn't (even without macros). (Please read this carefully!)
Now granted, I know nothing (officially) about compilers, lexers, and parsers. All I know is from what I've learned on the web.
And here is what (I believe) I have understood regarding context, in not-so-technical lingo:
The grammar of a language is context-free if and only if you can always understand the meaning (though not necessarily the exact behavior) of a given piece of its code without needing to "look" anywhere else.
Or, in even less rigor:
The grammar cannot be context-free if I can't tell the type of an expression just by looking at it.
So, for example, C++ fails the context-free test because the meaning of confusing<sizeof(x)>::q < 3 > (2) depends on the value of q.
So far, so good.
Now my question is: Can the same thing be said of D?
In D, hashtables can be created through a Value[Key] declaration, for example
int[string] peoplesAges; // Maps names to ages
Static arrays can be defined in a similar syntax:
int[3] ages; // Array of 3 elements
And templates can be used to make them confusing:
template Test1(T...)
{
alias int[T[0]] Test;
}
template Test2(U...)
{
alias int[U] Test2; // LGTM
}
Test1!(5) foo;
Test1!(int) bar;
Test2!(int) baz; // Guess what? It's invalid code.
This means that I cannot tell the meaning of T[0] or U just by looking at it (i.e. it could be a number, it could be a data type, or it could be a tuple of God-knows-what). I can't even tell if the expression is grammatically valid (since int[U] certainly isn't -- you can't have a hashtable with tuples as keys or values).
Any parsing tree that I attempt to make for Test would fail to make any sense (since it would need to know whether the node contains a data type versus a literal or an identifier) unless it delays the result until the value of T is known (making it context-dependent).
Given this, is D actually context-free, or am I misunderstanding the concept?
Why/why not?
Update:
I just thought I'd comment: It's really interesting to see the answers, since:
Some answers claim that C++ and D can't be context-free
Some answers claim that C++ and D are both context-free
Some answers support the claim that C++ is context-sensitive while D isn't
No one has yet claimed that C++ is context-free while D is context-sensitive :-)
I can't tell if I'm learning or getting more confused, but either way, I'm kind of glad I asked this... thanks for taking the time to answer, everyone!
Being context-free is first a property of generative grammars. It means that what a non-terminal can generate does not depend on the context in which the non-terminal appears (in a non-context-free generative grammar, the very notion of "string generated by a given non-terminal" is in general difficult to define). This doesn't prevent the same string of symbols from being generated by two non-terminals (so the same string of symbols can appear in two different contexts with a different meaning), and it has nothing to do with type checking.
It is common to extend the context-free definition from grammars to languages by stating that a language is context-free if there is at least one context-free grammar describing it.
In practice, no programming language is context-free, because things like "a variable must be declared before it is used" can't be checked by a context-free grammar (they can be checked by some other kinds of grammars). This isn't bad; in practice the rules to be checked are divided in two: those you want to check with the grammar and those you check in a semantic pass (and this division also allows for better error reporting and recovery, so you sometimes want to accept more in the grammar than would otherwise be possible in order to give your users better diagnostics).
What people mean by stating that C++ isn't context-free is that doing this division isn't possible in a convenient way (with "convenient" including as criteria "follows nearly the official language description" and "my parser generator tool supports that kind of division"). Allowing the grammar to be ambiguous, and the ambiguity to be resolved by the semantic check, is a relatively easy way to make the cut for C++ and follow the C++ standard quite closely, but it is inconvenient when you are relying on tools which don't allow ambiguous grammars; when you have such tools, it is convenient.
I don't know enough about D to know whether there is a convenient cut of the language rules into a context-free grammar with semantic checks, but what you show is far from proving that there isn't one.
The property of being context free is a very formal concept; you can find a definition here. Note that it applies to grammars: a language is said to be context free if there is at least one context free grammar that recognizes it. Note that there may be other grammars, possibly non context free, that recognize the same language.
Basically what it means is that the definition of a language element cannot change according to which elements surround it. By language elements I mean concepts like expressions and identifiers and not specific instances of these concepts inside programs, like a + b or count.
Let's try and build a concrete example. Consider this simple COBOL statement:
01 my-field PICTURE 9.9 VALUE 9.9.
Here I'm defining a field, i.e. a variable, which is dimensioned to hold one integral digit, the decimal point, and one decimal digit, with initial value 9.9 . A very incomplete grammar for this could be:
field-declaration ::= level-number identifier 'PICTURE' expression 'VALUE' expression '.'
expression ::= digit+ ( '.' digit+ )
Unfortunately the valid expressions that can follow PICTURE are not the same valid expressions that can follow VALUE. I could rewrite the second production in my grammar as follows:
'PICTURE' expression ::= digit+ ( '.' digit+ ) | 'A'+ | 'X'+
'VALUE' expression ::= digit+ ( '.' digit+ )
This would make my grammar context-sensitive, because expression would be a different thing according to whether it was found after 'PICTURE' or after 'VALUE'. However, as it has been pointed out, this doesn't say anything about the underlying language. A better alternative would be:
field-declaration ::= level-number identifier 'PICTURE' format 'VALUE' expression '.'
format ::= digit+ ( '.' digit+ ) | 'A'+ | 'X'+
expression ::= digit+ ( '.' digit+ )
which is context-free.
As you can see this is very different from your understanding. Consider:
a = b + c;
There is very little you can say about this statement without looking up the declarations of a, b and c, in any of the languages for which this is a valid statement; however, this by itself doesn't imply that any of those languages is not context-free. Probably what is confusing you is the fact that context freedom is different from ambiguity. This is a simplified version of your C++ example:
a < b > (c)
This is ambiguous in that by looking at it alone you cannot tell whether this is a function template call or a boolean expression. The previous example, on the other hand, is not ambiguous; from the point of view of grammars it can only be interpreted as:
identifier assignment identifier binary-operator identifier semi-colon
In some cases you can resolve ambiguities by introducing context sensitivity at the grammar level. I don't think this is the case with the ambiguous example above: in this case you cannot eliminate the ambiguity without knowing whether a is a template or not. Note that when such information is not available, for instance when it depends on a specific template specialization, the language provides ways to resolve ambiguities: that is why you sometimes have to use typename to refer to certain types within templates or to use template when you call member function templates.
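A hedged sketch of those two disambiguation keywords (the member names value_type and get are just illustrative assumptions about T):

template <typename T>
void f(T t) {
    typename T::value_type x{};  // "typename" promises that T::value_type names a type,
                                 // so the parser need not know what T really is
    t.template get<int>();       // "template" promises that get is a member template,
                                 // so the following '<' opens a template-argument-list
                                 // instead of being the less-than operator
}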
There are already a lot of good answers, but since you say you are not familiar with grammars, parsers, compilers, etc., let me demonstrate this with an example.
First, the concept of grammars are quite intuitive. Imagine a set of rules:
S -> a T
T -> b G t
T -> Y d
b G -> a Y b
Y -> c
Y -> lambda (nothing)
And imagine you start with S. The capital letters are non-terminals and the small letters are terminals. This means that if you get a sentence of all terminals, you can say the grammar generated that sentence as a "word" in the language. Imagine such substitutions with the above grammar (the non-terminal being replaced at each step is marked between asterisks):
*S* -> a *T* -> a *b G* t -> a a *Y* b t -> a a b t
So, I could create aabt with this grammar.
Ok, back to main line.
Let us assume a simple language. You have numbers, two types (int and string) and variables. You can do multiplication on integers and addition on strings but not the other way around.
First thing you need is a lexer. That is usually a regular grammar (or, equivalently, a DFA, or equivalently a regular expression) that matches the program tokens. It is common to express them as regular expressions. In our example:
(I'm making these syntaxes up)
number: [1-9][0-9]* // One digit from 1 to 9, followed by any number
// of digits from 0-9
variable: [a-zA-Z_][a-zA-Z_0-9]* // You get the idea. First a-z or A-Z or _
// then as many a-z or A-Z or _ or 0-9
// this is similar to C
int: 'i' 'n' 't'
string: 's' 't' 'r' 'i' 'n' 'g'
equal: '='
plus: '+'
multiply: '*'
whitespace: (' ' or '\n' or '\t' or '\r')* // to ignore this type of token
So, now you got a regular grammar, tokenizing your input, but it understands nothing of the structure.
Then you need a parser. The parser is usually a context-free grammar. A context-free grammar means that in the grammar you only have single non-terminals on the left side of grammar rules. In the example at the beginning of this answer, the rule
b G -> a Y b
makes the grammar context-sensitive because on the left you have b G and not just G. What does this mean?
Well, when you write a grammar, each of the nonterminals have a meaning. Let's write a context-free grammar for our example (| means or. As if writing many rules in the same line):
program -> statement program | lambda
statement -> declaration | executable
declaration -> int variable | string variable
executable -> variable equal expression
expression -> integer_type | string_type
integer_type -> variable multiply variable |
variable multiply number |
number multiply variable |
number multiply number
string_type -> variable plus variable
Now this grammar can accept this code:
x = 1*y
int x
string y
z = x+y
Grammatically, this code is correct. So, let's get back to what context-free means. As you can see in the example above, when you expand executable, you generate one statement of the form variable = operand operator operand without any consideration of which part of the code you are at. Whether you are at the very beginning or in the middle, whether the variables are defined or not, or whether the types match, you don't know and you don't care.
Next, you need semantics. This is where context-sensitive grammars come into play. First, let me tell you that in reality no one actually writes a context-sensitive grammar (because parsing it is too difficult), but rather bits of code that the parser calls while parsing the input (called action routines, although this is not the only way). Formally, however, you can define all you need. For example, to make sure you define a variable before using it, instead of this
executable -> variable equal expression
you have to have something like:
declaration some_code executable -> declaration some_code variable equal expression
though it would need to be more complex, to make sure the variable in the declaration matches the one being used in the calculation.
Anyway, I just wanted to give you the idea. So, all these things are context-sensitive:
Type checking
Number of arguments to function
default value to function
if member exists in obj in code: obj.member
Almost anything that's not like: missing ; or }
I hope you got an idea of what the differences are (if you didn't, I'd be more than happy to explain).
So in summary:
Lexer uses a regular grammar to tokenize input
Parser uses a context-free grammar to make sure the program is in correct structure
Semantic analyzer uses a context-sensitive grammar to do type-checking, parameter matching etc etc
It is not necessarily always like that though. This just shows you how each level needs to get more powerful to be able to do more stuff. However, each of the mentioned compiler levels could in fact be more powerful.
For example, one language that I don't remember used parentheses both for array subscripting and for function calls, and therefore required the parser to look up the type of the variable (context-sensitive information) to determine which rule (function_call or array_subscription) to take.
If you design a language whose lexer has regular expressions that overlap, then you would also need to look at the context to determine which type of token you are matching.
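A C++-flavoured illustration of that kind of overlap (purely as a sketch):

#include <vector>

// Before C++11 the ">>" below had to be written "> >", because the lexer's greedy rule
// matched it as the right-shift operator; C++11 added a context-dependent rule so that,
// inside a template-argument-list, it closes two argument lists instead.
std::vector<std::vector<int>> matrix;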
To get to your question! With the example you mentioned, it is clear that the C++ grammar is not context-free. The language D I have absolutely no idea about, but you should be able to reason about it now. Think of it this way: in a context-free grammar, a non-terminal can expand without taking into consideration anything BUT the structure of the language. Similar to what you said, it expands without "looking" anywhere else.
A familiar example would be natural languages. For example in English, you say:
sentence -> subject verb object clause
clause -> .... | lambda
Well, sentence and clause are nonterminals here. With this grammar you can create these sentences:
I go there because I want to
or
I jump you that I is air
As you can see, the second one has the correct structure but is meaningless. As far as a context-free grammar is concerned, the meaning doesn't matter. It just expands verb to whatever verb, without "looking" at the rest of the sentence.
So if you think D has to at some point check how something was defined elsewhere, just to say the program is structurally correct, then its grammar is not context-free. If you isolate any part of the code and it still can say that it is structurally correct, then it is context-free.
There is a construct in D's lexer:
string ::= q" Delim1 Chars newline Delim2 "
where Delim1 and Delim2 are matching identifiers, and Chars does not contain newline Delim2.
This construct is context sensitive, therefore D's lexer grammar is context sensitive.
It's been a few years since I've worked with D's grammar much, so I can't remember all the trouble spots off the top of my head, or even if any of them make D's parser grammar context sensitive, but I believe they do not. From recall, I would say D's grammar is context free, not LL(k) for any k, and it has an obnoxious amount of ambiguity.
The grammar cannot be context-free if I can't tell the type of
an expression just by looking at it.
No, that's flat out wrong. The grammar cannot be context-free if you can't tell if it is an expression just by looking at it and the parser's current state (am I in a function, in a namespace, etc).
The type of an expression, however, is a semantic meaning, not syntactic, and the parser and the grammar do not give a penny about types or semantic validity or whether or not you can have tuples as values or keys in hashmaps, or if you defined that identifier before using it.
The grammar doesn't care what it means, or if that makes sense. It only cares about what it is.
To answer the question of if a programming language is context free you must first decide where to draw the line between syntax and semantics. As an extreme example, it is illegal in C for a program to use the value of some kinds of integers after they have been allowed to overflow. Clearly this can't be checked at compile time, let alone parse time:
#include <limits.h>
#include <stdio.h>
void FnThatMightNotReturn(void);  // hypothetical helpers, declared so the example compiles
int Test(int);
void Fn() {
    int i = INT_MAX;
    FnThatMightNotReturn();  // halting problem?
    i++;                     // i == INT_MAX here, so this overflows (undefined behaviour) if it is ever reached
    if (Test(i)) printf("Weeee!\n");
}
As a less extreme example that others have pointed out, declaration-before-use rules can't be enforced in a context-free syntax, so if you wish to keep your syntax pass context-free, that check must be deferred to the next pass.
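A minimal sketch of that split (valid as C or C++; the names are arbitrary):

void g() {
    n = 1;  // a context-free grammar parses this statement without complaint;
    int n;  // whether n was declared before use is left to a later, context-aware pass
}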
As a practical definition, I would start with the question of: Can you correctly and unambiguously determine the parse tree of all correct programs using a context free grammar and, for all incorrect programs (that the language requires be rejected), either reject them as syntactically invalid or produce a parse tree that the later passes can identify as invalid and reject?
Given that the most correct spec for the D syntax is a parser (IIRC an LL parser) I strongly suspect that it is in fact context free by the definition I suggested.
Note: the above says nothing about what grammar the language documentation or a given parser uses, only if a context free grammar exists. Also, the only full documentation on the D language is the source code of the compiler DMD.
These answers are making my head hurt.
First of all, the complication with low-level languages, in figuring out whether they are context-free or not, is that the language you write in is often processed in many steps.
In C++ (order may be off, but that shouldn't invalidate my point):
it has to process macros and other preprocessor stuff
it has to interpret templates
it finally interprets your code.
Because the first step can change the context of the second step and the second step can change the context of the third step, the language YOU write in (including all of these steps) is context sensitive.
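A tiny, hedged sketch of the first of those steps changing what the later steps see (BEGIN and END are made-up macros):

// After the preprocessor runs, the parser never sees BEGIN or END at all;
// the earlier phase has rewritten the text that the later phase works on.
#define BEGIN {
#define END   }

void f() BEGIN
    int x = 0;
END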
The reason people will try to defend a language (stating that it is context-free) is that the only exceptions that add context are the traceable preprocessor statements and template calls. You only have to follow two restricted exceptions to the rules to pretend the language is context-free.
Most languages are context-sensitive overall, but most languages only have these minor exceptions to being context-free.