Why does Ada have both if expressions and if statements (and likewise case expressions and case statements)?

Taken from Introduction to Ada—If expressions:
Ada's if expressions are similar to if statements. However, there are a few differences that stem from the fact that it is an expression:
All branches' expressions must be of the same type
It must be surrounded by parentheses if the surrounding expression does not already contain them
An else branch is mandatory unless the expression following then has a Boolean value. In that case an else branch is optional and, if not present, defaults to else True.
I do not understand the need to have two different ways of constructing code with the if keyword. What is the reasoning behind this?
There are also case expressions and case statements. Why is this?

I think this is best answered by quoting the Ada 2012 Rationale Chapter 3.1:
One of the key areas identified by the WG9 guidance document [1] as
needing attention was improving the ability to write and enforce
contracts. These were discussed in detail in the previous chapter.
When defining the new aspects for preconditions, postconditions, type
invariants and subtype predicates it became clear that without more
flexible forms of expressions, many functions would need to be
introduced because in all cases the aspect was given by an expression.
However, declaring a function and thus giving the detail of the
condition, invariant or predicate in the function body makes the
detail of the contract rather remote for the human reader. Information
hiding is usually a good thing but in this case, it just introduces
obscurity. Four forms are introduced, namely, if expressions, case
expressions, quantified expressions and expression functions. Together
they give Ada some of the flexible feel of a functional language.
In addition, if statements and case statements often do nothing more than assign different values to the same variable in each branch:
if Foo > 10 then
   Bar := 1;
else
   Bar := 2;
end if;
In this case, an if expression may increase readability and more clearly state in the code what's going on:
Bar := (if Foo > 10 then 1 else 2);
We can now see that there's no longer a need for the maintainer of the code to read a whole if statement in order to see that only a single variable is updated.
Same goes for case expressions, which can also reduce the need for nesting if expressions.
Also, I can throw the question back to you: Why do C-based languages have the ternary operator ?: in addition to if statements?
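In those languages the same single-assignment pattern collapses to one line; the Ada assignment above would read roughly like this in C or C++:
Bar = (Foo > 10) ? 1 : 2;   /* ternary equivalent of the Ada if expression above */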

Egilhh already covered the main reason, but there are sometimes other useful reasons to use expressions. Sometimes you make packages where only one or two methods are needed and they are the only reason to make a package body. You can use expressions to make expression functions, which allow you to define the operations in the spec file.
Additionally, if you ever end up with some complex variant record combinations, expressions can sometimes be used to set up default values for them in instances where you normally would not be able to do so as cleanly. Consider the following example:
with Ada.Text_IO; use Ada.Text_IO;

procedure Hello is

   type Binary_Type is (On, Off);

   type Inner (Binary : Binary_Type := Off) is record
      case Binary is
         when On =>
            Value : Integer := 0;
         when Off =>
            null;
      end case;
   end record;

   type Outer (Some_Flag : Boolean) is record
      Other : Integer := 32;
      Thing : Inner := (if Some_Flag then
                           (Binary => Off)
                        else
                           (Binary => On, Value => 23));
   end record;

begin
   Put_Line ("Hello, world!");
end Hello;
I had something come up with a more complex setup that was meant to map to a complex messaging interface at the hardware level. It's nice to have defaults whenever possible. Now I could have used a case inside of Outer, but then I would have had to come up with two separately named versions of the message field for each case, which really isn't optimal when you want your code to map to an ICD. Again, I could have used a function to initialize it as well, but as noted in the other poster's answer, that isn't always a good way to go.

Another place that outlines the motivation for adding conditional expressions to Ada is the ARG document AI05-0147-1, which explains the rationale and gives some examples of use.
An example of a place where I find them quite useful is in processing command line parameters, for the case when a default value is used if the parameter is not specified on the command line. Generally, you'd want to declare such values as constants in your program. Conditional expressions make it easier to do that.
with Ada.Command_Line; use Ada;

procedure Main is
   N : constant Positive :=
     (if Command_Line.Argument_Count = 0 then 2_000_000
      else Positive'Value (Command_Line.Argument (1)));
   ...
Otherwise, without conditional expressions, in order to achieve the same effect you'd need to declare a function, which I find more difficult to read:
with Ada.Command_Line; use Ada;

procedure Main is

   function Get_N return Positive is
   begin
      if Command_Line.Argument_Count = 0 then
         return 2_000_000;
      else
         return Positive'Value (Command_Line.Argument (1));
      end if;
   end Get_N;

   N : constant Positive := Get_N;
   ...

The if expression in Ada feels and works a lot like a statement using the ternary operator in the C-based languages. I took the liberty of copying some code from learn.adacore.com that introduces the if expression:
with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;

procedure Check_Positive is
   N : Integer;
begin
   Put ("Enter an integer value: ");
   Get (N);
   Put (N, 0);

   declare
      S : constant String :=
        (if N > 0 then " is a positive number"
         else " is not a positive number");
   begin
      Put_Line (S);
   end;
end Check_Positive;
And I translated it to a C-based language - in this case, Java. I believe the main point to notice is that both languages, although syntactically different, are effectively doing the same thing: testing a condition and assigning one of two values to a variable, all within one statement. I realize this is an oversimplification for most here on Stack Overflow; my goal is to help the beginner understand the basic concept with introductory examples. Cheers.
import java.util.Scanner;

public class IfExpression {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter an integer value: ");
        var N = in.nextInt();
        System.out.print(N);
        var S = N > 0 ? " is a positive number" : " is not a positive number";
        System.out.println(S);
        in.close();
    }
}

Related

Generating branch instructions from an AST for a conditional expression

I am trying to write a compiler for a domain-specific language, targeting a stack-machine based VM that is NOT a JVM.
I have already generated a parser for my language, and can readily produce an AST which I can easily walk. I also have had no problem converting many of the statements of my language into the appropriate instructions for this VM, but am facing an obstacle when it comes to the matter of handling the generation of appropriate branching instructions when complex conditionals are encountered, especially when they are combined with (possibly nested) 'and'-like or 'or' like operations which should use short-circuiting branching as applicable.
I am not asking anyone to write this for me. I know that I have not begun to describe my problem in sufficient detail for that. What I am asking for is pointers to useful material that can get me past this hurdle I am facing. As I said, I am already past the point of converting about 90% of the statements in my language into applicable instructions, but it is the handling of conditionals and generating the appropriate flow control instructions that has got me stumped. Much of the info that I have been able to find so far on generating code from an AST only seems to deal with the generation of code corresponding to simple imperative-like statements, but the handling of conditionals and flow control appears to be much scarcer.
Other than the short-circuiting/lazy-evaluation mechanism for 'and' and 'or' like constructs that I have described, I am not concerned with handling any other optimizations.
Every conditional control flow can be modelled as a flow chart (or flow graph) in which the two branches of the conditional have different targets. Given that boolean operators short-circuit, they are control flow elements rather than simple expressions, and they need to be modelled as such.
One way to think about this is to rephrase boolean operators as instances of the ternary conditional operator. So, for example, A and B becomes A ? B : false and A or B becomes A ? true : B [Note 1]. Note that every control flow diagram has precisely two output points.
To combine boolean expressions, just substitute into the diagram. For example, here's A AND (B OR C)
You implement NOT by simply swapping the meaning of the two out-flows.
If the eventual use of the boolean expression is some kind of conditional, such as an if statement or a conditional loop, you can use the control flow as is. If the boolean expression is to be saved into a variable or otherwise used as a value, you need to fill in the two outflows with code to create the relevant constant, usually a true or false boolean constant, or (in C-like languages) a 1 or 0.
Notes:
Another way to write this equivalence is A and B ⇒ A ? B : A; A or B ⇒ A ? A : B, but that is less useful for a control flow view, and also clouds the fact that the intent is to only evaluate each expression once. This form (modified to reuse the initial computation of A) is commonly used in languages with multiple "falsey" values (like Python).
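Putting the above together, here is a rough sketch of such a generator (my own illustration with a hypothetical Node type and label helpers; since I don't know your VM's instruction set, the emitted "instructions" are just printed as text):

#include <cstdio>
#include <string>

// Sketch of short-circuit branch generation: every boolean node is compiled
// against a pair of jump targets (true target, false target).
struct Node {
    enum Kind { And, Or, Not, Leaf } kind;
    const Node* lhs;
    const Node* rhs;
    std::string name;   // used only by Leaf nodes
};

static int label_counter = 0;
static std::string new_label() { return "L" + std::to_string(label_counter++); }

void gen_cond(const Node* n, const std::string& t, const std::string& f) {
    switch (n->kind) {
    case Node::And: {                     // A and B  ==  A ? B : false
        std::string rhs = new_label();
        gen_cond(n->lhs, rhs, f);         // if A is false, skip B entirely
        std::printf("%s:\n", rhs.c_str());
        gen_cond(n->rhs, t, f);
        break;
    }
    case Node::Or: {                      // A or B  ==  A ? true : B
        std::string rhs = new_label();
        gen_cond(n->lhs, t, rhs);         // if A is true, skip B entirely
        std::printf("%s:\n", rhs.c_str());
        gen_cond(n->rhs, t, f);
        break;
    }
    case Node::Not:                       // NOT just swaps the two out-flows
        gen_cond(n->lhs, f, t);
        break;
    case Node::Leaf:                      // plain operand: evaluate and branch
        std::printf("  load %s\n", n->name.c_str());
        std::printf("  branch_if_false %s\n", f.c_str());
        std::printf("  jump %s\n", t.c_str());
        break;
    }
}

int main() {
    Node a{Node::Leaf, nullptr, nullptr, "A"};
    Node b{Node::Leaf, nullptr, nullptr, "B"};
    Node c{Node::Leaf, nullptr, nullptr, "C"};
    Node b_or_c{Node::Or, &b, &c, ""};
    Node expr{Node::And, &a, &b_or_c, ""};
    gen_cond(&expr, "T_target", "F_target");   // emits branches for A and (B or C)
}

The Leaf case is the only place that actually evaluates an operand; the And/Or/Not cases only decide where control flows next, which is exactly the short-circuit behaviour you want.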

How can I convert if to switch? [closed]

Is there a way to convert C++ ifs to switches? Both nested and non-nested ifs:
if ( a == 1 ) { b = 1; } /*else*/
if ( a == 2 ) { b = 7; } /*else*/
if ( a == 3 ) { b = 3; }
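For reference, the switch I would hope a tool could produce from that would be roughly:
switch ( a ) {
case 1: b = 1; break;
case 2: b = 7; break;
case 3: b = 3; break;
}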
It should, of course, detect if the conversion is valid or not.
This isn't a micro-optimization, it's for clarity purposes.
In general, you ought to be able to do this with a program transformation tool that can process C++.
Specifically, our DMS Software Reengineering Toolkit can do this, using source-to-source rewrites. I think these are close to doing the job:
default domain Cpp;
rule fold_if_to_switch_initial(s: stmt_sequence, e: expression,
k1: integer_literal, k2: integer_literal,
a1: block, a2: block):
stmt_sequence->stmt_sequence
= "\s
if (\e==\k1) \a1
if (\e==\k2) \a2 "
-> "\s
switch (\e) {
case \k1: \a1
break;
case \k2: \a2
break;
}" if no_side_effects(e) /\ no_impact_on(a1,e);
rule fold_if_to_switch_expand(s: stmt_sequence, e: expression, c: casebody,
k: integer_literal, a:action)
stmt_sequence->stmt_sequence
= "\s
switch (\e) { \c }
if (\e==\k) \a "
-> "\s
switch (\e) {
\c
case \k: \a
break;
}" if no_side_effects(e) /\ no_impact_on(c,e);
One has to handle the cases where the if controls a statement rather than a block, cheesily accomplished by this:
rule blockize_if_statements(e: expression, s: statement):
statement->statement
= "if (\e) \s" -> "if (\e) { \s } ";
DMS works by using a full language parser (yes, it really has a full C++ parser option with preprocessor) to process the source code and the rewrite rules. It isn't fooled by whitespace or comments, because it is operating on the parse trees. The text inside "..." is C++ text (that's what the 'default domain Cpp' declaration says) with pattern variables \x; text outside the "..." is DMS's rule meta syntax, not C++. Patterns match the trees and bind pattern variables to subtrees; rule right-hand sides are instantiated using the pattern variable bindings. After applying the transformations, it regenerates the (revised) source code from the revised trees.
OP insists, "It should, of course, detect if the conversion is valid or not." This is the point of the conditions at the end of the rules, e.g., 'if no_side_effects ... no_impact_on', which are intended to check that evaluating the expression doesn't change anything, and that the actions don't affect the value of the expression.
Implementing these semantic checks is difficult, as one must take into account C++'s incredibly complex semantics. A tool doing this must know at least as much as the compiler (e.g., names, types, properties of declarations, reads and writes), and then has to be able to reason about the consequences. DMS at present is able to implement only some of these (we are working on full control and data flow analysis for C++11 now). I would not expect other refactoring tools to do this part well, either, because of the complex semantics; I think it unlikely that situation will improve because of the effort it takes to reason about C++.
In OP's case (and this occurs fairly often when using a tool like DMS), one can assert that in fact these semantic predicates don't get violated (then you can replace them with "true" and thus avoid implementing them) or that you only care about special cases (e.g., the action is an assignment to a variable other than one in the expression), at which point the check becomes much simpler, or you can simulate the check by narrowing the acceptable syntax:
rule fold_if_to_switch_expand(s: stmt_sequence, v: identifier, c: casebody,
k: integer_literal, v2: identifier, k2: integer_literal)
stmt_sequence->stmt_sequence
= "\s
switch (\v) { \c }
if (\v==\k) \v2=\k2; "
-> "\s
switch (\v) {
\c
case \k: \v2=\k2;
break;
}" if no_side_effects(c);
Having said all this, if OP has only a few hundred known places that need this improvement, he'll probably get it done faster by biting the bullet and using his editor. If he doesn't know the locations, and/or there are many more instances, DMS would get the job done faster and likely with many fewer mistakes.
(We've done essentially this same refactoring for IBM Enterprise COBOL using DMS, without the deep semantic checks).
There are other program transformation tools with the right kind of machinery (pattern matching, rewriting) that can do this in theory. None of them to my knowledge make any effective attempt to handle C++.
I'd use a text editor.
Step 1:
put MARK before your block of code, with possible leading tabs
Step 2:
Put ENDMARK at the end of your block of code, with possible leading tabs
Step 3:
Load your file in vi of some kind. Run something like this:
/^^T*MARK/
:.,/^^T*ENDMARK/ s/^\(^T*\)if/\1IIFF/g
:%s/IIFF *([^)]*) *{ *\([^}]*\)};? *$/TEST{\1}TSET {\2} break;/
:%s/TEST{\([^=}]*\)==\([^}]*\)}TSET/COND{\1}case \2:/g
?^^T*MARK?
/COND{/
:. s/COND{\([^}]*\)}/switch(\1) {^M^T/
:%s/COND{\([^}]*\)}/^T/g
:%s/^\(^T*\)ENDMARK/\1} \/\/switch/g
:%s/^^TMARK//g
where ^T is tab and ^M is a return character (which you may not be able to type directly, depending on what version of vi/vim you are using)
Code is written off the top of my head, and not tested. But I've done things like this before. Also, files containing TEST{whatever}TSET and COND{whatever} and MARK and ENDMARK and IIFF type constructs will be horribly mangled.
This does not check validity, it just converts things in exactly your format into a switch statement.
Limited validity checking could be done with some fancy-ass (for vi) scripting, making sure that the left-hand side of the equality expression is the same for all lines. It also doesn't handle else, but that is easily rectified.
Then, before checking in, you do a visual sweep of the modified code using a good comparison tool.

C++ refactoring: conditional expansion and block elimination

I'm in the process of refactoring a very large amount of code, mostly C++, to remove a number of temporary configuration checks which have become permanently set to given values. So for example, I would have the following code:
#include <value1.h>
#include <value2.h>
#include <value3.h>
...
if ( value1() )
{
// do something
}
bool b = value2();
if ( b && anotherCondition )
{
// do more stuff
}
if ( value3() < 10 )
{
// more stuff again
}
where the calls to value return either a bool or an int. Since I know the values that these calls always return, I've done some regex substitution to expand the calls to their normal values:
// where:
// value1() == true
// value2() == false
// value3() == 4
// TODO: Remove expanded config (value1)
if ( true )
{
// do something
}
// TODO: Remove expanded config (value2)
bool b = false;
if ( b && anotherCondition )
{
// do more stuff
}
// TODO: Remove expanded config (value3)
if ( 4 < 10 )
{
// more stuff again
}
Note that although the values are fixed, they are not set at compile time but are read from shared memory so the compiler is not currently optimising anything away behind the scenes.
Although the resultant code looks a bit goofy, this regex approach achieves a lot of what I want: it's simple to apply, it removes dependence on the calls without changing the behaviour of the code, and it's also likely that the compiler may then optimise a lot of it out, knowing that a block can never be called or that a check will always return true. It also makes it reasonably easy (especially when diffing against version control) to see what has changed and to take the final step of cleaning it up, so the code above eventually looks as follows:
// do something
// DONT do more stuff (b being false always prevented this)
// more stuff again
The trouble is that I have hundreds (possibly thousands) of changes to make to get from the second, correct but goofy, stage to get to the final cleaned code.
I wondered if anyone knew of a refactoring tool which might handle this or of any techniques I could apply. The main problem is that the C++ syntax makes full expansion or elimination quite difficult to achieve and there are many permutations to the code above. I feel I almost need a compiler to deal with the variation of syntax that I would need to cover.
I know there have been similar questions but I can't find any requirement quite like this and also wondered if any tools or procedures had emerged since they were asked?
It sounds like you have what I call "zombie code"... dead in practice, but still live as far as the compiler is concerned. This is a pretty common issue with most systems of organized runtime configuration variables: eventually some configuration variables arrive at a permanent fixed state, yet are reevaluated at runtime repeatedly.
The cure isn't regex, as you have noted, because regex doesn't parse C++ code reliably.
What you need is a program transformation system. This is a tool that really parses source code, and can apply a set of code-to-code rewriting rules to the parse tree, and can regenerate source text from the changed tree.
I understand that Clang has some capability here; it can parse C++ and build a tree, but it does not have source-to-source transformation capability. You can simulate that capability by writing AST-to-AST transformations but that's a lot more inconvenient IMHO. I believe it can regenerate C++ code but I don't know if it will preserve comments or preprocessor directives.
Our DMS Software Reengineering Toolkit with its C++(11) front end can (and has been used to) carry out massive transformations on C++ source code, and has source-to-source transformations. AFAIK, it is the only production tool that can do this. What you need is a set of transformations that represent your knowledge of the final state of the configuration variables of interest, and some straightforward code simplification rules. The following DMS rules are close to what you likely want:
rule fix_value1():expression->expression
"value1()" -> "true";
rule fix_value2():expression->expression
"value2()" -> "false";
rule fix_value3():expression->expression
"value3()" -> "4";
rule simplify_boolean_and_true(r:relation):condition->condition
"r && true" -> "r";
rule simplify_boolean_or_true(r:relation):condition->condition
"r || true" -> "true";
rule simplify_boolean_and_false(r:relation):condition->condition
"r && false" -> "false";
...
rule simplify_boolean_not_true(r:relation):condition->condition
"!true" -> "false";
...
rule simplify_if_then_false(s:statement): statement->statement
" if (false) \s" -> ";";
rule simplify_if_then_true(s:statement): statement->statement
" if (true) \s" -> "\s";
rule simplify_if_then_else_false(s1:statement, s2:statement): statement->statement
" if (false) \s1 else \s2" -> "\s2";
rule simplify_if_then_else_true(s1:statement, s2: statement): statement->statement
" if (true) \s1 else \s2" -> "\s2";
You also need rules to simplify ("fold") constant expressions involving arithmetic, and rules to handle switch on expressions that are now constant. To see what DMS rules look like for integer constant folding see Algebra as a DMS domain.
Unlike regexes, DMS rewrite rules cannot "mismatch" code; they represent the corresponding ASTs and it is those ASTs that are matched. Because it is AST matching, they have no problems with whitespace, line breaks or comments. You might think they could have trouble with order of operands ('what if "false && x" is encountered?'); they do not, as the grammar rules for && and || are marked in the DMS C++ parser as associative and commutative and the matching process automatically takes that into account.
What these rules cannot do by themselves is value (in your case, constant) propagation across assignments. For this you need flow analysis so that you can trace such assignments ("reaching definitions"). Obviously, if you don't have such assignments, or have very few, you can hand-patch those. If you do, you'll need the flow analysis; alas, DMS's C++ front end isn't quite there, but we are working on it; we have control flow analysis in place. (DMS's C front end has full flow analysis.)
(EDIT February 2015: Now does full C++14; flow analysis within functions/methods).
We actually applied this technique to 1.5M SLOC application of mixed C and C++ code from IBM Tivoli almost a decade ago with excellent success; we didn't need the flow analysis :-}
You say:
Note that although the values are reasonably fixed, they are not set at compile time but are read from shared memory so the compiler is not currently optimising anything away behind the scenes.
Constant-folding the values by hand doesn't make a lot of sense unless they are completely fixed. If your compiler provides constexpr you could use that, or you could substitute in preprocessor macros like this:
#define value1() true
#define value2() false
#define value3() 4
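Or, if C++11 or later is available, a constexpr sketch along the same lines (this assumes you can replace the declarations living in the value*.h headers):
// Assumed replacements for the value*.h declarations, if the values really are permanent.
constexpr bool value1() { return true; }
constexpr bool value2() { return false; }
constexpr int  value3() { return 4; }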
The optimizer would take care of you from there. Without seeing examples of exactly what's in your <valueX.h> headers or knowing how your process of getting these values from shared memory is working, I'll just throw out that it could be useful to rename the existing valueX() functions and do a runtime check in case they change again in the future:
// call this at startup to make sure our agreed on values haven't changed
void check_values() {
assert(value1() == get_value1_from_shared_memory());
assert(value2() == get_value2_from_shared_memory());
assert(value3() == get_value3_from_shared_memory());
}

Is D's grammar really context-free?

I've posted this on the D newsgroup some months ago, but for some reason, the answer never really convinced me, so I thought I'd ask it here.
The grammar of D is apparently context-free.
The grammar of C++, however, isn't (even without macros). (Please read this carefully!)
Now granted, I know nothing (officially) about compilers, lexers, and parsers. All I know is from what I've learned on the web.
And here is what (I believe) I have understood regarding context, in not-so-technical lingo:
The grammar of a language is context-free if and only if you can always understand the meaning (though not necessarily the exact behavior) of a given piece of its code without needing to "look" anywhere else.
Or, in even less rigor:
The grammar cannot be context-free if I can't tell the type of an expression just by looking at it.
So, for example, C++ fails the context-free test because the meaning of confusing<sizeof(x)>::q < 3 > (2) depends on the value of q.
So far, so good.
Now my question is: Can the same thing be said of D?
In D, hashtables can be created through a Value[Key] declaration, for example
int[string] peoplesAges; // Maps names to ages
Static arrays can be defined in a similar syntax:
int[3] ages; // Array of 3 elements
And templates can be used to make them confusing:
template Test1(T...)
{
alias int[T[0]] Test;
}
template Test2(U...)
{
alias int[U] Test2; // LGTM
}
Test1!(5) foo;
Test1!(int) bar;
Test2!(int) baz; // Guess what? It's invalid code.
This means that I cannot tell the meaning of T[0] or U just by looking at it (i.e. it could be a number, it could be a data type, or it could be a tuple of God-knows-what). I can't even tell if the expression is grammatically valid (since int[U] certainly isn't -- you can't have a hashtable with tuples as keys or values).
Any parsing tree that I attempt to make for Test would fail to make any sense (since it would need to know whether the node contains a data type versus a literal or an identifier) unless it delays the result until the value of T is known (making it context-dependent).
Given this, is D actually context-free, or am I misunderstanding the concept?
Why/why not?
Update:
I just thought I'd comment: It's really interesting to see the answers, since:
Some answers claim that C++ and D can't be context-free
Some answers claim that C++ and D are both context-free
Some answers support the claim that C++ is context-sensitive while D isn't
No one has yet claimed that C++ is context-free while D is context-sensitive :-)
I can't tell if I'm learning or getting more confused, but either way, I'm kind of glad I asked this... thanks for taking the time to answer, everyone!
Being context free is first a property of generative grammars. It means that what a non-terminal can generate will not depend on the context in which the non-terminal appears (in a non-context-free generative grammar, the very notion of "string generated by a given non-terminal" is in general difficult to define). This doesn't prevent the same string of symbols from being generated by two non-terminals (so the same strings of symbols can appear in two different contexts with a different meaning) and has nothing to do with type checking.
It is common to extend the context-free definition from grammars to languages by stating that a language is context-free if there is at least one context-free grammar describing it.
In practice, no programming language is context-free because things like "a variable must be declared before it is used" can't be checked by a context-free grammar (they can be checked by some other kinds of grammars). This isn't bad, in practice the rules to be checked are divided in two: those you want to check with the grammar and those you check in a semantic pass (and this division also allows for better error reporting and recovery, so you sometimes want to accept more in the grammar than what would be possible in order to give your users better diagnostics).
What people mean by stating that C++ isn't context-free is that doing this division isn't possible in a convenient way (with "convenient" including as criteria "follows nearly the official language description" and "my parser generator tool supports that kind of division"; allowing the grammar to be ambiguous and the ambiguity to be resolved by the semantic check is a relatively easy way to do the cut for C++ and follows quite well the C++ standard, but it is inconvenient when you are relying on tools which don't allow ambiguous grammars; when you have such tools, it is convenient).
I don't know enough about D to know if there is or not a convenient cut of the language rules in a context-free grammar with semantic checks, but what you show is far from proving that there isn't one.
The property of being context free is a very formal concept; you can find a definition here. Note that it applies to grammars: a language is said to be context free if there is at least one context free grammar that recognizes it. Note that there may be other grammars, possibly non context free, that recognize the same language.
Basically what it means is that the definition of a language element cannot change according to which elements surround it. By language elements I mean concepts like expressions and identifiers and not specific instances of these concepts inside programs, like a + b or count.
Let's try and build a concrete example. Consider this simple COBOL statement:
01 my-field PICTURE 9.9 VALUE 9.9.
Here I'm defining a field, i.e. a variable, which is dimensioned to hold one integral digit, the decimal point, and one decimal digit, with initial value 9.9 . A very incomplete grammar for this could be:
field-declaration ::= level-number identifier 'PICTURE' expression 'VALUE' expression '.'
expression ::= digit+ ( '.' digit+ )
Unfortunately the valid expressions that can follow PICTURE are not the same valid expressions that can follow VALUE. I could rewrite the second production in my grammar as follows:
'PICTURE' expression ::= digit+ ( '.' digit+ ) | 'A'+ | 'X'+
'VALUE' expression ::= digit+ ( '.' digit+ )
This would make my grammar context-sensitive, because expression would be a different thing according to whether it was found after 'PICTURE' or after 'VALUE'. However, as it has been pointed out, this doesn't say anything about the underlying language. A better alternative would be:
field-declaration ::= level-number identifier 'PICTURE' format 'VALUE' expression '.'
format ::= digit+ ( '.' digit+ ) | 'A'+ | 'X'+
expression ::= digit+ ( '.' digit+ )
which is context-free.
As you can see this is very different from your understanding. Consider:
a = b + c;
There is very little you can say about this statement without looking up the declarations of a, b and c, in any of the languages for which this is a valid statement; however, this by itself doesn't imply that any of those languages is not context-free. Probably what is confusing you is the fact that context freedom is different from ambiguity. This is a simplified version of your C++ example:
a < b > (c)
This is ambiguous in that by looking at it alone you cannot tell whether this is a function template call or a boolean expression. The previous example, on the other hand, is not ambiguous; from the point of view of grammars it can only be interpreted as:
identifier assignment identifier binary-operator identifier semi-colon
In some cases you can resolve ambiguities by introducing context sensitivity at the grammar level. I don't think this is the case with the ambiguous example above: in this case you cannot eliminate the ambiguity without knowing whether a is a template or not. Note that when such information is not available, for instance when it depends on a specific template specialization, the language provides ways to resolve ambiguities: that is why you sometimes have to use typename to refer to certain types within templates or to use template when you call member function templates.
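For example (a small sketch of my own, not code from the question):

#include <vector>

// 'typename' tells the parser that the dependent name Container::value_type
// is a type, which it cannot otherwise know while parsing the template.
template <typename Container>
void first_of(const Container& c) {
    typename Container::value_type first = *c.begin();
    (void)first;
}

struct Widget {
    template <int N> int get() const { return N; }
};

// 'template' tells the parser that get is a member template, so 'get<3>()'
// is a call with a template argument list rather than two comparisons.
template <typename T>
int call_get(const T& t) {
    return t.template get<3>();
}

int main() {
    std::vector<int> v{1, 2, 3};
    first_of(v);
    return call_get(Widget{});   // returns 3
}

In both functions the disambiguating keyword exists purely so the parser can build a tree without knowing what Container or T will eventually be.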
There are already a lot of good answers, but since you are uninformed about grammars, parsers and compilers etc, let me demonstrate this by an example.
First, the concept of a grammar is quite intuitive. Imagine a set of rules:
S -> a T
T -> b G t
T -> Y d
b G -> a Y b
Y -> c
Y -> lambda (nothing)
And imagine you start with S. The capital letters are non-terminals and the small letters are terminals. This means that if you get a sentence of all terminals, you can say the grammar generated that sentence as a "word" in the language. Imagine such substitutions with the above grammar (the phrase between asterisks is the one being replaced):
*S* -> a *T* -> a *b G* t -> a a *Y* b t -> a a b t
So, I could create aabt with this grammar.
Ok, back to main line.
Let us assume a simple language. You have numbers, two types (int and string) and variables. You can do multiplication on integers and addition on strings but not the other way around.
First thing you need, is a lexer. That is usually a regular grammar (or equal to it, a DFA, or equally a regular expression) that matches the program tokens. It is common to express them in regular expressions. In our example:
(I'm making these syntaxes up)
number: [1-9][0-9]* // One digit from 1 to 9, followed by any number
// of digits from 0-9
variable: [a-zA-Z_][a-zA-Z_0-9]* // You get the idea. First a-z or A-Z or _
// then as many a-z or A-Z or _ or 0-9
// this is similar to C
int: 'i' 'n' 't'
string: 's' 't' 'r' 'i' 'n' 'g'
equal: '='
plus: '+'
multiply: '*'
whitespace: (' ' or '\n' or '\t' or '\r')* // to ignore this type of token
So, now you got a regular grammar, tokenizing your input, but it understands nothing of the structure.
Then you need a parser. The parser, is usually a context free grammar. A context free grammar means, in the grammar you only have single nonterminals on the left side of grammar rules. In the example in the beginning of this answer, the rule
b G -> a Y b
makes the grammar context-sensitive because on the left you have b G and not just G. What does this mean?
Well, when you write a grammar, each of the nonterminals have a meaning. Let's write a context-free grammar for our example (| means or. As if writing many rules in the same line):
program -> statement program | lambda
statement -> declaration | executable
declaration -> int variable | string variable
executable -> variable equal expression
expression -> integer_type | string_type
integer_type -> variable multiply variable |
variable multiply number |
number multiply variable |
number multiply number
string_type -> variable plus variable
Now this grammar can accept this code:
x = 1*y
int x
string y
z = x+y
Grammatically, this code is correct. So, let's get back to what context-free means. As you can see in the example above, when you expand executable, you generate one statement of the form variable = operand operator operand without any consideration which part of code you are at. Whether the very beginning or middle, whether the variables are defined or not, or whether the types match, you don't know and you don't care.
Next, you need semantics. This is where context-sensitive grammars come into play. First, let me tell you that in reality, no one actually writes a context-sensitive grammar (because parsing it is too difficult), but rather bits of code that the parser calls when parsing the input (called action routines, although this is not the only way). Formally, however, you can define all you need. For example, to make sure you define a variable before using it, instead of this
executable -> variable equal expression
you have to have something like:
declaration some_code executable -> declaration some_code variable equal expression
It gets more complex, though, to make sure the variable in the declaration matches the one being calculated.
Anyway, I just wanted to give you the idea. So, all these things are context-sensitive:
Type checking
Number of arguments to function
default value to function
if member exists in obj in code: obj.member
Almost anything that's not like: missing ; or }
I hope you got an idea of what the differences are (if you didn't, I'd be more than happy to explain).
So in summary:
Lexer uses a regular grammar to tokenize input
Parser uses a context-free grammar to make sure the program is in correct structure
Semantic analyzer uses a context-sensitive grammar to do type-checking, parameter matching etc etc
It is not necessarily always like that though. This just shows you how each level needs to get more powerful to be able to do more stuff. However, each of the mentioned compiler levels could in fact be more powerful.
For example, one language that I don't remember, used array subscription and function call both with parentheses and therefore it required the parser to go look up the type (context-sensitive related stuff) of the variable and determine which rule (function_call or array_substitution) to take.
If you design a language with lexer that has regular expressions that overlap, then you would need to also look up the context to determine which type of token you are matching.
To get to your question! With the example you mentioned, it is clear that the c++ grammar is not context-free. The language D, I have absolutely no idea, but you should be able to reason about it now. Think of it this way: In a context free grammar, a nonterminal can expand without taking into consideration anything, BUT the structure of the language. Similar to what you said, it expands, without "looking" anywhere else.
A familiar example would be natural languages. For example in English, you say:
sentence -> subject verb object clause
clause -> .... | lambda
Well, sentence and clause are nonterminals here. With this grammar you can create these sentences:
I go there because I want to
or
I jump you that I is air
As you can see, the second one has the correct structure, but is meaningless. As long as a context free grammar is concerned, the meaning doesn't matter. It just expands verb to whatever verb without "looking" at the rest of the sentence.
So if you think D has to at some point check how something was defined elsewhere, just to say the program is structurally correct, then its grammar is not context-free. If you isolate any part of the code and it still can say that it is structurally correct, then it is context-free.
There is a construct in D's lexer:
string ::= q" Delim1 Chars newline Delim2 "
where Delim1 and Delim2 are matching identifiers, and Chars does not contain newline Delim2.
This construct is context sensitive, therefore D's lexer grammar is context sensitive.
It's been a few years since I've worked with D's grammar much, so I can't remember all the trouble spots off the top of my head, or even if any of them make D's parser grammar context sensitive, but I believe they do not. From recall, I would say D's grammar is context free, not LL(k) for any k, and it has an obnoxious amount of ambiguity.
The grammar cannot be context-free if I can't tell the type of an expression just by looking at it.
No, that's flat out wrong. The grammar cannot be context-free if you can't tell if it is an expression just by looking at it and the parser's current state (am I in a function, in a namespace, etc).
The type of an expression, however, is a semantic meaning, not syntactic, and the parser and the grammar do not give a penny about types or semantic validity or whether or not you can have tuples as values or keys in hashmaps, or if you defined that identifier before using it.
The grammar doesn't care what it means, or if that makes sense. It only cares about what it is.
To answer the question of if a programming language is context free you must first decide where to draw the line between syntax and semantics. As an extreme example, it is illegal in C for a program to use the value of some kinds of integers after they have been allowed to overflow. Clearly this can't be checked at compile time, let alone parse time:
void Fn() {
int i = INT_MAX;
FnThatMightNotReturn(); // halting problem?
i++;
if(Test(i)) printf("Weeee!\n");
}
As a less extreme example that others have pointed out, declaration-before-use rules can't be enforced in a context-free syntax, so if you wish to keep your syntax pass context-free, then that must be deferred to the next pass.
As a practical definition, I would start with the question of: Can you correctly and unambiguously determine the parse tree of all correct programs using a context free grammar and, for all incorrect programs (that the language requires be rejected), either reject them as syntactically invalid or produce a parse tree that the later passes can identify as invalid and reject?
Given that the most correct spec for the D syntax is a parser (IIRC an LL parser) I strongly suspect that it is in fact context free by the definition I suggested.
Note: the above says nothing about what grammar the language documentation or a given parser uses, only if a context free grammar exists. Also, the only full documentation on the D language is the source code of the compiler DMD.
These answers are making my head hurt.
First of all, the complication with low-level languages, and with figuring out whether they are context-free or not, is that the language you write in is often processed in many steps.
In C++ (order may be off, but that shouldn't invalidate my point):
it has to process macros and other preprocessor stuffs
it has to interpret templates
it finally interprets your code.
Because the first step can change the context of the second step and the second step can change the context of the third step, the language YOU write in (including all of these steps) is context sensitive.
The reason people will try to defend a language (stating it is context-free) is because the only exceptions that add context are the traceable preprocessor statements and template calls. You only have to follow two restricted exceptions to the rules to pretend the language is context-free.
Most languages are context-sensitive overall, but most languages only have these minor exceptions to being context-free.

Why can't variable names start with numbers?

I was working with a new C++ developer a while back when he asked the question: "Why can't variable names start with numbers?"
I couldn't come up with an answer except that some numbers can have text in them (123456L, 123456U) and that wouldn't be possible if the compilers were thinking everything with some amount of alpha characters was a variable name.
Was that the right answer? Are there any more reasons?
string 2BeOrNot2Be = "that is the question"; // Why won't this compile?
Because then a string of digits would be a valid identifier as well as a valid number.
int 17 = 497;
int 42 = 6 * 9;
String 1111 = "Totally text";
Well think about this:
int 2d = 42;
double a = 2d;
What is a? 2.0? or 42?
Hint: if you don't get it, a d after a number means the number before it is a double literal.
It's a convention now, but it started out as a technical requirement.
In the old days, parsers of languages such as FORTRAN or BASIC did not require the use of spaces. So, basically, the following are identical:
10 V1=100
20 PRINT V1
and
10V1=100
20PRINTV1
Now suppose that numeral prefixes were allowed. How would you interpret this?
101V=100
as
10 1V = 100
or as
101 V = 100
or as
1 01V = 100
So, this was made illegal.
Because backtracking is avoided in lexical analysis while compiling. A variable like:
Apple;
the compiler will know it's an identifier right away when it meets the letter 'A'.
However a variable like:
123apple;
the compiler won't be able to decide whether it's a number or an identifier until it hits 'a', and it needs backtracking as a result.
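A minimal sketch of that first-character dispatch (my own illustration, not any real compiler's lexer):

#include <cctype>
#include <cstddef>
#include <cstdio>
#include <string>

// The first character alone decides which scanning routine runs, so the
// lexer never has to back up and re-read characters it has already consumed.
static bool scan_number(const std::string& s, std::size_t& i) {
    while (i < s.size() && std::isdigit(static_cast<unsigned char>(s[i]))) ++i;
    // A trailing letter means something like "123apple": neither a valid
    // number nor a valid identifier, and the error can be reported immediately.
    return !(i < s.size() && std::isalpha(static_cast<unsigned char>(s[i])));
}

static void scan_identifier(const std::string& s, std::size_t& i) {
    while (i < s.size() &&
           (std::isalnum(static_cast<unsigned char>(s[i])) || s[i] == '_'))
        ++i;
}

int main() {
    const std::string src = "123apple";
    std::size_t i = 0;
    if (std::isdigit(static_cast<unsigned char>(src[0]))) {
        if (!scan_number(src, i))
            std::printf("error: '%s' starts like a number but is not one\n",
                        src.c_str());
    } else {
        scan_identifier(src, i);
    }
}

Running it on "123apple" reports the error as soon as the digit scanner meets the 'a', without ever re-reading the digits.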
Compilers/parsers/lexical analyzers were a long, long time ago for me, but I think I remember there being difficulty in unambiguously determining whether a numeric character in the compilation unit represented a literal or an identifier.
Languages where space is insignificant (like ALGOL and the original FORTRAN if I remember correctly) could not accept numbers to begin identifiers for that reason.
This goes way back - before special notations to denote storage or numeric base.
I agree it would be handy to allow identifiers to begin with a digit. One or two people have mentioned that you can get around this restriction by prepending an underscore to your identifier, but that's really ugly.
I think part of the problem comes from number literals such as 0xdeadbeef, which make it hard to come up with easy-to-remember rules for identifiers that can start with a digit. One way to do it might be to allow anything matching [A-Za-z0-9_]+ that is NOT a keyword or number literal. The problem is that it would lead to weird things like 0xdeadpork being allowed, but not 0xdeadbeef. Ultimately, I think we should be fair to all meats :P.
When I was first learning C, I remember feeling the rules for variable names were arbitrary and restrictive. Worst of all, they were hard to remember, so I gave up trying to learn them. I just did what felt right, and it worked pretty well. Now that I've learned a lot more, it doesn't seem so bad, and I finally got around to learning it right.
It's likely a decision that came about for a few reasons: when you're parsing a token you only have to look at the first character to determine whether it's an identifier or a literal and then send it to the correct function for processing. So that's a performance optimization.
The other option would be to check if it's not a literal and leave the domain of identifiers to be the universe minus the literals. But to do this you would have to examine every character of every token to know how to classify it.
There are also stylistic implications: identifiers are supposed to be mnemonics, so words are much easier to remember than numbers. When a lot of the original languages were being written, setting the styles for the next few decades, they weren't thinking about substituting "2" for "to".
Variable names cannot start with a digit, because it can cause some problems like below:
int a = 2;
int 2 = 5;
int c = 2 * a;
What is the value of c? Is it 4, or is it 10?
another example:
float 5 = 25;
float b = 5.5;
Is the first 5 a number, or is it an object (the . operator)?
There is a similar problem with the second 5.
Maybe there are some other reasons. So, we shouldn't use any digit at the beginning of a variable name.
The restriction is arbitrary. Various Lisps permit symbol names to begin with numerals.
COBOL allows variables to begin with a digit.
Use of a digit to begin a variable name makes error checking during compilation or interpretation a lot more complicated.
Allowing use of variable names that began like a number would probably cause huge problems for the language designers. During source code parsing, whenever a compiler/interpreter encountered a token beginning with a digit where a variable name was expected, it would have to search through a huge, complicated set of rules to determine whether the token was really a variable, or an error. The added complexity added to the language parser may not justify this feature.
As far back as I can remember (about 40 years), I don't think that I have ever used a language that allowed use of a digit to begin variable names. I'm sure that this was done at least once. Maybe, someone here has actually seen this somewhere.
As several people have noticed, there is a lot of historical baggage about valid formats for variable names. And language designers are always influenced by what they know when they create new languages.
That said, pretty much all of the time a language doesn't allow variable names to begin with numbers is because those are the rules of the language design. Often it is because such a simple rule makes the parsing and lexing of the language vastly easier. Not all language designers know this is the real reason, though. Modern lexing tools help, because if you tried to define it as permissible, they will give you parsing conflicts.
OTOH, if your language has a uniquely identifiable character to herald variable names, it is possible to set it up for them to begin with a number. Similar rule variations can also be used to allow spaces in variable names. But the resulting language is likely not to resemble any popular conventional language very much, if at all.
For an example of a fairly simple HTML templating language that does permit variables to begin with numbers and have embedded spaces, look at Qompose.
Because if you allowed keywords and identifiers to begin with numeric characters, the lexer (part of the compiler) couldn't readily differentiate between the start of a numeric literal and a keyword without getting a whole lot more complicated (and slower).
C++ can't have it because the language designers made it a rule. If you were to create your own language, you could certainly allow it, but you would probably run into the same problems they did and decide not to allow it. Examples of variable names that would cause problems:
0x, 2d, 5555
One of the key problems about relaxing syntactic conventions is that it introduces cognitive dissonance into the coding process. How you think about your code could be deeply influenced by the lack of clarity this would introduce.
Wasn't it Dijkstra who said that the "most important aspect of any tool is its effect on its user"?
The compiler has seven phases, as follows:
Lexical analysis
Syntax Analysis
Semantic Analysis
Intermediate Code Generation
Code Optimization
Code Generation
Symbol Table
Backtracking is avoided in the lexical analysis phase while compiling the piece of code. For a variable like Apple, the compiler will know it's an identifier right away when it meets the letter 'A' in the lexical analysis phase. However, for a variable like 123apple, the compiler won't be able to decide whether it's a number or an identifier until it hits 'a', and it would need to backtrack in the lexical analysis phase to identify it as a variable. That is not supported in the compiler.
When you’re parsing the token you only have to look at the first character to determine if it’s an identifier or literal and then send it to the correct function for processing. So that’s a performance optimization.
Probably because it makes it easier for the human to tell whether it's a number or an identifier, and because of tradition. Having identifiers that could begin with a digit wouldn't complicate the lexical scans all that much.
Not all languages have forbidden identifiers beginning with a digit. In Forth, they could be numbers, and small integers were normally defined as Forth words (essentially identifiers), since it was faster to read "2" as a routine to push a 2 onto the stack than to recognize "2" as a number whose value was 2. (In processing input from the programmer or the disk block, the Forth system would split up the input according to spaces. It would try to look the token up in the dictionary to see if it was a defined word, and if not would attempt to translate it into a number, and if not would flag an error.)
Suppose you did allow symbol names to begin with numbers. Now suppose you want to name a variable 12345foobar. How would you differentiate this from 12345? It's actually not terribly difficult to do with a regular expression. The problem is actually one of performance. I can't really explain why this is in great detail, but it essentially boils down to the fact that differentiating 12345foobar from 12345 requires backtracking. This makes the regular expression non-deterministic.
There's a much better explanation of this here.
It is easier for a compiler to identify a variable by the ASCII characters at a memory location rather than by a number.
I think the simple answer is that it can; the restriction is language-based. In C++ and many others it can't because the language doesn't support it. It's not built into the rules to allow that.
The question is akin to asking why the King can't move four spaces at a time in chess. It's because in chess that is an illegal move. Can it in another game? Sure. It just depends on the rules being played by.
Originally it was simply because it is easier to remember (you can give it more meaning) variable names as strings rather than numbers, although numbers can be included within the string to enhance its meaning, or to allow the use of the same variable name while designating it as having a separate but closely related meaning or context. For example, loop1, loop2, etc. would always let you know that you were in a loop and/or that loop2 was a loop within loop1.
Which would you prefer (has more meaning) as a variable: address or 1121298? Which is easier to remember?
However, if the language uses something to denote that it is not just text or numbers (such as the $ in $address), it really shouldn't make a difference, as that would tell the compiler that what follows is to be treated as a variable (in this case).
In any case it comes down to what the language designers want to use as the rules for their language.
The compiler might also treat the variable as a value at compile time, so the value could end up referring to the value again and again, recursively.
There might be nothing wrong with it when it comes to declaring a variable, but there is some ambiguity when you try to use that variable somewhere else, like this:
let 1 = "Hello world!"
print(1)
print(1)
print is a generic method that accepts all types of variables, so in that situation the compiler does not know which 1 the programmer refers to: the 1 with an integer value or the 1 that stores a string value.
Maybe it would be better for the compiler in this situation to allow defining something like that, but when this ambiguous usage is attempted, to raise an error with guidance on how to fix it and clear up the ambiguity.