Bison if statements - setting symbol table prior to parsing block statements - c++

In my language I can declare a variable in the current symbol table scope and also create an if statement, which generates a new symbol table scope for its statements.
stmts : stmt { $$ = new Block(); $$->addStatement($1); }
      | stmts stmt { $$ = $1; $$->addStatement($2); }
      | /* blank */ { $$ = new Block(); }
      ;
stmt : vardecl
     | ifstmt
     ;
ifstmt : TIF TLPAREN exprBase TRPAREN TOPENBLOCK stmts TCLOSEBLOCK {
             semanticAnalyzerParser->enterScope("if statement scope");
             $$ = new IfStatement($3, $6);
         }
       ;
vardecl : ident ident TASSIGN exprBase {
              Var* typeName = $1;
              Var* varName = $2;
              ExpressionBase* exprBase = $4;
              semanticAnalyzerParser->getScope()->registerVariable(typeName->identifier, varName->identifier, exprBase);
              $$ = new VarDecl(typeName, varName, exprBase);
          }
        ;
What I'd like to do is enter a new scope before bison parses the if statement's block of statements, e.g. semanticAnalyzerParser->enterScope("if statement scope");, so that when the grammar for declaring variables is recognised the variable is registered in the correct scope via semanticAnalyzerParser->getScope()->registerVariable(typeName->identifier, varName->identifier, exprBase);
However, since bison has to recognise the complete grammar of an if statement, it only creates the scope after the whole statement has been parsed, and so the variables are registered in the wrong scope.
How can I execute code before the stmts part of the ifstmt grammar is parsed, so that the correct scope is in place? I know that one option is to walk the AST afterwards, but I want to avoid this since the ASTs created in bison are largely determined by information gathered during semantic analysis.

Usually you would do this with an "embedded" action:
ifstmt : TIF TLPAREN exprBase TRPAREN
             { semanticAnalyzerParser->enterScope("if statement scope"); }
         TOPENBLOCK stmts TCLOSEBLOCK
             { semanticAnalyzerParser->leaveScope("if statement scope");
               $$ = new IfStatement($3, $7); }
       ;
The embedded action will be executed after the first part of the ifstmt is recognized (up to the TRPAREN, with the TOPENBLOCK as lookahead), before the body (stmts) is parsed.
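Note that a mid-rule action counts as a component of the rule, so it shifts the positions of everything after it: the embedded action is $5, TOPENBLOCK is $6 and stmts is $7, which is why the final action now refers to $7 (in the original rule without the embedded action, stmts was $6). For reference, here is a minimal sketch of what a stack-like enterScope/leaveScope pair could look like; the SemanticAnalyzer, Scope and ExpressionBase names are assumptions based on the calls shown in the question, not the asker's actual classes:

#include <map>
#include <string>

class ExpressionBase;   // assumed to be part of the asker's AST

class Scope {
public:
    explicit Scope(Scope* parent) : parent_(parent) {}
    Scope* parent() const { return parent_; }
    void registerVariable(const std::string& typeName,
                          const std::string& varName,
                          ExpressionBase* init) {
        variables_[varName] = Entry{typeName, init};   // shadows any outer binding
    }
private:
    struct Entry { std::string type; ExpressionBase* init; };
    Scope* parent_;
    std::map<std::string, Entry> variables_;
};

class SemanticAnalyzer {
public:
    SemanticAnalyzer() : current_(new Scope(nullptr)) {}   // global scope
    void enterScope(const std::string& /*label*/) { current_ = new Scope(current_); }
    void leaveScope(const std::string& /*label*/) { current_ = current_->parent(); }
    // The popped scope is deliberately not freed here: later phases may still
    // reference it through the AST.
    Scope* getScope() const { return current_; }
private:
    Scope* current_;
};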

Related

bison error: rule given for Semi, which is a token?

My bison grammar gives an error:
parser.yy:
%union {
Ast *ast;
char *str;
int tok;
}
%token <tok> NEWLINE SEMICOLON
%type <ast> Semi
%%
Semi: NEWLINE { $$ = new Ast($1); }
| SEMICOLON { $$ = new Ast($1); }
;
Statements: Statement
| Statement Semi Statements
;
Statement: ...
;
%%
It gives the error message:
Parser.yy:xxx.x-x: error: rule given for Semi, which is a token
Is there a way to implement this? Or do I have to write it like this?
Statements: Statement
| Statement NEWLINE Statements
| Statement SEMICOLON Statements
;
Semi is a token. You don't need to define a rule for it to return a value; just use it directly.
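The error message indicates that Semi was also declared with %token somewhere in the real grammar, and bison refuses to give a rule to a token. A sketch of what "just use it" could look like, assuming the lexer can return a single separator token for both ';' and newline (the SEMI name below is an assumption, not from the question):

%token SEMI            /* the lexer returns SEMI for both ';' and '\n' */
%%
Statements: Statement
          | Statement SEMI Statements
          ;

Alternatively, if Semi is meant to stay a nonterminal built from NEWLINE and SEMICOLON as in the snippet above, make sure it is declared only with %type and is not listed in any %token line.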

Expression occurrences in flex/bison

Suppose I have a Bison expression like this:
multiply: T_FIGURE { $$ = $1; }
| multiply T_ASTERISK multiply { $$ = $1 * $3; }
;
It should return the result of multiplying some figures, or give the input back if only one figure is provided. If I wanted to limit the number of figures provided to at most 3, I would rewrite the expression like this:
multiply: T_FIGURE { $$ = $1; }
| T_FIGURE T_MULTIPLY T_FIGURE { $$ = $1 * $3; }
| T_FIGURE T_MULTIPLY T_FIGURE T_MULTIPLY T_FIGURE { $$ = $1 * $3 * $5; }
;
My question: is there a way to rewrite this expression so that I wouldn't have to manually specify the occurrences and instead use some kind of parameter to be able to easily change the upper limit to, for example, 30 occurrences?
In a word, "No". That is not a feature of bison (nor any yacc derivative I know of).
The easiest way to solve problems like this is to use a code generator. Either use an available macro processor like m4 or write your own script in whatever scripting language you feel comfortable with.
You could also solve the problem dynamically by counting arguments in your semantic action (which means modifying your semantic type to include both a value and a count). You could then throw a syntax error from the semantic action if the count is exceeded. The main advantage of this approach is that it avoids blowing up the parser's state table. If your limits are large and interact with each other, you might find you are producing a very large state machine.
As a very simple example (with only a single operator):
%{
/* LIMIT is assumed here; the question mentions an upper limit of, e.g., 30. */
#define LIMIT 30

typedef struct ExprNode {
    int count;      /* number of multiplications performed so far */
    double value;
} ExprNode;
%}
%union {
    ExprNode expr;
    double value;
}
%token <value> T_FIGURE
%type <expr> expr multiply
%%
expr: T_FIGURE { $$.count = 0; $$.value = $1; }
    ;
multiply: expr
        | multiply '*' expr { if ($1.count >= LIMIT) {
                                  yyerror("Too many products");
                                  YYABORT;
                              }
                              else {
                                  $$.count = $1.count + 1;
                                  $$.value = $1.value * $3.value;
                              }
                            }
        ;
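For completeness, the lexer has to fill in yylval.value before returning T_FIGURE and has to return the '*' character itself. A flex sketch (the numeric pattern and the parser.tab.h header name are assumptions, not taken from the question):

%{
#include <stdlib.h>        /* atof */
#include "parser.tab.h"    /* token definitions generated by bison */
%}
%%
[0-9]+(\.[0-9]+)?   { yylval.value = atof(yytext); return T_FIGURE; }
"*"                 { return '*'; }
[ \t\n]             { /* skip whitespace */ }
.                   { return yytext[0]; }
%%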

How to define unrecognized rules in Ocamlyacc

I'm working on a company project where I have to create a compiler for a language using Ocamlyacc and Ocamllex. I want to know whether it is possible to define a rule in my Ocamlyacc parser that tells me that no rule of my grammar matches the syntax of an input.
I should point out that I'm a beginner with Ocamllex/Ocamlyacc.
Thank you for your help.
If no rule in your grammar matches the input, then the Parsing.Parse_error exception is raised. Usually, this is what you want.
There is also a special token called error that allows you to resynchronize your parser state. You can use it in your rules as if it were a real token produced by the lexer, cf. the eof token.
Also, I would suggest using menhir instead of the more venerable ocamlyacc. It is easier to use and debug, and it also comes with a good library of predefined grammars.
When you write a compiler for a language, the first step is to run your lexer and check whether the program is well formed from a lexical point of view. See the example below:
{
open Parser (* The type token is defined in parser.mli *)
exception Eof
}
rule token = parse
[' ' '\t'] { token lexbuf } (* skip blanks *)
| ['\n' ] { EOL }
| ['0'-'9']+ as lxm { INT(int_of_string lxm) }
| '+' { PLUS }
| '-' { MINUS }
| '*' { TIMES }
| '/' { DIV }
| '(' { LPAREN }
| ')' { RPAREN }
| eof { raise Eof }
It's a lexer to recognize some arithmetic expressions.
If your lexer accepts the input, then you give the sequence of lexemes to the parser, which tries to determine whether an AST can be built with the specified grammar. See:
%token <int> INT
%token PLUS MINUS TIMES DIV
%token LPAREN RPAREN
%token EOL
%left PLUS MINUS /* lowest precedence */
%left TIMES DIV /* medium precedence */
%nonassoc UMINUS /* highest precedence */
%start main /* the entry point */
%type <int> main
%%
main:
expr EOL { $1 }
;
expr:
INT { $1 }
| LPAREN expr RPAREN { $2 }
| expr PLUS expr { $1 + $3 }
| expr MINUS expr { $1 - $3 }
| expr TIMES expr { $1 * $3 }
| expr DIV expr { $1 / $3 }
| MINUS expr %prec UMINUS { - $2 }
;
This is a little program to parse arithmetic expressions. A program can be rejected at this step because there is no rule of the grammar that can be applied to produce an AST. There is no way to define "unrecognized rules"; instead, you write a grammar that defines how a program is accepted, and everything else is rejected.
let _ =
try
let lexbuf = Lexing.from_channel stdin in
while true do
let result = Parser.main Lexer.token lexbuf in
print_int result; print_newline(); flush stdout
done
with Lexer.Eof ->
exit 0
If you compile the lexer, the parser and the driver program above, you get:
1 + 2 is accepted because there are no lexical errors and an AST can be built for this expression.
1 ++ 2 is rejected: there are no lexical errors, but there is no rule that builds such an AST.
You can find more documentation here: http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual026.html

Yacc grammar producing incorrect terminal

I've been working on a hobby compiler for a while now, using lex and yacc for the parsing stage. This is all working fine for the majority of things, but when I added in if statements, the production rule for symbols is now giving the previous (or next?) item on the stack instead of the symbol value needed.
Grammar is given below with hopefully unrelated rules taken out:
%{
...
%}
%define parse.error verbose
%token ...
%%
Program:
Function { root->addChild($1);}
;
Function:
Type Identifier '|' ArgumentList '|' StatementList END
{ $$ = new FunctionDef($1, $2, $4, $6); }
/******************************************/
/* Statements and control flow ************/
/******************************************/
Statement:
Expression Delimiter
| VariableDeclaration Delimiter
| ControlFlowStatement Delimiter
| Delimiter
;
ControlFlowStatement:
IfStatement
;
IfStatement:
IF Expression StatementList END { $$ = new IfStatement($2, $3); }
| IF Expression StatementList ELSE StatementList END { $$ = new IfStatement($2, $3, $5);}
;
VariableDeclaration:
Type Identifier { $$ = new VariableDeclaration($1, $2);}
| Type Identifier EQUALS Expression { $$ = new VariableDeclaration($1, $2, $4);}
;
StatementList:
StatementList Statement { $1->addChild($2); }
| Statement { $$ = new GenericList($1); }
;
Delimiter:
';'
| NEWLINE
;
Type:
...
Expression:
...
PostfixExpression:
Value '[' Expression ']' { std::cout << "TODO: indexing operators ([ ])" << std::endl;}
| Value '.' SYMBOL { std::cout << "TODO: member access" << std::endl;}
| Value INCREMENT { $$ = new UnaryExpression(UNARY_POSTINC, $1); }
| Value DECREMENT { $$ = new UnaryExpression(UNARY_POSTDEC, $1); }
| Value '(' ')' { $$ = new FunctionCall($1, NULL); }
| Value '(' ExpressionList ')' { $$ = new FunctionCall($1, $3); }
| Value
;
Value:
BININT { $$ = new Integer(yytext, 2); }
| HEXINT { $$ = new Integer(yytext, 16); }
| DECINT { $$ = new Integer(yytext); }
| FLOAT { $$ = new Float(yytext); }
| SYMBOL { $$ = new Symbol(yytext); }
| STRING { $$ = new String(yytext); }
| LambdaFunction
| '(' Expression ')' { $$ = $2; }
| '[' ExpressionList ']' { $$ = $2;}
;
LambdaFunction:
...
%%
I cannot work out what it is about the control flow code that makes the Symbol rule match something that isn't classed as a symbol by the lex definition:
symbol [a-zA-Z_]+(alpha|digit)*
...
{symbol} {return SYMBOL;}
Any help from somebody who knows about yacc and grammars in general would be very much appreciated. Also example files of the syntax it parses can be shown if necessary.
Thanks!
You cannot count on the value of yytext outside of a flex action.
Bison grammars typically read a lookahead token before deciding how to proceed, so by the time a bison action runs, yytext has usually already been overwritten with the text of the lookahead token. (You can't count on that either, though: sometimes no lookahead token is needed.)
So you need to make a copy of yytext before the flex action returns and make that copy available to the bison grammar by placing it into the yylval semantic union.
See this bison FAQ entry
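Concretely, that might look like the following sketch. It assumes the %union has a char *str member and that SYMBOL is declared with %token <str>, neither of which is shown in the question; whether Symbol copies the string (so that the strdup'd buffer can be freed) is also an assumption:

/* flex: copy the matched text before returning */
{symbol}   { yylval.str = strdup(yytext); return SYMBOL; }

/* bison: give SYMBOL the string type and use $1 instead of yytext */
%union {
    char *str;
    /* ... other members ... */
}
%token <str> SYMBOL
%%
Value:
  SYMBOL { $$ = new Symbol($1); free($1); }   /* assumes Symbol copies the text */
| ...
;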
By the way, the following snippet from your flex file is incorrect:
symbol [a-zA-Z_]+(alpha|digit)*
In that regular expression, alpha and digit are just ordinary strings, so it is the same as [a-zA-Z_]+("alpha"|"digit")*, which means that it will match, for example, a_digitdigitdigit but not a_123. (It would have matched a_digitdigitdigit even without the part following the +, so I presume that wasn't your intention.)
On the whole, I think it's better to use Posix character classes than either hand-written character classes or defined symbols, so I would write that as
symbol [[:alpha:]_]([[:alnum:]_]*[[:alnum:]])?
assuming that your intention is that a symbol can start but not end with an underscore, and end but not start with a digit. Using Posix character classes requires you to execute flex with the correct locale -- almost certainly the C locale -- but so do character ranges, so there is nothing to be lost by using the self-documenting Posix classes.
(Of course, I have no idea what your definitions of {alpha} and {digit} are, but it seems to me that they are either the same as [[:alpha:]] and [[:digit:]], in which case they are redundant, or different from the Posix classes, in which case they are confusing to the reader.)

What is $$ in bison?

In the bison manual in section 2.1.2 Grammar Rules for rpcalc, it is written that:
In each action, the pseudo-variable $$ stands for the semantic value
for the grouping that the rule is going to construct. Assigning a
value to $$ is the main job of most actions
Does that mean $$ is used for holding the result from a rule? like:
exp exp '+' { $$ = $1 + $2; }
And what's the typical usage of $$ after being assigned to?
Yes, $$ is used to hold the result of the rule. After being assigned to, it typically becomes a $x in some higher-level (or lower precedence) rule.
Consider (for example) input like 2 * 3 + 4. Assuming you follow the normal precedence rules, you'd have an action something like: { $$ = $1 * $3; }. In this case, that would be used for the 2 * 3 part and, obviously enough, assign 6 to $$. Then you'd have your { $$ = $1 + $3; } to handle the addition. For this action, $1 would be given the value 6 that you assigned to $$ in the multiplication rule.
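As a sketch of the kind of grammar that walkthrough assumes (an ambiguous expression grammar disambiguated with precedence declarations; this is illustrative, not taken from the question):

%token NUMBER
%left '+'
%left '*'
%%
exp: NUMBER
   | exp '*' exp { $$ = $1 * $3; }   /* for 2 * 3 this sets $$ to 6 ...            */
   | exp '+' exp { $$ = $1 + $3; }   /* ... which then arrives here as $1 in 6 + 4 */
   ;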
Does that mean $$ is used for holding the result from a rule? like:
Yes.
And what's the typical usage of $$ after being assigned to?
Typically you won't need that value again. Bison uses it internally to propagate the value. In your example, $1 and $2 are the respective semantic values of the two exp subexpressions; that is, their values were set somewhere in the semantic action for exp by assigning to its $$ variable.
Try this. Create a YACC file with:
%token NUMBER
%%
exp: exp '+' NUMBER { $$ = $1 + $3; }
| exp '-' NUMBER { $$ = $1 - $3; }
| NUMBER { $$ = $1; }
;
Then process it using Bison or YACC. I am using Bison but I assume YACC is the same. Then just find the "#line" directives. Let us find the "#line 3" directive; it and the relevant code will look like:
#line 3 "DollarDollar.y"
{ (yyval) = (yyvsp[(1) - (3)]) + (yyvsp[(3) - (3)]); }
break;
And then we can quickly see that "$$" expands to "yyval". That other stuff, such as "yyvsp", is not so obvious but at least "yyval" is.
$$ holds the result of evaluating the current rule; in other words, its result. So there is no particular use for it after it has been assigned.
Bye!