Translating a grammar file into a hand-written parser - C++

I've been trying to write my own compiler for educational purposes and I'm stuck on an issue. I've taken the recursive descent approach, with some previous knowledge of lex and yacc/bison.
So far I'm just trying to handle the parsing aspect without regard to the generation of the AST or code generation.
I'm trying to write the expression parsing for this part of the grammar file:
primary_expression
: IDENTIFIER
| CONSTANT
| STRING_LITERAL
| '(' expression ')'
;
postfix_expression
: primary_expression
| postfix_expression '[' expression ']'
| postfix_expression '(' ')'
| postfix_expression '(' argument_expression_list ')'
| postfix_expression '.' IDENTIFIER
| postfix_expression PTR_OP IDENTIFIER
| postfix_expression INC_OP
| postfix_expression DEC_OP
;
So far I have this code
void Parser::primaryExpression()
{
    if (accept(Token::eIdentifier))
    {
    }
    else if (accept(Token::eIntNumber))
    {
    }
    else if (accept('('))
    {
        expression();
        expect(')');
    }
}
void Parser::postfixExpression()
{
}
I'm having some problems dealing with the recursiveness of postfix_expression and I don't know how to continue with the postfixExpression function.
I'm under the impression that, for a recursive descent parser, I should probably arrange my grammar in a different way.
Could anyone point me in the right direction?

Left recursion is difficult to handle in an LL (recursive descent) parser -- you need to recognize it and change it into a loop rather than a recursive call. In general terms, given a left-recursive rule of the form
A → α | A β
your recursive descent routine becomes
parseA() {
    parseAlpha();
    while (lookaheadMatchesBeta())
        parseBeta();
}
Note that this requires enough lookahead to distinguish between FIRST(β) and FOLLOW(A), in order to find the end of all the trailing things that can match β.
This is the same as the process for eliminating left recursion in an LL grammar -- you are effectively replacing the rule above with
A → α A'
A' → ε | β A'
and then replacing the tail-recursive call in parseAPrime with a loop and inlining it into parseA.
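Applied to your grammar, α is primary_expression and β is any one of the seven postfix suffixes ('[' expression ']', '(' ')', and so on). Written out, that gives rules like the following (the name postfix_suffixes is just illustrative, playing the role of A'):
postfix_expression
    : primary_expression postfix_suffixes
    ;
postfix_suffixes
    : /* epsilon */
    | '[' expression ']' postfix_suffixes
    | '(' ')' postfix_suffixes
    | '(' argument_expression_list ')' postfix_suffixes
    | '.' IDENTIFIER postfix_suffixes
    | PTR_OP IDENTIFIER postfix_suffixes
    | INC_OP postfix_suffixes
    | DEC_OP postfix_suffixes
    ;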
Doing that with your grammar and using the accept/expect technique your code above uses, you get something like:
void Parser::postfixExpression() {
    primaryExpression();
    while (true) {
        if (accept('[')) {
            expression();
            expect(']');
        } else if (accept('(')) {
            if (accept(')')) {
            } else {
                argumentExpressionList();
                expect(')');
            }
        } else if (accept('.')) {
            expect(Token::eIdentifier);
        } else if (accept(Token::PTR_OP)) {
            expect(Token::eIdentifier);
        } else if (accept(Token::INC_OP)) {
        } else if (accept(Token::DEC_OP)) {
        } else {
            break;
        }
    }
}
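This sketch assumes accept/expect helpers roughly like the following (the names m_current, m_lexer and error are hypothetical -- adapt them to your parser):
// Consume the current token and advance if it matches the requested kind.
bool Parser::accept(int kind)
{
    if (m_current.kind != kind)
        return false;
    m_current = m_lexer.next();
    return true;
}

// Like accept, but a mismatch is a syntax error.
void Parser::expect(int kind)
{
    if (!accept(kind))
        error("unexpected token");
}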

Note that postfix_expression always parses a primary_expression first, so the first order of business is primaryExpression().
Then, if the next character is any of the characters that follow the recursive postfix_expression in the remaining seven rules, you are parsing a postfix_expression -- which gets you another postfix_expression, so you repeat again.
I won't write the C++ code for you, but in pseudocode:
postfixExpression()
{
    primaryExpression();
    while (next character is any of the characters that follow
           postfix_expression in the remaining seven rules)
    {
        parse_the_appropriate_rule();
    }
}

Related

How to code a nextToken() function for a recursive descent LL(1) parser

I'm writing a recursive descent LL(1) parser in C++, but I have a problem because I don't know exactly how to get the next token. I know I have to use regular expressions to get a terminal, but I don't know how to get the longest next token.
For example, take this lexer and this grammar (without left recursion or cycles, and already left-factored):
//LEXICAL IN FLEX
TIME [0-9]+
DIRECTION UR|DR|DL|UL|U|D|L|R
ACTION A|J|M
%%
{TIME} {printf("TIME"); return (TIME);}
{DIRECTION} {printf("DIRECTION"); return (DIRECTION);}
{ACTION} {printf("ACTION"); return (ACTION);}
"~" {printf("RELEASED"); return (RELEASED);}
"+" {printf("PLUS_OP"); return (PLUS_OP);}
"*" {printf("COMB_OP"); return (COMB_OP);}
//GRAMMAR IN BISON
command : list_move PLUS_OP list_action
| list_move COMB_OP list_action
| list_move list_action
| list_move
| list_action
;
list_move: move list_move_prm
;
list_move_prm: move
| move list_move_prm
| ";"
;
list_action: ACTION list_action_prm
;
list_action_prm: PLUS_OP ACTION list_action_prm
| COMB_OP ACTION list_action_prm
| ACTION list_action_prm
| ";" //epsilon
;
move: TIME RELEASED DIRECTION
| RELEASED DIRECTION
| DIRECTION
;
I have a string that contains "D DR R + A" and the parser should validate it, but I have problems getting "DR" because "D" is a token too; I don't know how to get "DR" instead of "D".
There are a number of ways of hand-writing a tokenizer:
You can use a recursive descent LL(1) parser "all the way down" -- rewrite your grammar in terms of single characters rather than tokens, and left-factor it. Then your nextToken() routine becomes just getchar(). You'll end up with additional rules like:
TIME: DIGIT more_digits ;
more_digits: /* epsilon */ | DIGIT more_digits ;
DIGIT: '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' ;
DIRECTION: 'U' dir_suffix | 'D' dir_suffix | 'L' | 'R' ;
dir_suffix: /* epsilon */ | 'L' | 'R' ;
You can use regexes. Generally this means keeping a buffer around and reading the input into it. nextToken() then runs a series of regexes on the buffer, figures out which one returns the longest token, and returns that, advancing the buffer as needed.
You can do what flex does -- this is the buffer approach above, combined with building a single DFA that evaluates all of the regexes simultaneously. Running this DFA on the buffer then returns the longest token (based on the last accepting state reached before getting an error).
Note that in all cases, you'll need to consider how to handle whitespace as well. You can just ignore whitespace everywhere (FORTRAN style) or you can allow whitespace between some tokens but not others (e.g., not between the digits of TIME or within a DIRECTION, but everywhere else in the grammar). This can make the grammar much more complex (and the process of hand-writing the recursive descent parser much more tedious).
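To make the second (regex-buffer) approach concrete, here is a minimal C++ sketch of a maximal-munch nextToken(). The token classes mirror the flex spec above; the Rule and nextToken names are illustrative, and whitespace skipping is left to the caller:
#include <regex>
#include <string>
#include <vector>

enum TokenKind { TIME, DIRECTION, ACTION, RELEASED, PLUS_OP, COMB_OP };

struct Rule { TokenKind kind; std::regex re; };

// One regex per token class; the loop below picks the longest match overall.
const std::vector<Rule> rules = {
    { TIME,      std::regex("[0-9]+") },
    { DIRECTION, std::regex("UR|DR|DL|UL|U|D|L|R") },
    { ACTION,    std::regex("[AJM]") },
    { RELEASED,  std::regex("~") },
    { PLUS_OP,   std::regex("\\+") },
    { COMB_OP,   std::regex("\\*") },
};

// Find the longest token starting at 'pos'. Returns false at end of input
// or on a character no rule matches.
bool nextToken(const std::string& in, std::size_t pos,
               TokenKind& kind, std::size_t& length)
{
    length = 0;
    for (const Rule& r : rules) {
        std::smatch m;
        // match_continuous anchors the match exactly at 'pos'
        if (std::regex_search(in.begin() + pos, in.end(), m, r.re,
                              std::regex_constants::match_continuous)
            && static_cast<std::size_t>(m.length(0)) > length) {
            kind = r.kind;
            length = static_cast<std::size_t>(m.length(0));
        }
    }
    return length > 0;   // on "DR", DIRECTION wins over a lone "D" (2 > 1)
}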
“I don't know exactly how to get the next token”
Your input comes from a stream (std::istream). You must write a get_token(istream) function (or a tokenizer class). The function must first discard white space, then read a character (or more if necessary), analyze it, and return the associated token. The following functions will help you achieve your goal:
ws – discards white-space.
istream::get – reads a character.
istream::putback – puts back in the stream a character (think “undo get”).
"I don't know how to get "DR" instead "D""
Both "D" and "DR" are words. Just read them as you would read a word: is >> word. You will also need a keyword to token map (see std::map). If you read the "D" string, you can ask the map what the associated token is. If not found, throw an exception.
A starting point (run it):
#include <iostream>
#include <map>
#include <string>

enum token_t
{
    END,
    PLUS,
    NUMBER,
    D,
    DR,
    R,
    A,
    // ...
};

// ...

using keyword_to_token_t = std::map<std::string, token_t>;
keyword_to_token_t kwtt =
{
    {"A", A},
    {"D", D},
    {"R", R},
    {"DR", DR}
    // ...
};

// ...

std::string s;
int n;

// ...

token_t get_token( std::istream& is )
{
    char c;
    std::ws( is );       // discard white-space
    if ( !is.get( c ) )  // read a character
        return END;      // failed to read, or eof

    // analyze the character
    switch ( c )
    {
    case '+': // simple token
        return PLUS;
    case '0': case '1': case '2': case '3': case '4':
    case '5': case '6': case '7': case '8': case '9':
        is.putback( c ); // it starts with a digit: it must be a number, so put it back
        is >> n;         // and let the library do the hard work
        return NUMBER;
    //...
    default: // keyword
        is.putback( c );
        is >> s;
        if ( kwtt.find( s ) == kwtt.end() )
            throw "keyword not found";
        return kwtt[ s ];
    }
}

int main()
{
    try
    {
        while ( get_token( std::cin ) )
            ;
        std::cout << "valid tokens";
    }
    catch ( const char* e )
    {
        std::cout << e;
    }
}

How to define unrecognized rules in Ocamlyacc

I'm working on a company project where I have to create a compiler for a language using Ocamlyacc and Ocamllex. I want to know if it is possible to define a rule in my Ocamlyacc parser that can tell me that no rule of my grammar matches the syntax of an input.
I should stress that I'm a beginner in Ocamllex/Ocamlyacc.
Thanks a lot for your help.
If no rule in your grammar matches the input, then the Parsing.Parse_error exception is raised. Usually, this is what you want.
There is also a special token called error that allows you to resynchronize your parser state. You can use it in your rules as if it were a real token produced by the lexer, cf. the eof token.
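For example, a recovery rule might look like this (a hypothetical sketch, not taken from your grammar -- it discards a bad line and resumes at the next EOL):
line:
    expr EOL        { Some $1 }
  | error EOL       { None }   /* resynchronize: skip tokens up to EOL */
;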
Also, I would suggest using menhir instead of the more venerable ocamlyacc. It is easier to use and debug, and it also comes with a good library of predefined grammars.
When you write a compiler for a language, the first step is to run your lexer and check whether your program is valid from a lexical point of view.
See the example below:
{
open Parser (* The type token is defined in parser.mli *)
exception Eof
}
rule token = parse
[' ' '\t'] { token lexbuf } (* skip blanks *)
| ['\n' ] { EOL }
| ['0'-'9']+ as lxm { INT(int_of_string lxm) }
| '+' { PLUS }
| '-' { MINUS }
| '*' { TIMES }
| '/' { DIV }
| '(' { LPAREN }
| ')' { RPAREN }
| eof { raise Eof }
It's a lexer to recognize some arithmetic expressions.
If your lexer accepts the input, then you give the sequence of lexemes to the parser, which tries to determine whether an AST can be built with the specified grammar. See:
%token <int> INT
%token PLUS MINUS TIMES DIV
%token LPAREN RPAREN
%token EOL
%left PLUS MINUS /* lowest precedence */
%left TIMES DIV /* medium precedence */
%nonassoc UMINUS /* highest precedence */
%start main /* the entry point */
%type <int> main
%%
main:
expr EOL { $1 }
;
expr:
INT { $1 }
| LPAREN expr RPAREN { $2 }
| expr PLUS expr { $1 + $3 }
| expr MINUS expr { $1 - $3 }
| expr TIMES expr { $1 * $3 }
| expr DIV expr { $1 / $3 }
| MINUS expr %prec UMINUS { - $2 }
;
This is a little program to parse arithmetic expressions. A program can be rejected at this step because there is no rule of the grammar that applies to build an AST at the end. There is no way to define unrecognized rules; instead, you write a grammar which defines how a program is accepted or rejected.
let _ =
  try
    let lexbuf = Lexing.from_channel stdin in
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline(); flush stdout
    done
  with Lexer.Eof ->
    exit 0
If you compile the lexer, the parser and the program above, you have:
1 + 2 is accepted because there are no lexical errors and an AST can be built corresponding to this expression.
1 ++ 2 is rejected: no lexical errors, but there is no rule to build such an AST.
You can find more documentation here: http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual026.html
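If you want the driver to report a rejected program instead of dying with an uncaught exception, you can also catch Parsing.Parse_error -- a sketch, reusing the names above:
let _ =
  let lexbuf = Lexing.from_channel stdin in
  try
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline(); flush stdout
    done
  with
  | Lexer.Eof -> exit 0
  | Parsing.Parse_error ->
      (* no rule of the grammar matches the input *)
      prerr_endline "syntax error"; exit 1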

Yacc grammar producing incorrect terminal

I've been working on a hobby compiler for a while now, using lex and yacc for the parsing stage. This is all working fine for the majority of things, but when I added in if statements, the production rule for symbols is now giving the previous (or next?) item on the stack instead of the symbol value needed.
Grammar is given below with hopefully unrelated rules taken out:
%{
...
%}
%define parse.error verbose
%token ...
%%
Program:
Function { root->addChild($1);}
;
Function:
Type Identifier '|' ArgumentList '|' StatementList END
{ $$ = new FunctionDef($1, $2, $4, $6); }
/******************************************/
/* Statements and control flow ************/
/******************************************/
Statement:
Expression Delimiter
| VariableDeclaration Delimiter
| ControlFlowStatement Delimiter
| Delimiter
;
ControlFlowStatement:
IfStatement
;
IfStatement:
IF Expression StatementList END { $$ = new IfStatement($2, $3); }
| IF Expression StatementList ELSE StatementList END { $$ = new IfStatement($2, $3, $5);}
;
VariableDeclaration:
Type Identifier { $$ = new VariableDeclaration($1, $2);}
| Type Identifier EQUALS Expression { $$ = new VariableDeclaration($1, $2, $4);}
;
StatementList:
StatementList Statement { $1->addChild($2); }
| Statement { $$ = new GenericList($1); }
;
Delimiter:
';'
| NEWLINE
;
Type:
...
Expression:
...
PostfixExpression:
Value '[' Expression ']' { std::cout << "TODO: indexing operators ([ ])" << std::endl;}
| Value '.' SYMBOL { std::cout << "TODO: member access" << std::endl;}
| Value INCREMENT { $$ = new UnaryExpression(UNARY_POSTINC, $1); }
| Value DECREMENT { $$ = new UnaryExpression(UNARY_POSTDEC, $1); }
| Value '(' ')' { $$ = new FunctionCall($1, NULL); }
| Value '(' ExpressionList ')' { $$ = new FunctionCall($1, $3); }
| Value
;
Value:
BININT { $$ = new Integer(yytext, 2); }
| HEXINT { $$ = new Integer(yytext, 16); }
| DECINT { $$ = new Integer(yytext); }
| FLOAT { $$ = new Float(yytext); }
| SYMBOL { $$ = new Symbol(yytext); }
| STRING { $$ = new String(yytext); }
| LambdaFunction
| '(' Expression ')' { $$ = $2; }
| '[' ExpressionList ']' { $$ = $2;}
;
LambdaFunction:
...
%%
I cannot work out what it is about the control flow code that makes the SYMBOL rule match something that isn't classed as a symbol by the lex definition:
symbol [a-zA-Z_]+(alpha|digit)*
...
{symbol} {return SYMBOL;}
Any help from somebody who knows about yacc and grammars in general would be very much appreciated. Also example files of the syntax it parses can be shown if necessary.
Thanks!
You cannot count on the value of yytext outside of a flex action.
Bison grammars typically read a lookahead token before deciding on how to proceed, so in a bison action, yytext has already been replaced with the token value of the lookahead token. (You can't count on that either, though: sometimes no lookahead token is needed.)
So you need to make a copy of yytext before the flex action returns and make that copy available to the bison grammar by placing it into the yylval semantic union.
See this bison FAQ entry.
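In sketch form the fix looks like this (the %union member name and the Value rule here are illustrative, not your exact code):
/* bison: give SYMBOL a semantic value */
%union { char* str; /* ... */ }
%token <str> SYMBOL

/* flex: copy yytext before returning; strdup needs <string.h> */
{symbol}   { yylval.str = strdup(yytext); return SYMBOL; }

/* bison action: use the saved copy instead of yytext */
Value: SYMBOL   { $$ = new Symbol($1); free($1); }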
By the way, the following snippet from your flex file is incorrect:
symbol [a-zA-Z_]+(alpha|digit)*
In that regular expression, alpha and digit are just ordinary strings, so it is the same as [a-zA-Z_]+("alpha"|"digit")*, which means that it will match, for example, a_digitdigitdigit but not a_123. (It would have matched a_digitdigitdigit without the part following the +, so I presume that wasn't your intention.)
On the whole, I think it's better to use Posix character classes than either hand-written character classes or defined symbols, so I would write that as
symbol [[:alpha:]_]([[:alnum:]_]*[[:alnum:]])?
assuming that your intention is that a symbol can start but not end with an underscore, and end but not start with a digit. Using Posix character classes requires you to execute flex with the correct locale -- almost certainly the C locale -- but so do character ranges, so there is nothing to be lost by using the self-documenting Posix classes.
(Of course, I have no idea what your definitions of {alpha} and {digit} are, but it seems to me that they are either the same as [[:alpha:]] and [[:digit:]], in which case they are redundant, or different from the Posix classes, in which case they are confusing to the reader.)

OCaml interpreter: evaluate a function inside a function

I'm trying to write an interpreter in OCaml and I have a problem here.
In my program, I want to call a function like this, for example:
print (get_line 4) // print: print to stdout, get_line: get a specific line in a file
How can I do that? I think the problem is in our parser, as it defines how a program will be run, how a function is defined, and the flow of a program. This is what I have so far in the parser and lexer (code below), but it didn't seem to work. I don't really see any difference between my code and the calculator on the OCaml site: the statement inside the brackets is evaluated first, then returns its value to its parent operation for the next evaluation.
In my interpreter, the function get_line inside the brackets is evaluated first, but I don't think it returns the value to the print function, or it does but with the wrong type (I checked, but I don't think that's the error).
One difference between the calculator and my interpreter is that the calculator works with primitive types, while mine works with functions. But they should be similar.
This is my code, just a part of it:
parser.mly:
%token ODD
%token CUT
%start main
%type <Path.term list> main
%%
main:
| expr EOL main {$1 :: $3}
| expr EOF { [$1] }
| EOL main { $2 }
;
expr:
| ODD INT { Odd $2}
| ODD LPAREN INT RPAREN expr { Odd $3 }
| CUT INT INT { Cut ($2, $3)}
| CUT INT INT expr { Cut ($2, $3) }
lexer.mll:
{
open Parser
}
(* define all keyword used in the program *)
rule main =
parse
| ['\n'] { EOL }
| ['\r']['\n'] { EOL }
| [' ''\t''\n'] { main lexbuf }
| '(' { LPAREN }
| ')' { RPAREN }
| "cut" { CUT }
| "trunclength" { TRUNCLENGTH }
| "firstArithmetic" { FIRSTARITH }
| "f_ArithmeticLength" { F_ARITHLENGTH }
| "secondArithmetic" { SECARITH }
| "s_ArithmeticLength" { S_ARITHLENGTH }
| "odd" { ODD }
| "oddLength" { ODDLENGTH }
| "zip" { ZIP }
| "zipLength" { ZIPLENGTH }
| "newline" { NEWLINE }
| eof { EOF }
| ['0' - '9']+ as lxm { INT(int_of_string lxm) }
| ['a'-'z''A'-'Z'] ['a'-'z''A'-'Z''0'-'9']* as lxm { STRING lxm }
Consider this rule:
| ODD LPAREN INT RPAREN expr { Odd $3 }
Your grammar rule requires an INT between the parentheses. You need to change that to an expr. There are a number of other issues with this, but I'll leave it at that.
First, your parser only tries to build a list of Path.term, but what do you want to do with it?
Then, there are many things wrong with your parser, so I don't really know where to start. For instance, the second and fourth cases of the expr rule totally ignore the last expr. Moreover, your parser only recognizes expressions containing "odd <int>" (or "odd (<int>)") and "cut <int> <int>", so how is it supposed to evaluate print and get_line? You should edit your question and try to make it clearer.
To evaluate expressions, you can:
do it directly inside the semantic actions (as in the calculator example),
or (better) build an AST (Abstract Syntax Tree) with your parser and then interpret it.
If you want to interpret print (get_line 4), your parser needs to know what print and get_line mean. In your code, your parser will see print or get_line as a STRING token (having a string value). As they seem to be keywords in your language, your lexer should recognize them and return specific tokens.
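For instance, the lexer could map those keywords to their own tokens (PRINT and GET_LINE are hypothetical token names; you would also need to declare them and handle them in the parser):
| "print"    { PRINT }      (* keyword rules must come before ... *)
| "get_line" { GET_LINE }
| ['a'-'z''A'-'Z'] ['a'-'z''A'-'Z''0'-'9']* as lxm { STRING lxm }   (* ... the catch-all rule *)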

Bison & re2c: Get current line number

I'm dealing with the PHP grammar and I want to pass the line number to my function.
I have something like:
internal_functions_in_yacc:
T_ISSET '(' isset_variables ')'
| T_EMPTY '(' variable ')'
| T_INCLUDE expr { observers.IncludeFound($2); }
| T_INCLUDE_ONCE expr { observers.IncludeFound($2); }
| T_EVAL '(' expr ')'
| T_REQUIRE expr { observers.IncludeFound($2); }
| T_REQUIRE_ONCE expr { observers.IncludeFound($2); }
;
Now I want to pass the line number, something like:
T_REQUIRE_ONCE expr { observers.IncludeFound($2, $line_number_here); }
Is there a way to know the line number of the token that bison is parsing? Or is it something that has to be done in lexing?
EDIT
I found that lexing is done using re2c, not lex.
If line numbers are enabled then they can be accessed using @n, with n being the position of the token in the rule.
T_REQUIRE_ONCE expr { observers.IncludeFound($2, @2.first_line); }
Edit:
To expand on the answer: %locations is the bison directive that enables line numbers. The lexer is still responsible for incrementing the line numbers, which requires %option yylineno.
Lex File:
\n { yylloc->lines(yyleng); yylloc->step(); }