Accept brackets correctly in bisonc++ - c++

I've tried to write a basic syntax checker using bisonc++.
The rules are:
expression -> OPEN_BRACKET expression CLOSE_BRACKET
expression -> expression operator expression
operator -> PLUS
operator -> MINUS
If I try to run the compiled code, I get an error on this line:
(a+b)-(c+d)
The first rule is applied: the leftmost and the rightmost brackets are taken as the OPEN_BRACKET and the CLOSE_BRACKET, and the remaining expression is a+b)-(c+d.
How can I prevent this behaviour? Is it possible to count the opening and closing brackets?
Edit
The expression grammar:
expression:
    OPEN_BRACKET expression CLOSE_BRACKET
    {
        //
    }
|   operator
    {
        //
    }
|   VARIABLE
    {
        //
    }
;
operator:
    expression PLUS expression
    {
        //
    }
|   expression MINUS expression
    {
        //
    }
;
Edit 2
The lexer:
CHAR    [a-z]
WS      [ \t\n]
%%
{CHAR}+     return Parser::VARIABLE;
"+"         return Parser::PLUS;
"-"         return Parser::MINUS;
"("         return Parser::OPEN_BRACKET;
")"         return Parser::CLOSE_BRACKET;

This is not a normal expression grammar. Try the normal one.
expression
    : term
    | expression '+' term
    | expression '-' term
    ;
term
    : factor
    | term '*' factor
    | term '/' factor
    | term '%' factor
    ;
factor
    : primary
    | '-' factor            // unary minus
    | primary '^' factor    // exponentiation, right-associative
    ;
primary
    : identifier
    | literal
    | '(' expression ')'
    ;
Note also the above method of indenting and aligning, and that you only have to return yytext[0] from the lexer for single special characters: you don't need special token names, and it's more readable without them:
CHAR        [a-zA-Z]
DIGIT       [0-9]
WHITESPACE  [ \t\r\n]
%%
{CHAR}+         { return Parser::VARIABLE; }
{DIGIT}+        { return Parser::LITERAL; }
{WHITESPACE}+   ;
.               { return yytext[0]; }

Your operator rule does not look right.
Try experimenting with:
expression:
    OPEN_BRACKET expression CLOSE_BRACKET
    {
        //
    }
|
    expression operator expression
    {
        //
    }
|
    VARIABLE
    {
        //
    }
;
operator:
    PLUS
    {
        //
    }
|
    MINUS
    {
        //
    }
;
As your pseudocode actually suggests...
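Note, though, that expression operator expression is ambiguous on its own (a PLUS b MINUS c can be grouped two ways), and because the operator token is hidden behind the operator nonterminal, precedence declarations cannot easily resolve the conflict. A minimal sketch of the usual fix, assuming the token names from the question, is to put the operator tokens directly into the rules and declare them %left:
%left PLUS MINUS

%%
expression:
    expression PLUS expression
    {
        //
    }
|
    expression MINUS expression
    {
        //
    }
|
    OPEN_BRACKET expression CLOSE_BRACKET
    {
        //
    }
|
    VARIABLE
    {
        //
    }
;
With the tokens inline, %left tells bisonc++ how to resolve the shift/reduce conflicts, and input such as (a+b)-(c+d) is accepted.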

Related

Antlr4: Can't understand why breaking something out into a subrule doesn't work

I'm still new at Antlr4, and I have what is probably a really stupid problem.
Here's a fragment from my .g4 file:
assignStatement
    : VariableName '=' expression ';'
    ;
expression
    : (value | VariableName)
    | bin_op='(' expression ')'
    | expression UNARY_PRE_OR_POST
    | (UNARY_PRE_OR_POST | '+' | '-' | '!' | '~' | type_cast) expression
    | expression MUL_DIV_MOD expression
    | expression ADD_SUB expression
    ;
VariableName
    : ( [a-z] [A-Za-z0-9_]* )
    ;
// Pre or post increment/decrement
UNARY_PRE_OR_POST
    : '++' | '--'
    ;
// multiply, divide, modulus
MUL_DIV_MOD
    : '*' | '/' | '%'
    ;
// Add, subtract
ADD_SUB
    : '+' | '-'
    ;
And my sample input:
myInt = 10 + 5;
myInt = 10 - 5;
myInt = 1 + 2 + 3;
myInt = 1 + (2 + 3);
myInt = 1 + 2 * 3;
myInt = ++yourInt;
yourInt = (10 - 5)--;
The first sample line, myInt = 10 + 5;, produces this error:
line 22:11 mismatched input '+' expecting ';'
line 22:14 extraneous input ';' expecting {<EOF>, 'class', '{', 'interface', 'import', 'print', '[', '_', ClassName, VariableName, LITERAL, STRING, NUMBER, NUMERIC_LITERAL, SYMBOL}
I get similar issues with each of the lines.
If I make one change, a whole bunch of errors disappear:
| expression ADD_SUB expression
change it to this:
| expression ('+' | '-') expression
I've tried a bunch of things. I've tried using both lexer and parser rules (that is, calling it add_sub or ADD_SUB). I've tried a variety of combinations of parenthesis.
I tried:
ADD_SUB: [+-];
What's annoying is that the pre- and post-increment lines produce no errors as long as I don't have errors due to +, -, or *. Yet they rely on UNARY_PRE_OR_POST. Of course, maybe it's not really using that and it's using something else that just isn't clear to me.
For now, I'm just eliminating the subrule syntax and will embed everything in the main rule. But I'd like to understand what's going on.
So... what is the proper way to do this?

Do not use literal tokens inside parser rules (unless you know what you're doing).
For the grammar:
expression
    : '+' expression
    | ...
    ;
ADD_SUB
    : '+' | '-'
    ;
ANTLR will create a lexer rule for the literal '+', making the grammar really look like this:
expression
    : T__0 expression
    | ...
    ;
T__0 : '+';
ADD_SUB
    : '+' | '-'
    ;
causing the input + to never become an ADD_SUB token, because T__0 will always match it first. That is simply how the lexer operates: it tries to match as many characters as possible for every lexer rule, and when two (or more) rules match the same number of characters, the one defined first "wins".
Do something like this instead:
expression
    : value
    | '(' expression ')'
    | expression UNARY_PRE_OR_POST
    | (UNARY_PRE_OR_POST | ADD | SUB | EXCL | TILDE | type_cast) expression
    | expression (MUL | DIV | MOD) expression
    | expression (ADD | SUB) expression
    ;
value
    : ...
    | VariableName
    ;
VariableName
    : [a-z] [A-Za-z0-9_]*
    ;
UNARY_PRE_OR_POST
    : '++' | '--'
    ;
MUL   : '*';
DIV   : '/';
MOD   : '%';
ADD   : '+';
SUB   : '-';
EXCL  : '!';
TILDE : '~';

Changing how Antlr4 prints a tree

I am using Antlr4 in IntelliJ to make a small compiler for arithmetic expressions.
I want to print the tree and use this code snippet to do so.
JFrame frame = new JFrame("Tree");
JPanel panel = new JPanel();
TreeViewer viewr = new TreeViewer(Arrays.asList(
        parser.getRuleNames()), tree);
viewr.setScale(2);  // scale a little
panel.add(viewr);
frame.add(panel);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(400, 400);
frame.setVisible(true);
This makes a tree which looks like this, for the input 3*5\n
Is there a way to adjust this so it reads from top to bottom,
Statement
  Expression  \n
    INT * INT
    3     5
instead?
My grammar is defined as:
grammar Expression;

statement: expression ENDSTATEMENT            # printExpr
         | ID '=' expression ENDSTATEMENT     # assign
         | ENDSTATEMENT                       # blank
         ;

expression: expression MULTDIV expression     # MulDiv
          | expression ADDSUB expression      # AddSub
          | INT                               # int
          | FLOAT                             # float
          | ID                                # id
          | '(' expression ')'                # parens
          ;

ID           : [a-zA-Z]+ ;        // match identifiers
INT          : [0-9]+ ;           // match integers
MULTDIV      : ('*' | '/');       // match multiply or divide
ADDSUB       : ('+' | '-');       // match add or subtract
FLOAT        : INT '.' INT;       // match a floating point number
ENDSTATEMENT : '\r'? '\n' ;       // return newlines to parser (is end-statement signal)
WHITESPACE   : [ \t]+ -> skip ;   // ignore whitespace
No, not without changing the source of the tree viewer yourself.
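If a plain text dump is enough, though, you can print the tree yourself instead of using the viewer. The helper below is a sketch, not part of the ANTLR runtime: it walks the parse tree and prints rule names and token text top to bottom, indented by depth (it assumes Java 11+ for String.repeat).
import org.antlr.v4.runtime.Parser;
import org.antlr.v4.runtime.RuleContext;
import org.antlr.v4.runtime.tree.ParseTree;

public final class TreePrinter {
    // Print rule names for inner nodes and token text for leaves,
    // one node per line, indented by its depth in the tree.
    public static void print(ParseTree node, Parser parser, int indent) {
        String label = node instanceof RuleContext
                ? parser.getRuleNames()[((RuleContext) node).getRuleIndex()]
                : node.getText();
        System.out.println("  ".repeat(indent) + label);
        for (int i = 0; i < node.getChildCount(); i++) {
            print(node.getChild(i), parser, indent + 1);
        }
    }
}
Called as TreePrinter.print(tree, parser, 0), this produces a top-to-bottom listing of the nodes for the grammar above; tree.toStringTree(parser) is the built-in alternative, but it prints a single-line LISP-style string rather than the layout you describe.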

Scala Regex for less than equal to operator (<=)

I am trying to parse an expression with (<, <=, >=, >). All but <= work just fine. Can someone help with what the issue could be?
Code:
object MyTestParser extends RegexParsers {
  override def skipWhitespace = true

  private val expression: Parser[String] = """[a-zA-Z0-9\.]+""".r

  val operation: Parser[Try[Boolean]] =
    expression ~ ("<" | "<=" | ">=" | ">") ~ expression ^^ {
      case v1 ~ op ~ v2 => for {
        a <- Try(v1.toDouble)
        b <- Try(v2.toDouble)
      } yield op match {
        case "<"  => a < b
        case "<=" => a <= b
        case ">"  => a > b
        case ">=" => a >= b
      }
    }
}
Test:
"MyTestParser" should {
  "successfully parse <= condition" in {
    val parser = MyTestParser.parseAll(MyTestParser.operation, "10 <= 20")
    val result = parser match {
      case MyTestParser.Success(s, _) => s.get
      case MyTestParser.Failure(e, _) =>
        println(s"Parsing failed with error: $e")
        false
      case MyTestParser.Error(e, _) =>
        println(s"Parsing error: $e")
        false
    }
    result === true
  }
  "successfully parse >= condition" in {
    val result = MyTestParser.parseAll(MyTestParser.operation, "50 >= 20").get
    result === scala.util.Success(true)
  }
}
Error for <= condition:
Parsing failed with error: string matching regex `[a-zA-Z0-9\.]+' expected but `=' found
You need to change the order of the alternatives so that the longer alternatives are checked first:
expression ~ ("<=" | ">=" | ">" | "<") ~ expression ^^ {
If a shorter alternative matches first, the others are not considered at all.
Also note that a period does not have to be escaped inside a character class; this will do:
"""[a-zA-Z0-9.]+""".r

Your problem is that the "<" alternative matches the first character of <=, so the parser then moves on to the second expression and fails on the leftover =. If you change the order so that "<=" comes first, it will be matched instead, and you will get the desired result.

@Prateek: it does not work because the | alternation works just like a boolean OR: it does not search further once one of the patterns in the or-chain is satisfied at a given point.
So, when you use | between patterns, if two or more patterns share a common prefix, you have to place the longest one first.
As a general rule: order the patterns from the longest to the shortest.
Changing the relevant line like this makes it work:
// '>=' already comes before '>', which is why those worked for the same reason
expression ~ ("<=" | "<" | ">=" | ">") ~ expression ^^ {
Or, following the general rule:
expression ~ ("<=" | ">=" | "<" | ">") ~ expression ^^ {

How to define unrecognized rules in Ocamlyacc

I'm working on a company project where I have to create a compiler for a language using Ocamlyacc and Ocamllex. I want to know whether it is possible to define a rule in my Ocamlyacc parser that can tell me that no rule of my grammar matches the syntax of an input.
I have to insist that I am a beginner with Ocamllex/Ocamlyacc.
Thank you a lot for your help.
If no rule in your grammar matches the input, then the Parsing.Parse_error exception is raised. Usually, this is what you want.
There is also a special token called error that allows you to resynchronize your parser state. You can use it in your rules as if it were a real token produced by the lexer; cf. the eof token.
Also, I would suggest using menhir instead of the more venerable ocamlyacc. It is easier to use and debug, and it also comes with a good library of predefined grammars.
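For example, a minimal sketch of a driver that reports the failure (assuming the usual Parser.main entry point and Lexer.token rule, as in the ocamlyacc example in the answer below):
let () =
  let lexbuf = Lexing.from_channel stdin in
  try
    (* Parser.main raises Parsing.Parse_error when no grammar rule applies *)
    let result = Parser.main Lexer.token lexbuf in
    Printf.printf "%d\n" result
  with Parsing.Parse_error ->
    Printf.eprintf "Syntax error near character %d\n" (Lexing.lexeme_start lexbuf);
    exit 1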
When you write a compiler for a language, the first step is to run your lexer and check whether your program is good from a lexical point of view.
See the example below:
{
  open Parser        (* The type token is defined in parser.mli *)
  exception Eof
}
rule token = parse
    [' ' '\t']          { token lexbuf }    (* skip blanks *)
  | ['\n' ]             { EOL }
  | ['0'-'9']+ as lxm   { INT(int_of_string lxm) }
  | '+'                 { PLUS }
  | '-'                 { MINUS }
  | '*'                 { TIMES }
  | '/'                 { DIV }
  | '('                 { LPAREN }
  | ')'                 { RPAREN }
  | eof                 { raise Eof }
It's a lexer to recognize some arithmetic expressions.
If your lexer accepts the input, then you give the sequence of lexemes to the parser, which tries to find out whether an AST can be built with the specified grammar. See:
%token <int> INT
%token PLUS MINUS TIMES DIV
%token LPAREN RPAREN
%token EOL
%left PLUS MINUS            /* lowest precedence */
%left TIMES DIV             /* medium precedence */
%nonassoc UMINUS            /* highest precedence */
%start main                 /* the entry point */
%type <int> main
%%
main:
    expr EOL                    { $1 }
;
expr:
    INT                         { $1 }
  | LPAREN expr RPAREN          { $2 }
  | expr PLUS expr              { $1 + $3 }
  | expr MINUS expr             { $1 - $3 }
  | expr TIMES expr             { $1 * $3 }
  | expr DIV expr               { $1 / $3 }
  | MINUS expr %prec UMINUS     { - $2 }
;
This is a little program to parse arithmetic expressions. A program can be rejected at this step because there is no grammar rule that can be applied to build an AST for it. There is no way to define "unrecognized" rules; instead, you write a grammar which defines how a program is accepted or rejected.
let _ =
  try
    let lexbuf = Lexing.from_channel stdin in
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline (); flush stdout
    done
  with Lexer.Eof ->
    exit 0
If you compile the lexer, the parser and the last program, you get:
1 + 2 is accepted because there are no lexical errors and an AST can be built for this expression.
1 ++ 2 is rejected: there are no lexical errors, but there is no rule to build such an AST.
You can find more documentation here: http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual026.html
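If you do want the parser itself to report the problem and keep going, the special error token mentioned in the other answer can appear in a rule as if it were a real token. A hedged sketch, extending the main rule above:
main:
    expr EOL        { $1 }
  | error EOL       { prerr_endline "syntax error"; 0 }
;
When no ordinary rule applies, the generated parser pops states until it can shift error, discards input until it can shift EOL, and then continues, so one bad line does not abort the whole run.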

Yacc grammar producing incorrect terminal

I've been working on a hobby compiler for a while now, using lex and yacc for the parsing stage. This is all working fine for the majority of things, but when I added in if statements, the production rule for symbols is now giving the previous (or next?) item on the stack instead of the symbol value needed.
Grammar is given below with hopefully unrelated rules taken out:
%{
...
%}

%define parse.error verbose
%token ...

%%

Program:
    Function                    { root->addChild($1); }
    ;

Function:
    Type Identifier '|' ArgumentList '|' StatementList END
                                { $$ = new FunctionDef($1, $2, $4, $6); }

/******************************************/
/* Statements and control flow ************/
/******************************************/

Statement:
    Expression Delimiter
    | VariableDeclaration Delimiter
    | ControlFlowStatement Delimiter
    | Delimiter
    ;

ControlFlowStatement:
    IfStatement
    ;

IfStatement:
    IF Expression StatementList END
                                { $$ = new IfStatement($2, $3); }
    | IF Expression StatementList ELSE StatementList END
                                { $$ = new IfStatement($2, $3, $5); }
    ;

VariableDeclaration:
    Type Identifier             { $$ = new VariableDeclaration($1, $2); }
    | Type Identifier EQUALS Expression
                                { $$ = new VariableDeclaration($1, $2, $4); }
    ;

StatementList:
    StatementList Statement     { $1->addChild($2); }
    | Statement                 { $$ = new GenericList($1); }
    ;

Delimiter:
    ';'
    | NEWLINE
    ;

Type:
    ...

Expression:
    ...

PostfixExpression:
    Value '[' Expression ']'    { std::cout << "TODO: indexing operators ([ ])" << std::endl; }
    | Value '.' SYMBOL          { std::cout << "TODO: member access" << std::endl; }
    | Value INCREMENT           { $$ = new UnaryExpression(UNARY_POSTINC, $1); }
    | Value DECREMENT           { $$ = new UnaryExpression(UNARY_POSTDEC, $1); }
    | Value '(' ')'             { $$ = new FunctionCall($1, NULL); }
    | Value '(' ExpressionList ')'
                                { $$ = new FunctionCall($1, $3); }
    | Value
    ;

Value:
    BININT                      { $$ = new Integer(yytext, 2); }
    | HEXINT                    { $$ = new Integer(yytext, 16); }
    | DECINT                    { $$ = new Integer(yytext); }
    | FLOAT                     { $$ = new Float(yytext); }
    | SYMBOL                    { $$ = new Symbol(yytext); }
    | STRING                    { $$ = new String(yytext); }
    | LambdaFunction
    | '(' Expression ')'        { $$ = $2; }
    | '[' ExpressionList ']'    { $$ = $2; }
    ;

LambdaFunction:
    ...
%%
I cannot work out what it is about the control flow code that makes the Symbol rule match something that isn't classed as a symbol by the lex definition:
symbol      [a-zA-Z_]+(alpha|digit)*
...
{symbol}    { return SYMBOL; }
Any help from somebody who knows about yacc and grammars in general would be very much appreciated. Also example files of the syntax it parses can be shown if necessary.
Thanks!
You cannot count on the value of yytext outside of a flex action.
Bison grammars typically read a lookahead token before deciding on how to proceed, so in a bison action, yytext has already been replaced with the token value of the lookahead token. (You can't count on that either, though: sometimes no lookahead token is needed.)
So you need to make a copy of yytext before the flex action returns and make that copy available to the bison grammar by placing it into the yylval semantic union.
See this bison FAQ entry
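A minimal sketch of what that can look like, assuming SYMBOL should carry its spelling as a C string (the member name sval and the changes to the Value rule are illustrative, not taken from the question). In the grammar file:
%union {
    char *sval;               /* text of a SYMBOL token; existing members elided */
}
%token <sval> SYMBOL

Value:
    ...
    | SYMBOL                  { $$ = new Symbol($1); free($1); /* assuming Symbol copies the string */ }
And in the flex file (strdup needs <string.h>; the copy is freed by whoever consumes it):
{symbol}    { yylval.sval = strdup(yytext); return SYMBOL; }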
By the way, the following snippet from your flex file is incorrect:
symbol [a-zA-Z_]+(alpha|digit)*
In that regular expression, alpha and digit are just ordinary strings, so it is the same as [a-zA-Z_]+("alpha"|"digit")*, which means that it will match, for example, a_digitdigitdigit but not a_123. (It would have matched a_digitdigitdigit without the part following the +, so I presume that wasn't your intention.)
On the whole, I think it's better to use Posix character classes than either hand-written character classes or defined symbols, so I would write that as
symbol [[:alpha:]_]([[:alnum:]_]*[[:alnum:]])?
assuming that your intention is that a symbol can start but not end with an underscore, and end but not start with a digit. Using Posix character classes requires you to execute flex with the correct locale -- almost certainly the C locale -- but so do character ranges, so there is nothing to be lost by using the self-documenting Posix classes.
(Of course, I have no idea what your definitions of {alpha} and {digit} are, but it seems to me that they are either the same as [[:alpha:]] and [[:digit:]], in which case they are redundant, or different from the Posix classes, in which case they are confusing to the reader.)