How to match only part of an expression to a string in ocamllex - OCaml

I have a simple ocamllex program where the rules section looks somewhat like this:
let digits = ['0'-'9']
let variables = 'X' | 'Z'
rule addinlist = parse
  | ['\n'] { addinlist lexbuf }
  | "Inc" '(' variables+ '(' digits+ ')' ')' as ine
      { !inputstringarray.(!inputstringarrayi) <- ine;
        inputstringarrayi := !inputstringarrayi + 1;
        addinlist lexbuf }
  | _ as c
      { printf "Unrecognized character: %c\n" c;
        addinlist lexbuf }
  | eof { () }
My question is: suppose I want to match Inc(X(7)) so that I can convert it to my abstract syntax, which is "Inc of var of int". I want my lexer to give me separate strings while reading Inc(X(7)): "Inc" as one string (say inb), followed by "X" as another string (say inc), followed by "7" as another string (say ind), so that I can play around with these strings inb, inc, and ind instead of being stuck with one whole string ine, as my program gives now. How do I go about this? I hope my question is clear.
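For reference, ocamllex lets you bind sub-patterns with as, so each piece of the match can get its own name. A minimal sketch of the rule above rewritten that way (the printf action is only a placeholder for whatever you want to do with the three bindings; inb and ind come out as string, while inc comes out as char because variables always matches a single character):

rule addinlist = parse
  | ['\n'] { addinlist lexbuf }
  | ("Inc" as inb) '(' (variables as inc) '(' (digits+ as ind) ')' ')'
      { (* inb = "Inc", inc = 'X' or 'Z', ind = the digit string, e.g. "7" *)
        printf "keyword=%s variable=%c argument=%s\n" inb inc ind;
        addinlist lexbuf }
  | _ as c { printf "Unrecognized character: %c\n" c; addinlist lexbuf }
  | eof { () }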

Related

OCaml + Menhir: How to parse OCaml-like tuple patterns?

I'm quite a beginner with Menhir.
I'm wondering how to parse OCaml-like tuple patterns in my own language, which is quite similar to OCaml.
For example, in the expression let a,b,c = ...,
a, b, c should be parsed as Tuple (Var "a", Var "b", Var "c").
But with the following parser definition, the above example is parsed as Tuple (Tuple (Var "a", Var "b"), Var "c").
I am wondering how to fix the definition below so that it parses patterns the way OCaml does.
I have checked OCaml's parser.mly, but I'm not sure how to implement this.
I think my definition is similar to OCaml's one ...
What magic do they use?
%token LPAREN
%token RPAREN
%token EOF
%token COMMA
%left COMMA
%token <string> LIDENT
%token UNDERBAR
%nonassoc below_COMMA
%start <Token.token> toplevel
%%
toplevel:
  | p = pattern EOF { p }
pattern:
  | p = simple_pattern { p }
  | psec = pattern_tuple %prec below_COMMA
      { Ppat_tuple (List.rev psec) }
simple_pattern:
  | UNDERBAR { Ppat_any }
  | LPAREN RPAREN { Ppat_unit }
  | v = lident { Ppat_var v }
  | LPAREN p = pattern RPAREN { p }
pattern_tuple:
  | seq = pattern_tuple; COMMA; p = pattern { p :: seq }
  | p1 = pattern; COMMA; p2 = pattern { [p2; p1] }
lident:
  | l = LIDENT { Pident l }
The result is the following:
[~/ocaml/error] menhir --interpret --interpret-show-cst ./parser.mly
File "./parser.mly", line 27, characters 2-42:
Warning: production pattern_tuple -> pattern_tuple COMMA pattern is never reduced.
Warning: in total, 1 productions are never reduced.
LIDENT COMMA LIDENT COMMA LIDENT
ACCEPT
[toplevel:
  [pattern:
    [pattern_tuple:
      [pattern:
        [pattern_tuple:
          [pattern: [simple_pattern: [lident: LIDENT]]]
          COMMA
          [pattern: [simple_pattern: [lident: LIDENT]]]
        ]
      ]
      COMMA
      [pattern: [simple_pattern: [lident: LIDENT]]]
    ]
  ]
  EOF
]
It contains a typical shift-reduce conflict, and you made a mistake in its resolution when specifying the precedences. Please open any book about parsing with Yacc and read up on shift-reduce conflicts and their resolution.
Let's see it using your rules. Suppose we have the following input, and the parser is looking ahead at the second ,:
( p1 , p2 , ...
↑
Yacc is looking at this second COMMA token
There are two possibilities:
Reduce: take p1 , p2 as a pattern, using the second rule of pattern.
Shift: consume the COMMA token and move the lookahead cursor right, trying to make the comma-separated list longer.
You can see the conflict easily by removing %prec below_COMMA from the pattern rule:
$ menhir z.mly # the %prec thing is removed
...
File "z.mly", line 4, characters 0-9:
Warning: the precedence level assigned to below_COMMA is never useful.
Warning: one state has shift/reduce conflicts.
Warning: one shift/reduce conflict was arbitrarily resolved.
Many Yacc documents say that in this case Yacc prefers shift, and this default usually matches the human intention, including in your case. So one of the solutions is simply to drop the %prec below_COMMA annotation and ignore the warning.
If you do not like having shift-reduce conflict warnings (that's the spirit!), you can explicitly state which rule should be chosen in this case using precedences, just like OCaml's parser.mly does. (BTW, OCaml's parser.mly is a jewel box of shift-reduce resolutions. If you are not familiar with them, you should check one or two.)
At a shift-reduce conflict, Yacc chooses the action with the higher precedence. For a shift, the precedence is that of the token at the lookahead cursor, which is COMMA in this case. The precedence of a reduce can be declared with a %prec TOKEN suffix on the corresponding rule. If you do not specify it, I guess the precedence of the rule is undefined, and this is why the shift-reduce conflict is reported when you remove %prec below_COMMA.
So now the question is: which has the higher precedence, COMMA or below_COMMA? This should be declared in the preamble of the .mly file. (And this is why I asked the questioner to show that part.)
...
%left COMMA
...
%nonassoc below_COMMA
I will skip what %left and %nonassoc mean, since every Yacc book explains them. Here, the below_COMMA pseudo-token is declared below COMMA, which means below_COMMA has a higher precedence than COMMA. Therefore the example above chooses Reduce, and you get ( (p1, p2), ..., against the intention.
The correct precedence declaration is the opposite. To let Shift happen, below_COMMA must come above COMMA, so that it has the lower precedence:
...
%nonassoc below_COMMA
%left COMMA
...
See OCaml's parser.mly; it does exactly this. Putting the "below" thing above sounds totally backwards, but it is not Menhir's fault. It is Yacc's unfortunate tradition. Blame it. OCaml's parser.mly already has a comment about it:
The precedences must be listed from low to high.
This is how I would rewrite it:
%token LPAREN
%token RPAREN
%token EOF
%token COMMA
%token LIDENT
%token UNDERBAR
%start toplevel
%%
toplevel:
  | p = pattern EOF { p }
pattern:
  | p = simple_pattern; tail = pattern_tuple_tail
      { match tail with
        | [] -> p
        | _ -> Ppat_tuple (p :: tail)
      }
pattern_tuple_tail:
  | COMMA; p = simple_pattern; seq = pattern_tuple_tail { p :: seq }
  | { [] }
simple_pattern:
  | UNDERBAR { Ppat_any }
  | LPAREN RPAREN { Ppat_unit }
  | v = lident { Ppat_var v }
  | LPAREN p = pattern RPAREN { p }
lident:
  | l = LIDENT { Pident l }
A major change is that a tuple element is no longer a pattern but a simple_pattern. A pattern itself is a comma-separated sequence of one or more elements.
$ menhir --interpret --interpret-show-cst parser.mly
LIDENT COMMA LIDENT COMMA LIDENT
ACCEPT
[toplevel:
  [pattern:
    [simple_pattern: [lident: LIDENT]]
    [pattern_tuple_tail:
      COMMA
      [simple_pattern: [lident: LIDENT]]
      [pattern_tuple_tail:
        COMMA
        [simple_pattern: [lident: LIDENT]]
        [pattern_tuple_tail:]
      ]
    ]
  ]
  EOF
]

ocamllex regex syntax error

I have some basic ocamllex code, which was written by my professor, and seems to be fine:
{ type token = EOF | Word of string }
rule token = parse
| eof { EOF }
| [’a’-’z’ ’A’-’Z’]+ as word { Word(word) }
| _ { token lexbuf }
{
  (*module StringMap = BatMap.Make(String) in *)
  let lexbuf = Lexing.from_channel stdin in
  let wordlist =
    let rec next l = match token lexbuf with
        EOF -> l
      | Word(s) -> next (s :: l)
    in next []
  in
  List.iter print_endline wordlist
}
However, running ocamllex wordcount.mll produces
File "wordcount.mll", line 4, character 3: syntax error.
This indicates that there is an error at the first [ in the regex in the fourth line here. What is going on?
You seem to have curly quotes (also called "smart quotes" -- ugh) in your text. You need regular old single quotes.
curly quote: ’
old fashioned single quote: '
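For reference, with plain single quotes the offending rule reads:
| ['a'-'z' 'A'-'Z']+ as word { Word(word) }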

How to define unrecognized rules in Ocamlyacc

I'm working on a company project where I have to create a compiler for a language using Ocamlyacc and Ocamllex. I want to know whether it is possible to define a rule in my Ocamlyacc parser that can tell me that no rule of my grammar matches the syntax of an input.
I have to insist that I am a beginner in Ocamllex/Ocamlyacc.
Thank you a lot for your help.
If no rule in your grammar matches the input, then a Parsing.Parse_error exception is raised. Usually, this is what you want.
There is also a special token called error that allows you to resynchronize your parser state. You can use it in your rules as if it were a real token produced by the lexer, cf. the eof token.
Also, I would suggest using Menhir instead of the more venerable ocamlyacc. It is easier to use and debug, and it also comes with a good library of predefined grammars.
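As a hedged illustration of that error token (not part of the original answer), a line-based grammar such as the calculator shown further down could recover from a malformed line like this:

main:
    expr EOL   { $1 }
  | error EOL  { prerr_endline "Syntax error: line skipped"; 0 (* dummy result for the bad line *) }
;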
When you write a compiler for a language, the first step is to run your lexer and check whether your program is well formed from a lexical point of view.
See the example below:
{
open Parser (* The type token is defined in parser.mli *)
exception Eof
}
rule token = parse
[' ' '\t'] { token lexbuf } (* skip blanks *)
| ['\n' ] { EOL }
| ['0'-'9']+ as lxm { INT(int_of_string lxm) }
| '+' { PLUS }
| '-' { MINUS }
| '*' { TIMES }
| '/' { DIV }
| '(' { LPAREN }
| ')' { RPAREN }
| eof { raise Eof }
It's a lexer to recognize some arithmetic expressions.
If your lexer accepts the input, then you give the sequence of lexemes to the parser, which tries to find out whether an AST can be built with the specified grammar. See:
%token <int> INT
%token PLUS MINUS TIMES DIV
%token LPAREN RPAREN
%token EOL
%left PLUS MINUS /* lowest precedence */
%left TIMES DIV /* medium precedence */
%nonassoc UMINUS /* highest precedence */
%start main /* the entry point */
%type <int> main
%%
main:
expr EOL { $1 }
;
expr:
INT { $1 }
| LPAREN expr RPAREN { $2 }
| expr PLUS expr { $1 + $3 }
| expr MINUS expr { $1 - $3 }
| expr TIMES expr { $1 * $3 }
| expr DIV expr { $1 / $3 }
| MINUS expr %prec UMINUS { - $2 }
;
This is a little program to parse arithmetic expressions. A program can be rejected at this step because there is no grammar rule that can be applied to obtain an AST at the end. There is no way to define "unrecognized" rules; instead, you write a grammar which defines how a program is accepted or rejected.
let _ =
  try
    let lexbuf = Lexing.from_channel stdin in
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline(); flush stdout
    done
  with Lexer.Eof ->
    exit 0
If you compile the lexer, the parser and the last program, you have:
1 + 2 is accepted because there are no lexical errors and an AST can be built for this expression.
1 ++ 2 is rejected: there are no lexical errors, but there is no rule to build such an AST.
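Note that with the driver above as written, that rejection shows up as an uncaught Parsing.Parse_error exception. A minimal sketch of catching it (an addition on top of the answer's code, not part of it):

let _ =
  let lexbuf = Lexing.from_channel stdin in
  try
    while true do
      let result = Parser.main Lexer.token lexbuf in
      print_int result; print_newline (); flush stdout
    done
  with
  | Lexer.Eof -> exit 0
  | Parsing.Parse_error ->
      (* reached when no grammar rule matches the input *)
      prerr_endline "Syntax error: no grammar rule matches this input";
      exit 1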
You can find more documentation here: http://caml.inria.fr/pub/docs/manual-ocaml-4.00/manual026.html

OCaml interpreter: evaluate a function inside a function

I'm trying to write an interpreter in OCaml and I have a problem here.
In my program, I want to call a function like this, for example:
print (get_line 4) // print: print to stdout, get_line: get a specific line in a file
How can I do that? The problem is in our parser, I think, as it defines how a program will be run, how a function is defined, and the flow of a program. This is what I have so far in the parser and lexer (code below), but it doesn't seem to work. I don't really see any difference between my code and the calculator on the OCaml site, where the statement inside the brackets is evaluated first and then returns its value to its parent operation for the next evaluation.
In my interpreter, the function get_line inside the brackets is evaluated first, but I don't think it returns the value to the print function, or it does but with the wrong type (I checked, but I don't think that's the error).
One difference between the calculator and my interpreter is that the calculator works with primitive types while mine works with functions. But they should be similar.
This is my code, just a part of it:
parser.mly:
%token ODD
%token CUT
%start main
%type <Path.term list> main
%%
main:
| expr EOL main {$1 :: $3}
| expr EOF { [$1] }
| EOL main { $2 }
;
expr:
| ODD INT { Odd $2}
| ODD LPAREN INT RPAREN expr { Odd $3 }
| CUT INT INT { Cut ($2, $3)}
| CUT INT INT expr { Cut ($2, $3) }
lexer.mll:
{
open Parser
}
(* define all keyword used in the program *)
rule main =
parse
| ['\n'] { EOL }
| ['\r']['\n'] { EOL }
| [' ''\t''\n'] { main lexbuf }
| '(' { LPAREN }
| ')' { RPAREN }
| "cut" { CUT }
| "trunclength" { TRUNCLENGTH }
| "firstArithmetic" { FIRSTARITH }
| "f_ArithmeticLength" { F_ARITHLENGTH }
| "secondArithmetic" { SECARITH }
| "s_ArithmeticLength" { S_ARITHLENGTH }
| "odd" { ODD }
| "oddLength" { ODDLENGTH }
| "zip" { ZIP }
| "zipLength" { ZIPLENGTH }
| "newline" { NEWLINE }
| eof { EOF }
| ['0' - '9']+ as lxm { INT(int_of_string lxm) }
| ['a'-'z''A'-'Z'] ['a'-'z''A'-'Z''0'-'9']* as lxm { STRING lxm }
Regarding this rule from your parser:
| ODD LPAREN INT RPAREN expr { Odd $3 }
Your grammar rule requires an INT between the parentheses. You need to change that to an expr. There are a number of other issues with this, but I'll leave it at that.
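A sketch of that change (assuming the Odd constructor in Path is adjusted to carry a sub-expression rather than a bare int):

| ODD LPAREN expr RPAREN { Odd $3 }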
First, your parser only tries to build a list of Path.term, but what do you want to do with it?
Then, there are many things wrong with your parser, so I don't really know where to start. For instance, the second and fourth cases of the expr rule totally ignore the last expr. Moreover, your parser only recognizes expressions containing "odd <int>" (or "odd (<int>)") and "cut <int> <int>", so how is it supposed to evaluate print and get_line? You should edit your question and try to make it clearer.
To evaluate expressions, you can
do it directly inside the semantic-actions (as in the calculator example),
or (better) build an AST (for Abstract Syntax Tree) with your parser and then interpret it.
If you want to interpret print (get_line 4), your parser needs to know what print and get_line mean. In your code, your parser will see print or get_line as a STRING token (carrying a string value). As they seem to be keywords in your language, your lexer should recognize them and return specific tokens.
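A hedged sketch of that last point (the PRINT and GETLINE tokens and the Print/GetLine constructors are illustrative names, not anything from the question). In the lexer, the keyword rules go before the generic identifier rule so they win the tie on equal-length matches:

| "print" { PRINT }
| "get_line" { GETLINE }

and in the parser, rules along these lines would let print (get_line 4) build Print (GetLine 4):

expr:
  | PRINT expr { Print $2 }
  | GETLINE INT { GetLine $2 }
  | LPAREN expr RPAREN { $2 }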

OCamllex matching beginning of line?

I am messing around writing a toy programming language in OCaml with ocamllex, and I am trying to make the language sensitive to indentation changes, Python-style, but I am having a problem matching the beginning of a line with ocamllex's regex rules. I am used to using ^ to match the beginning of a line, but in OCaml that is the string concatenation operator. Google searches haven't been turning up much for me, unfortunately :( Anyone know how this would work?
I'm not sure if there is explicit support for zero-length matching symbols (like ^ in Perl-style regular expressions, which matches a position rather than a substring). However, you should be able to let your lexer turn newlines into an explicit token, something like this:
parser.mly
%token EOL
%token <int> EOLWS
/* other stuff here */
%%
main:
    EOL stmt    { MyStmtDataType(0, $2) }
  | EOLWS stmt  { MyStmtDataType($1 - 1, $2) }
;
lexer.mll
{
open Parser
exception Eof
}
rule token = parse
    [' ' '\t']            { token lexbuf }  (* skip other blanks *)
  | ['\n'][' ']+ as lxm   { EOLWS(String.length(lxm)) }
  | ['\n']                { EOL }
  (* ... *)
This is untested, but the general idea is:
Treat newlines as statement 'starters'.
Measure the whitespace that immediately follows the newline and pass its length as an int.
Caveat: you will need to preprocess your input so that it starts with a single \n if it doesn't already (see the sketch below).
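One hedged way to do that preprocessing (an illustration, assuming OCaml 4.14+ for In_channel.input_all): read the whole input into a string and feed the lexer from it with a newline prepended.

let make_lexbuf ic =
  (* prepend a newline so the very first statement is also preceded by one *)
  let source = In_channel.input_all ic in
  Lexing.from_string ("\n" ^ source)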