Changing the State of Lexing.lexbuf - ocaml

I am writing a lexer for Brainfuck with ocamllex, and to implement its loops I need to change the state of lexbuf so it can return to an earlier position in the stream.
Background info on Brainfuck (skippable)
In Brainfuck, a loop is accomplished by a pair of square brackets with
the following rules:
[ -> proceed and evaluate the next token
] -> if the current cell's value is not 0, return to the matching [
Thus, the following code evaluates to 15:
+++ [ > +++++ < - ] > .
it reads:
In the first cell, assign 3 (increment 3 times)
Enter loop, move to the next cell
Assign 5 (increment 5 times)
Move back to the first cell, and subtract 1 from its value
Hit the closing square bracket; the current (first) cell now equals 2, so jump back to [ and proceed through the loop again
Keep going until the first cell equals 0, then exit the loop
Move to the second cell and output the value with .
The value in the second cell will have been incremented to 15
(incremented by 5, three times).
Problem:
Basically, in the header section of the brainfuck.mll file I wrote two functions, push_curr_p and pop_last_p, that take care of pushing and popping the position of the last [: they save and restore lexbuf's current position via an int list ref named loopstack:
{ (* Header *)
  let tape = Array.make 100 0
  let tape_pos = ref 0
  let loopstack = ref []

  let push_curr_p (lexbuf : Lexing.lexbuf) =
    let p = lexbuf.Lexing.lex_curr_p in
    let curr_pos = p.Lexing.pos_cnum in
    (* Saving / pushing the position of `[` onto loopstack *)
    ( loopstack := curr_pos :: !loopstack
    ; lexbuf
    )

  let pop_last_p (lexbuf : Lexing.lexbuf) =
    match !loopstack with
    | [] -> lexbuf
    | hd :: tl ->
      (* This is where I attempt to bring lexbuf back *)
      ( lexbuf.Lexing.lex_curr_p <- { lexbuf.Lexing.lex_curr_p with Lexing.pos_cnum = hd }
      ; loopstack := tl
      ; lexbuf
      )
}
(* Rules *)
rule brainfuck = parse
  | '[' { brainfuck (push_curr_p lexbuf) }
  | ']' { (* current cell's value must be 0 to exit the loop *)
          if tape.(!tape_pos) = 0
          then brainfuck lexbuf
          (* this needs to bring lexbuf back to the previous `[`
           * and proceed with the parsing *)
          else brainfuck (pop_last_p lexbuf)
        }
  (* ... other rules ... *)
The other rules work just fine, but [ and ] seem to be ignored. The problem obviously lies in loopstack and in how I get and set the lex_curr_p state. I'd appreciate any leads.

lex_curr_p is meant to keep track of the current position, so that you can use it in error messages and the like. Setting it to a new value won't make the lexer actually seek back to an earlier position in the file. In fact I'm 99% sure that you can't make the lexer loop like that no matter what you do.
So you can't use ocamllex to implement the whole interpreter like you're trying to do. What you can do (and what ocamllex is designed to do) is to translate the input stream of characters into a stream of tokens.
In other languages that means translating a character stream like var xyz = /* comment */ 123 into a token stream like VAR, ID("xyz"), EQ, INT(123). So lexing helps in three ways: it finds where one token ends and the next begins, it categorizes tokens into different types (identifiers vs. keywords etc.), and it discards tokens you don't need (whitespace and comments). This can simplify further processing a lot.
Lexing Brainfuck is a lot less helpful as all Brainfuck tokens only consist of a single character anyway. So finding out where each token ends and the next begins is a no-op and finding out the type of the token just means comparing the character against '[', '+' etc. So the only useful thing a Brainfuck lexer does is to discard whitespace and comments.
So what your lexer would do is turn the input [,[+-. lala comment ]>] into something like LOOP_START, IN, LOOP_START, INC, DEC, OUT, LOOP_END, MOVE_RIGHT, LOOP_END, where LOOP_START etc. belong to a discriminated union that you (or your parser generator if you use one) defined and made the lexer output.
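As a sketch of that idea in plain OCaml (the variant names here are made up for illustration; with ocamllex you would return these tokens from the rule actions instead of building the list by hand):

```ocaml
(* Illustrative token type for Brainfuck. *)
type token =
  | LOOP_START | LOOP_END
  | INC | DEC
  | MOVE_LEFT | MOVE_RIGHT
  | IN | OUT

(* Turn the raw source into a token list, discarding everything
   that is not one of the eight command characters. *)
let tokenize (src : string) : token list =
  let tok = function
    | '[' -> Some LOOP_START | ']' -> Some LOOP_END
    | '+' -> Some INC        | '-' -> Some DEC
    | '<' -> Some MOVE_LEFT  | '>' -> Some MOVE_RIGHT
    | ',' -> Some IN         | '.' -> Some OUT
    | _   -> None  (* whitespace and comments *)
  in
  List.filter_map tok (List.init (String.length src) (String.get src))
```

For instance, tokenize "[,[+-. lala comment ]>]" yields exactly the LOOP_START, IN, LOOP_START, ... stream described above.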
If you want to use a parser generator, you'd define the token types in the parser's grammar and make the lexer produce values of those types. Then the parser can just parse the token stream.
If you want to do the parsing by hand, you'd call the lexer's token function by hand in a loop to get all the tokens. In order to implement loops, you'd have to store the already-consumed tokens somewhere to be able to loop back. In the end it'd end up being more work than just reading the input into a string, but for a learning exercise I suppose that doesn't matter.
That said, I would recommend going all the way and using a parser generator to create an AST. That way you don't have to create a buffer of tokens for looping and having an AST actually saves you some work (you no longer need a stack to keep track of which [ belongs to which ]).
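To make that concrete, here is a hedged sketch of such an AST in OCaml (type and function names are invented for the example, and I/O is elided): because a loop node owns its body, the matching of [ and ] is settled once at parse time, and evaluation needs no position stack at all.

```ocaml
(* Illustrative Brainfuck AST: a loop owns its body. *)
type instr =
  | Inc | Dec | Left | Right | In | Out
  | Loop of instr list

(* Evaluate a program on a mutable tape; returns the final cell position.
   I/O instructions are ignored in this sketch. *)
let rec eval tape pos = function
  | [] -> pos
  | Loop body :: rest ->
      if tape.(pos) = 0 then eval tape pos rest
      else
        let pos' = eval tape pos body in
        eval tape pos' (Loop body :: rest)  (* re-test the same loop *)
  | Inc :: rest -> tape.(pos) <- tape.(pos) + 1; eval tape pos rest
  | Dec :: rest -> tape.(pos) <- tape.(pos) - 1; eval tape pos rest
  | Right :: rest -> eval tape (pos + 1) rest
  | Left :: rest -> eval tape (pos - 1) rest
  | (In | Out) :: rest -> eval tape pos rest
```

Running the question's example +++ [ > +++++ < - ] > . as [Inc; Inc; Inc; Loop [Right; Inc; Inc; Inc; Inc; Inc; Left; Dec]; Right; Out] leaves 15 in the second cell, as expected.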

Related

Removing everything between nested parentheses

For removing everything between parentheses, I currently use:
SELECT
REGEXP_REPLACE('(aaa) bbb (ccc (ddd) / eee)', "\\([^()]*\\)", "");
Which is incorrect, because it gives bbb (ccc / eee), as that removes inner parentheses only.
How to remove everything between nested parentheses? So the expected result for this example is bbb.
In the case of Google BigQuery, this is only possible if you know the maximum nesting depth, because BigQuery uses the re2 library, which doesn't support regex recursion.
let r = /\((?:(?:\((?:[^()])*\))|(?:[^()]))*\)/g
let s = "(aaa) bbb (ccc (ddd) / eee)"
console.log(s.replace(r, ""))
If you can iterate on the regular expression operation until you reach a fixed point you can do it like this:
repeat {
old_string = string
string := remove_non_nested_parens_using_regex(string)
} until (string == old_string)
For instance if we have
((a(b)) (c))x)
on the first iteration we remove (b) and (c): sequences which begin with (, end with ) and do not contain parentheses, matched by \([^()]*\). We end up with:
((a) )x)
Then on the next iteration, (a) is gone:
( )x)
and after one more iteration, ( ) is gone:
x)
When we try removing more parentheses, there is no further change, and so the algorithm terminates with x).
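The same fixed-point idea can be written without any regex engine at all; here is a sketch in OCaml (function names are mine) that removes flat, non-nested groups in one pass and iterates until nothing changes:

```ocaml
(* Remove every "flat" parenthesized run: a '(' ... ')' containing
   no nested parentheses. One pass over the string. *)
let remove_flat s =
  let buf = Buffer.create (String.length s) in
  let n = String.length s in
  let i = ref 0 in
  while !i < n do
    if s.[!i] = '(' then begin
      (* scan forward for the first '(' or ')' *)
      let j = ref (!i + 1) in
      while !j < n && s.[!j] <> '(' && s.[!j] <> ')' do incr j done;
      if !j < n && s.[!j] = ')' then
        i := !j + 1               (* flat group: drop it entirely *)
      else begin
        Buffer.add_char buf '(';  (* nested or unclosed: keep the '(' *)
        incr i
      end
    end else begin
      Buffer.add_char buf s.[!i];
      incr i
    end
  done;
  Buffer.contents buf

(* Iterate until a fixed point is reached. *)
let rec strip_nested s =
  let s' = remove_flat s in
  if s' = s then s else strip_nested s'
```

On "((a(b)) (c))x)" this goes through exactly the "((a) )x)" and "( )x)" steps shown above and ends at "x)".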

How to safely discard an unused variable in OCaml

Here's the relevant part of my code:
(* Read the input file *)
let a = input_char inc in
(* Check if a is a number *)
if char_is_number a then
  (* Read the second letter *)
  let b = input_char inc in
  (* Discard the space *)
  input_char inc;
where inc is an in_channel. It's reading from a .map file (by the way, if you have good libraries that I am unaware of that can handle .map files, I would gladly take suggestions). input_char reads the next character.
Basically, I'm reading a number and a character. The third character should be a space (I will do that verification later) and will be discarded.
My current code raises a warning saying the last line should have type unit.
Is there a safe/elegant/right way to discard the next character read?
To ignore the return value of an expression, you simply need to use the ignore function, which serves exactly this purpose.
let b = input_char inc in
ignore(input_char inc);
To parse sufficiently complex files, you probably should consider OCamllex + Menhir, especially if you ever used lex/flex & yacc/bison.
While ignore will do what you want, it looks like using the wildcard pattern, _, might suit you better in this case, since you're otherwise assigning to "variables".
Consider
let b = input_char inc in
let _ = input_char inc in
let c = input_char inc in
...
vs
let b = input_char inc in
ignore (input_char inc);
let c = input_char inc in
...
The wildcard pattern, which you might have come across when using match, matches anything and then simply discards the value without binding it to a name. You can use any pattern with the let <pattern> in <expression> construct.
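A small self-contained demonstration of the three forms (next is a made-up stand-in for input_char, so the snippet runs without a file):

```ocaml
(* A stand-in for input_char: each call yields the next integer. *)
let next =
  let n = ref 0 in
  fun () -> incr n; !n

let () =
  let a = next () in        (* a = 1, bound *)
  ignore (next ());         (* 2, discarded with ignore *)
  let _ = next () in        (* 3, discarded with a wildcard *)
  let _ : int = next () in  (* 4, discarded; the annotation is still type-checked *)
  let b = next () in        (* b = 5 *)
  Printf.printf "a = %d, b = %d\n" a b
```

The annotated wildcard let _ : int = ... is a useful middle ground: the value is discarded, but the compiler still checks that you discarded what you meant to.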

OCaml - How can I convert the value of an int variable to a string?

I want to make the strings "t1", "t2", "t3", and so on.
So I did this:
let i = 0;...
(something) -> let z = "t" ^ string_of_int (i+1)
My intention is that every time the program enters (something), i increases, because I have to make a new string each time the program enters (something).
But it gave a syntax error. What should I do?
If you want to embed an integer as part of a string, use Printf.sprintf:
let make_string i =
Printf.sprintf "t%d" i
You'll have to take care of the part that generates increasing numbers with something else, like a for loop.
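If it helps, the increasing counter can also live in a closure; a minimal sketch (fresh_name is an invented helper):

```ocaml
let make_string i =
  Printf.sprintf "t%d" i

(* Each call yields the next name: "t1", "t2", "t3", ... *)
let fresh_name =
  let counter = ref 0 in
  fun () ->
    incr counter;
    make_string !counter
```

The hidden counter ref is initialized once, so successive calls to fresh_name () produce "t1", "t2", and so on.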

regex match string starting at offset

I'm learning Rust and trying to write a simple tokenizer right now. I want to go through a string running each regular expression against the current position in the string, create a token, then skip ahead and repeat until I've processed the whole string. I know I could put them into one larger regex and loop through captures, but I need to process them individually for domain reasons.
However, I see nothing in the regex crate that allows an offset, so I can't begin matching again at a specific point.
extern crate regex;
use regex::Regex;

fn main() {
    let input = "3 + foo/4";
    let ident_re = Regex::new("[a-zA-Z][a-zA-Z0-9]*").unwrap();
    let number_re = Regex::new("[1-9][0-9]*").unwrap();
    // `-` must come first (or last) in the class, or `+-*` is read as a range
    let ops_re = Regex::new(r"[-+*/]").unwrap();
    let ws_re = Regex::new(r"[ \t\n\r]*").unwrap();
    let mut i: usize = 0;
    while i < input.len() {
        // Here check each regex to see if a match starts at input[i];
        // if so, copy the match and increment i by the length of the match.
    }
}
The regexes I'm scanning for will actually vary at runtime too. Sometimes I may only be looking for a few of them, while at other times (at the top level) I might be looking for almost all of them.
The regex crate works on string slices. You can always take a sub-slice of another slice and then operate on that one. Instead of moving along indices, you can modify the variable that points to your slice to point to your subslice.
fn main() {
    let mut s = "hello";
    while !s.is_empty() {
        println!("{}", s);
        s = &s[1..];
    }
}
Note that the slice operation slices at byte-positions, not utf8-char-positions. This allows the slicing operation to be done in O(1) instead of O(n), but will also cause the program to panic if the indices you are slicing from and to happen to be in the middle of a multi-byte utf8 character.

Is trailing whitespace forbidden in s-expressions?

When I try sexplib, it tells me that
Sexp.of_string " a";; is correct, but
Sexp.of_string "a ";; is wrong.
Is trailing whitespace forbidden in sexp?
Why?
According to an informal grammar specification, whitespace should be ignored on both ends of an atom:
{2 Syntax Specification of S-expressions}
{9 Lexical conventions of S-expression}
Whitespace, which consists of space, newline, carriage return,
horizontal tab and form feed, is ignored unless within an
OCaml-string, where it is treated according to OCaml-conventions. The
semicolon introduces comments. Comments are ignored, and range up to
the next newline character. The left parenthesis opens a new list,
the right parenthesis closes it again. Lists can be empty. The
double quote denotes the beginning and end of a string following the
lexical conventions of OCaml (see OCaml-manual for details). All
characters other than double quotes, left- and right parentheses, and
whitespace are considered part of a contiguous string.
Indeed, you can read an atom with trailing whitespace from a file without any errors.
The error is thrown from the function Pre_sexp.of_string_bigstring in the case where the parser returns successfully but something was left in the buffer. So the main question is why something was left in the buffer. It seems that several parsers exist, and files and strings are parsed with different parsers.
I've examined the parse_atom rule defined at pre_sexp.ml:699 (all locations are for this commit) and discovered that when the trailing whitespace is hit, bump_found_atom is called. Then, if something is on the stack, the position indicator is incremented and parsing continues. Otherwise, parsing is finished, but the position is not incremented. This can be fixed with a simple patch:
diff --git a/lib/pre_sexp.ml b/lib/pre_sexp.ml
index 86603f3..9690c0f 100644
--- a/lib/pre_sexp.ml
+++ b/lib/pre_sexp.ml
@@ -502,7 +502,7 @@ let mk_cont_parser cont_parse = (); fun _state str ~max_pos ~pos ->
let pbuf_str = Buffer.contents pbuf in \
let atom = MK_ATOM in \
match GET_PSTACK with \
- | [] -> Done (atom, mk_parse_pos state pos) \
+ | [] -> Done (atom, mk_parse_pos state (pos + 1)) \
| rev_sexp_lst :: sexp_stack -> \
Buffer.clear pbuf; \
let pstack = (atom :: rev_sexp_lst) :: sexp_stack in \
After this patch, the following code produces an expected 'a', 'a', 'a' output:
let s1 = Sexp.of_string " a" in
let s2 = Sexp.of_string "a " in
let s3 = Sexp.of_string " a " in
printf "'%s', '%s', '%s'\n"
  (Sexp.to_string s1)
  (Sexp.to_string s2)
  (Sexp.to_string s3);