Error: Camlp4: Uncaught exception: Not_found - ocaml

I am working on an Ocsigen example (http://ocsigen.org/tuto/manual/macaque).
I get an error when trying to compile the program, as follows.
File "testDB.ml", line 15, characters 14-81 (end at line 18, character 4):
While finding quotation "table" in a position of "expr":
Available quotation expanders are:
svglist (in a position of expr)
svg (in a position of expr)
html5list (in a position of expr)
html5 (in a position of expr)
xhtmllist (in a position of expr)
xhtml (in a position of expr)
Camlp4: Uncaught exception: Not_found
My code is:
module Lwt_thread = struct
  include Lwt
  include Lwt_chan
end
module Lwt_PGOCaml = PGOCaml_generic.Make(Lwt_thread)
module Lwt_Query = Query.Make_with_Db(Lwt_thread)(Lwt_PGOCaml)

let get_db : unit -> unit Lwt_PGOCaml.t Lwt.t =
  let db_handler = ref None in
  fun () ->
    match !db_handler with
    | Some h -> Lwt.return h
    | None -> Lwt_PGOCaml.connect ~database:"testbase" ()

let table = <:table< users (
  login text NOT NULL,
  password text NOT NULL
) >>
..........
I used eliom-distillery to generate the basic files.
I used "make" to compile the program.
I've tried many different things and done a google search but I can't figure out the problem. Any hints are greatly appreciated.

Generally speaking, the error message indicates that CamlP4 does not know the quotation you used, here table, which appears in your code as <:table< ... >>. Quotations are added by CamlP4 extension modules pa_xxx.cmo (or pa_xxx.cma). Unless you made a typo in the quotation name, it means you did not load the extension that provides it to CamlP4.
According to http://ocsigen.org/tuto/manual/macaque, Macaque (or one of its underlying libraries; I am not sure, since I have never used it) provides the quotation table, so you have to instruct CamlP4 to load the corresponding extension. I believe the vanilla eliom-distillery setup covers only basic Eliom programming and does not include the extensions needed for Macaque.
Actually, the document http://ocsigen.org/tuto/manual/macaque points this out:
We need to reference macaque in the Makefile :
SERVER_PACKAGE := macaque.syntax
This should be the CamlP4 syntax extension name required for table.
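If you ever need to compile such a file outside the Eliom build system, the same idea applies at the findlib level: the source must be preprocessed with the Macaque syntax extension. A minimal sketch, assuming the file is testDB.ml as in the question and that the macaque.syntax package declares camlp4 syntax support (the other packages the file uses, such as lwt and pgocaml, would also have to be listed):
ocamlfind ocamlc -syntax camlp4o -package macaque.syntax -c testDB.ml
Here -syntax camlp4o tells findlib to run CamlP4 on the source, and -package macaque.syntax makes it load the pa_* module that defines the table quotation.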

Related

Opening modules from the OCaml compiler without building a custom toplevel

I would like to have a few lines of code at the start of my OCaml input file that make the toplevel always remember the last expression typed, under the name it. I.e., I want to have:
# 3 + 4;;
val it : int = 7
# it;;
val it : int = 7
# let foo = 42;;
val foo : int = 42
# it + 130;;
val it : int = 137
#
But I don't want to build a custom toplevel or use camlp5 or anything fancy like that.
What I currently do (in OCaml version 4.02.3, I don't know why I have that version; but I hope the exact version doesn't matter?) is the following:
#directory "+compiler-libs";;
#load "/opt/src/ocaml-4.02.3/utils/warnings.cmo";;
#load "/opt/src/ocaml-4.02.3/parsing/location.cmo";;
let convert_phrase x =
  match x with
  | Parsetree.Ptop_def
      [{Parsetree.pstr_desc = Parsetree.Pstr_eval (e, a)}] ->
    Parsetree.Ptop_def
      ([{Parsetree.pstr_desc =
           Parsetree.Pstr_value (Asttypes.Nonrecursive,
             [{Parsetree.pvb_pat =
                 {Parsetree.ppat_desc =
                    Parsetree.Ppat_var (Location.mknoloc "it");
                  Parsetree.ppat_loc = Location.none;
                  Parsetree.ppat_attributes = []};
               Parsetree.pvb_expr = e;
               Parsetree.pvb_attributes = a;
               Parsetree.pvb_loc = Location.none}]);
         Parsetree.pstr_loc = Location.none}])
  | x -> x;;

Toploop.parse_toplevel_phrase :=
  let parse_toplevel_phrase = !Toploop.parse_toplevel_phrase in
  fun x -> convert_phrase (parse_toplevel_phrase x);;
And that kind of works.
My question: if I just do the #directory "+compiler-libs";; thing, I can access the Toploop and Parsetree modules, but I cannot access the Location module! What is the reason for that? I find having to load .cmo files from my source directories very unattractive.
So is there a way to do what I want without having to have a source tree available?
Or, in other words: why the difference between Toploop and Location, in this respect?
In short, what you should load is not individual .cmo files but
#load "ocamlcommon.cma";;
which is in the +compiler-libs directory.
The differences between Parsetree, Toploop and Location are subtle...
In OCaml, data types and their constructors become accessible simply by adding their directory to the load path (with #directory "<dir>"); no object code loading (with #load) is required for them.
Parsetree is a so-called "mli-only module": it contains only data type definitions and defines no values. Therefore everything in Parsetree becomes accessible just by putting its directory into the load path.
Location defines both types and values. Its data types and constructors are accessible without loading the object file, but its values do require it. Here, location.cmo is loaded when you load ocamlcommon.cma, which archives it.
Toploop is the tricky one: you can access its values even without loading toploop.cmo, since Toploop is already linked into the OCaml toplevel.
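Putting it together, the preamble of the script from the question can then be reduced to a load-path directive and a single archive load (a sketch, assuming a standard installation; convert_phrase and the Toploop.parse_toplevel_phrase override stay exactly as written above):
#directory "+compiler-libs";;
#load "ocamlcommon.cma";;  (* brings in Location, Warnings and the other common modules *)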

How to import Shakespearean Templates in Yesod?

I was using QuasiQuotations in Yesod, and everything worked fine. BUT my file became very large and not nice to look at. Also, my text editor does not highlight this syntax correctly. That is why I split my files like so:
getHomeR :: Handler Html
getHomeR = do
    webSockets chatApp
    defaultLayout $ do
        $(luciusFile "templates/chat.lucius")
        $(juliusFile "templates/chat.julius")
        $(hamletFile "templates/chat.hamlet")
If this is wrong, please do tell. Doing runghc myFile.hs throws many errors like this:
chat_new.hs:115:9:
Couldn't match expected type ‘t0 -> Css’
with actual type ‘WidgetT App IO a0’
The lambda expression ‘\ _render_ajFK
-> (shakespeare-2.0.7:Text.Css.CssNoWhitespace . (foldr ($) ...))
...’
has one argument,
but its type ‘WidgetT App IO a0’ has none
In a stmt of a 'do' block:
\ _render_ajFK
...
And this.
chat_new.hs:116:9:
Couldn't match type ‘(url0 -> [(Text, Text)] -> Text)
-> Javascript’
with ‘WidgetT App IO a1’
Expected type: WidgetT App IO a1
Actual type: JavascriptUrl url0
Probable cause: ‘asJavascriptUrl’ is applied to too few arguments
...
And also one for the HTML (Hamlet).
Thus, one per template.
It seems that hamletFile and friends treat templates as self-contained, while yours reference things from one another. You can play with the order of the *File calls, or use widgetFile* from the Yesod.Default.Util module:
$(widgetFileNoReload def "chat")
The Reload variant is useful for development: it makes yesod devel watch for file changes and reload them.

R: Countrycode package not supporting regex as the origin

I have a list of countries that I need to convert into a standardized format (iso3c). Some have long names, others have 2- or 3-digit codes, and others do not give the whole country name, e.g. "Africa" instead of "South Africa". I've done some research and decided to use the countrycode package in R. However, when I tried to use "regex" as the origin, R doesn't seem to recognize it. I'm getting the error below:
> countrycode(data,"regex","iso3c", warn = TRUE)
Error in countrycode(data, "regex", "iso3c", :
Origin code not supported
Is there any other option I should use?
Thanks!
You can view the README for the countrycode package at https://github.com/vincentarelbundock/countrycode, or you can pull up the help file in R by entering ?countrycode::countrycode into your R console.
"regex" is not a valid 'origin' value (2nd argument in the countrycode() function). You must use one of "cowc", "cown", "eurostat", "fao", "fips105", "imf", "ioc", "iso2c", "iso3c", "iso3n", "p4_ccode", "p4_scode", "un", "wb", "wb_api2c", "wb_api3c", "wvs", "country.name", "country.name.de" (using latest version 0.19).
If you use either of the following 'origin' values, regex matching will be performed automatically: "country.name" or "country.name.de"
If you're using a custom dictionary with the new (as of version 0.19) custom_dict argument, you must set the origin_regex argument to TRUE for regex matching to occur.
In your example, this should do what you want:
countrycode(data, origin = "country.name", destination = "iso3c", warn = TRUE)

verbose error with ocamlyacc

In bison, it is sufficient to add
%error-verbose
to the file to make the parser errors more verbose. Is there any way to gain similar functionality with ocamlyacc?
Here is the answer to a similar question, but I could not make anything out of it. This is how I call the lexer and parser functions:
let rec foo () =
  try
    let line = input_line stdin in
    (try
       let _ = Parser.latexstatement lexer_token_safe (Lexing.from_string line) in
       print_string ("SUCCESS\n")
     with
     | LexerException s -> print_string ("$L" ^ line ^ "\n")
     | Parsing.Parse_error -> print_string ("$P" ^ line ^ "\n")
     | _ -> print_string ("$S " ^ line ^ "\n"));
    flush stdout;
    foo ()
  with
    End_of_file -> ()
;;

foo ();;
I don't think there's an option in ocamlyacc to do what you want automatically, so let me try to give a thorough description of what can be done to handle syntax errors and produce more useful messages. It may not be exactly what you asked for.
Errors actually fall into two categories, lexical errors and parse errors, depending on the stage of the parsing process in which they happen:
in .mll files, a Failure exception is raised on unexpected input
in .mly files, a Parsing.Parse_error exception is generated
So you have several solutions:
let the lexer and parser code raise their exceptions, and catch them in the code calling them
implement the specific error cases in either of them, with:
a catch-all rule in the lexer (or more specific patterns if necessary)
the special error terminal in the parser rules to catch errors at specific places
In any case, you will have to make functions to get information about the position of the error in the source.
Lexing and Parsing both use a location record, defined in Lexing, with the following fields:
pos_fname : the name of the file currently processed
pos_lnum : the line number in the file
pos_bol : the character number from the start of the file at the beginning of the line
pos_cnum : the character number at the current position
The lexbuf value used by the lexer carries two such records to track the current token being lexed (lexeme_start_p and lexeme_end_p in Lexing give access to them). The parser carries four, tracking the current symbol (or non-terminal) about to be synthesized as well as the current rule items; these can be retrieved with Parsing functions (rhs_start_pos and rhs_end_pos, as well as symbol_start_pos and symbol_end_pos).
Here are a few functions to generate more detailed exceptions:
open Lexing    (* for the position record fields and the lexeme* functions *)
open Parsing   (* for rhs_start_pos / rhs_end_pos *)

exception LexErr of string
exception ParseErr of string

let error msg start finish =
  Printf.sprintf "(line %d: char %d..%d): %s" start.pos_lnum
    (start.pos_cnum - start.pos_bol) (finish.pos_cnum - finish.pos_bol) msg

let lex_error lexbuf =
  raise (LexErr (error (lexeme lexbuf) (lexeme_start_p lexbuf) (lexeme_end_p lexbuf)))

let parse_error msg nterm =
  raise (ParseErr (error msg (rhs_start_pos nterm) (rhs_end_pos nterm)))
and a basic use case:
parser:
%token ERR
/* ... */
wsorword:
    WS    { $1 }
  | WORD  { $1 }
  | error { parse_error "wsorword" 1; ERR "" } /* a token needed for typechecking */
;
lexer:
rule lexer = parse
  (* ... *)
  (* catch-all pattern *)
  | _ { lex_error lexbuf }
All that is left to do is to modify your top-level function to catch these exceptions and process them, for example as sketched below.
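A sketch only, adapting the driver loop from the question (it assumes the LexErr and ParseErr exceptions and helpers defined above, plus the Parser.latexstatement and lexer_token_safe names from the question):
let rec foo () =
  try
    let line = input_line stdin in
    (try
       ignore (Parser.latexstatement lexer_token_safe (Lexing.from_string line));
       print_string "SUCCESS\n"
     with
     | LexErr msg -> print_string ("Lexing error " ^ msg ^ " in: " ^ line ^ "\n")
     | ParseErr msg -> print_string ("Parsing error " ^ msg ^ " in: " ^ line ^ "\n")
     | _ -> print_string ("Other error in: " ^ line ^ "\n"));
    flush stdout;
    foo ()
  with End_of_file -> ()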
Finally, for debugging purposes, Parsing provides a set_trace function which enables display of the messages from the state machine used by the parsing engine: it traces all the internal state changes of the automaton.
In the Parsing module (you can check it here) there is the function Parsing.set_trace that will do just that. You can enable it with Parsing.set_trace true. Also, you can run ocamlyacc with the -v argument and it will output a .output file listing all states and transitions.

Feed ocamlyacc parser from explicit token list?

Is it possible to feed an OCamlYacc-generated parser an explicit token list for analysis?
I'd like to use OCamlLex to explicitly generate a token list which I then analyze with a Yacc-generated parser later. However, the standard use case generates a parser that implicitly calls a lexer for the next token, so tokens are computed during the yacc analysis rather than before. Conceptually, a parser should only work on tokens, but a Yacc-generated parser provides an interface that relies on a lexer, which in my case I don't need.
As already mentioned by Jeffrey, Menhir specifically offers, as part of its runtime library, a module to feed its parsers with any kind of token stream (it just asks for a unit -> token function): MenhirLib.Convert.
(You could even use this code without using Menhir, with ocamlyacc instead. In practice the conversion is not terribly complicated so you could even re-implement it yourself.)
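As a rough sketch of what that looks like with a Menhir-generated parser (the Parser.main entry point, the Parser.EOF constructor and the use of dummy positions are assumptions about your grammar, not fixed names):
(* Feed an explicit token list to a Menhir-generated parser via its
   "revised" interface, which consumes a unit -> token * pos * pos supplier. *)
let parse_token_list (tokens : Parser.token list) =
  let remaining = ref tokens in
  let supplier () =
    let tok =
      match !remaining with
      | [] -> Parser.EOF                      (* assumed end-of-stream token *)
      | t :: rest -> remaining := rest; t
    in
    (tok, Lexing.dummy_pos, Lexing.dummy_pos) (* positions are unknown here *)
  in
  MenhirLib.Convert.Simplified.traditional2revised Parser.main supplier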
If you already have a list of tokens, you can just go the ugly way and ignore the lexing buffer altogether. After all, the parse-from-lexbuf function that your parser expects is an impure function:
let my_tokens = ref [ (* WHATEVER *) ]

let token lexbuf =
  match !my_tokens with
  | [] -> EOF
  | h :: t -> my_tokens := t; h

let ast = Parser.parse token (Lexing.from_string "")
On the other hand, it looks from your comments that you actually have a function of type Lexing.lexbuf -> token list that you're trying to fit into the Lexing.lexbuf -> token signature of your parser. If that is the case, you can easily use a queue to write a converter between the two types:
let deflate token =
  let q = Queue.create () in
  fun lexbuf ->
    if not (Queue.is_empty q) then Queue.pop q
    else match token lexbuf with
      | [] -> EOF
      | [tok] -> tok
      | hd :: t -> List.iter (fun tok -> Queue.add tok q) t; hd

let ast = Parser.parse (deflate my_lexer) lexbuf
The OCamlYacc interface does look pretty complicated; it seems to require a Lexing.lexbuf. Maybe you could consider using Lexing.from_string to feed a fixed string rather than a fixed sequence of tokens. You could also look at Menhir. I haven't used it, but it gets excellent reviews here whenever anybody mentions OCaml parser generators. It might have a more flexible lexing interface.