I need to write a parser to parse commands. Five example commands are:
"a=10"
"b=foo"
"c=10,10"
"clear d"
"c push_back 2"
In the case of the first example, set is the command, a is the object and 10 is the value.
What do you think the parser should return for each line above?
Here is my idea:
"a=10" -> SET (COMMAND_ENUM), INT (VALUE_TYPE), "a", ("10")
"b=foo" -> SET (COMMAND_ENUM), STRING (VALUE_TYPE), "b", ("foo")
Is this a good approach? What is the standard approach for this problem? Should I dispatch instead?
I have a function which checks the type associated with an object. For example, a above is of type INT and must be assigned an INT value; otherwise the parser should return or throw an error of some sort. I also have a convert function for converting values from strings to the desired type, which throws if the conversion is not possible. If the parser converts the values from strings to the required type, then it is probably a good idea to return them via a boost::variant.
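For concreteness, a minimal sketch of what such a parse result could look like; the names (Command, Value, ParseResult) are made up for illustration:

#include <string>
#include <vector>
#include <boost/variant.hpp>

enum class Command { Set, Clear, PushBack };            // the COMMAND_ENUM
using Value = boost::variant<int, std::string>;         // one converted value

struct ParseResult {
    Command command;                // e.g. Command::Set for "a=10"
    std::string object;             // e.g. "a"
    std::vector<Value> values;      // e.g. {10}, or two ints for "c=10,10"
};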
You need to come up with at least a semi-formal grammar for the command language you want to recognize, since you've left a whole lot of things really vaguely specified. For example, in b=foo you want b to be a variable name but foo to be a string literal; how do you distinguish them? Does a sequence of characters represent an identifier if it's on the left side of an assignment, but a literal if it's on the right side? Or does a single character represent an identifier, but multiple characters represent a literal? In c=10,10, does 10,10 represent a list or a vector? Writing a grammar will at least force you to think about such things, and it will also serve at least as a guide to how to write your parser (at most it will be something that can be automatically translated into your parser).
You're on the right track by thinking of how statements should be represented as Abstract Syntax Trees (ASTs), but you need to take a step backwards and look at what you want in terms of concrete syntax.
What's the most efficient way to parse natural language?
Let "strings" be a map<string, void (*func)(int,char**)> containing strings such as:
Set the alarm for *.
Call *.
Get me an * at * for *.
and their corresponding functions. Now suppose "input" is a string containing a sentence like:
Call David.
How would I implement a function such as parse that takes the "input", matches it against one of the strings in the map, and then calls the corresponding function, passing it argc and argv containing all the wildcard entries (the *s in "strings")? What's the most efficient way to implement such a function?
Not sure why this question got a downvote. It's well-posed and non-trivial.
There are plenty of academic approaches to parsing, which are mostly needed for degenerate grammars. "Natural language" is perhaps not a well-defined term, and natural languages do have some ambiguity, but constrained subsets like this one are not problematic.
In this specific example, we see that the different production rules (map entries) are not mutually ambiguous. In fact, the first token is sufficient for disambiguation. And since a std::map is sorted, we can do an efficient O(log N) search for that token.
Hence, we only need to derive the substitutions. Again, we'll ignore the degenerate cases. Nobody is going to bother with "Get me an at at at for at.", even though it parses unambiguously.
Instead, for substitutions you simply collect tokens until you get the expected next token. Get me an * at * for *. means that the first * gets all tokens up to at, the second * collects tokens up to for, and the final * gets all remaining tokens.
You see that no backtracking is needed. If parsing fails, there simply is no match.
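A minimal sketch of this scheme; all names are hypothetical, patterns are pre-split into tokens, trailing punctuation is ignored, and handlers take the collected wildcard strings rather than argc/argv:

#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

using Handler = void (*)(const std::vector<std::string>&);
using Pattern = std::vector<std::string>;   // e.g. {"Get", "me", "an", "*", "at", "*", "for", "*"}

std::vector<std::string> tokenize(const std::string& s) {
    std::istringstream in(s);
    std::vector<std::string> tokens;
    for (std::string t; in >> t; ) tokens.push_back(t);
    return tokens;
}

// Match input tokens against one pattern; each "*" greedily collects tokens
// up to the next literal pattern token (or to the end). No backtracking.
bool match(const Pattern& pattern, const std::vector<std::string>& input,
           std::vector<std::string>& args) {
    std::size_t i = 0;
    for (std::size_t p = 0; p < pattern.size(); ++p) {
        if (pattern[p] == "*") {
            std::string arg;
            const bool last = (p + 1 == pattern.size());
            while (i < input.size() && (last || input[i] != pattern[p + 1])) {
                if (!arg.empty()) arg += ' ';
                arg += input[i++];
            }
            args.push_back(arg);
        } else if (i < input.size() && input[i] == pattern[p]) {
            ++i;                                // literal token matches
        } else {
            return false;                       // mismatch: no match at all
        }
    }
    return i == input.size();
}

void parse(const std::map<std::string, std::pair<Pattern, Handler>>& table,
           const std::string& input) {
    std::vector<std::string> tokens = tokenize(input);
    if (tokens.empty()) return;
    auto it = table.find(tokens[0]);            // O(log N) lookup on the first token
    if (it != table.end()) {
        std::vector<std::string> args;
        if (match(it->second.first, tokens, args))
            it->second.second(args);            // invoke the handler
    }
}

With table["Call"] mapping to the pattern {"Call", "*"} and some callHandler, parse(table, "Call David") invokes callHandler with args containing "David".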
set var1 A
set var2 {A}
Is it possible to check whether a variable is a list in TCL? For var1 and var2, llength gives 1. I am thinking that these two variables are considered the same. They are both lists with one element. Am I right?
Those two things are considered to be entirely identical, and will produce identical bytecode (except for any byte offsets used for indicating where the contents of constants are located, which is not information normally exposed to scripts at all, so you can ignore it, plus the obvious differences due to variable names). Semantically, braces are a quoting mechanism and not an indicator of a list (or a script, or …).
You need to write your code to not assume that it can look things up by inspecting the type of a value. The type of 123 could be many different things, such as an integer, a list (of length 1), a Unicode string or a command name. Tcl's semantics are based on you not asking what the type of a value is, but rather just using commands and having them coerce the values to the right type as required. Tcl is different from many other languages in this regard.
Because of this different approach, it's not easy to answer questions about this in general: the answers get too long with all the different possible cases to be considered in general yet most of it will be irrelevant to what you're really seeking to do. Ask about something specific though, and we'll be able to tell you much more easily.
You can try string is list $var1, but that will accept both of these forms; it will only return false on something that can't syntactically be interpreted as a list, e.g. because there is an unmatched brace, as in "aa { bb".
I want to parse the argument list of a Lua function call in C++ using Qt (4.8), in order to avoid a dependency on the Lua interpreter. The comma-separated argument list can be assumed to consist only of string literals and numbers, and eventually the result should be available as a QStringList. The tricky part is to cope with commas that are part of string arguments, as well as with the fact that string arguments may use single or double quotes. Until I get to a solution (using regular expressions) myself, perhaps somebody has already dealt with this or a similar problem.
Example:
The argument list string
"Foo", "not 'bar'", 'a, b ,c', 42, 1e-8
should be transformed to a string list containing the items
Foo, not 'bar', a, b ,c, 42 and 1e-8
(omitting the quotes per item to avoid confusion)
I'm not familiar with all the possibilities of your arguments, but the examples you mentioned get correctly matched with this: (?<=")[\w',-]*?(?=")|(?<=^'|\s').*(?='(?:,|$))|[\w-]+, as seen here: https://regex101.com/r/rX7fX7/3
The idea is that you write the "difficult" situations as alternations, preferably to the left, and the less difficult ones to the right. This way, the engine will first check whether a problem situation is present before trying to match whole words.
The current regex doesn't work correctly if quotes/double quotes appear in the middle of the arguments, but your examples didn't have such situations.
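If regular expressions turn out to be too brittle, a hand-rolled scanner is another option. A minimal sketch using only Qt types; it assumes quotes are never escaped inside a literal, which matches the examples:

#include <QString>
#include <QStringList>

// Split a Lua-style argument list on commas that lie outside string
// literals; the quotes delimiting a literal are stripped, inner ones kept.
QStringList splitArguments(const QString& input) {
    QStringList result;
    QString current;
    QChar quote;                            // null while outside a literal
    for (int i = 0; i < input.size(); ++i) {
        const QChar c = input.at(i);
        if (!quote.isNull()) {              // inside a '...' or "..." literal
            if (c == quote)
                quote = QChar();            // closing quote: drop it
            else
                current += c;               // commas in here are kept
        } else if (c == QChar('\'') || c == QChar('"')) {
            quote = c;                      // opening quote: drop it
        } else if (c == QChar(',')) {
            result << current.trimmed();    // argument boundary
            current.clear();
        } else {
            current += c;
        }
    }
    result << current.trimmed();
    return result;
}

For the example above, this yields the five items Foo, not 'bar', a, b ,c, 42 and 1e-8.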
I am in a C++ class right now, so this question concerns itself primarily with that language, though I haven't been able to find any information for other languages either, and I suspect that whatever the answer is, it is largely cross-language.
In C++, unmarked numbers are assumed to be of integral type (4, for example, is an integer), and various bounding marks allow a number to be interpreted differently ('4', for example, is a character, and "4" a string). As far as I know, there is only one kind of unary mark: the decimal point (4. is a double).
I would like to create a new unary mark that designates a constant number in the code to be interpreted as a member of a created datatype. More fundamentally, I would like to know what the dot, the comma, and the single and double quote marks are (they aren't operators, keywords, or statements, so what are they?) and how the compiler deals with and interprets them.
More information, if you feel it is necessary:
I am trying to make a complex number header that I can include in any project to do complex math. I am aware of the standard <complex> library, but it is, IMHO, ugly and, if used extensively, slows down coding time. Also, I'm mostly trying to code this to improve my programming skills. My goal is to be able to declare a complex variable by doing something of the form cmplx num1 = 3 + 4i;, where 3 and 4 are arbitrary and i is a mark, similar to the decimal point, which indicates that 4 is imaginary.
I would like to create a new unary mark that designates a constant number in the code to be interpreted as a member of a created datatype.
You can use user-defined literals, which were introduced in C++11. Note that a literal operator must take one of a fixed set of parameter types, such as unsigned long long for integer literals or long double for floating-point literals, rather than an arbitrary numeric type. As an example, assuming you have a class type Type and you want to use the num_y syntax:

Type operator"" _y(unsigned long long i) {  // integer literals arrive as unsigned long long
    return Type(i);
}
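A minimal usage sketch, assuming Type is constructible from an integer:

Type t = 42_y;    // calls operator"" _y(42ULL)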
Things like 4, "4" and 4. are all single tokens, indivisible. There's no way you can add new tokens to the language. In C++11, it is possible to define user-defined literals, but they still consist of several tokens; for complex, a much more natural solution would be to support a constant i, to allow writing things like 4 + 3*i. (But you'd still need the C++11 support for constexpr for it to be a compile-time constant.)
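A minimal sketch of that constant-i approach; the Cmplx type and all names are hypothetical:

struct Cmplx {
    double re, im;
    constexpr Cmplx(double r = 0.0, double i = 0.0) : re(r), im(i) {}
};

constexpr Cmplx operator+(Cmplx a, Cmplx b) { return Cmplx(a.re + b.re, a.im + b.im); }
constexpr Cmplx operator*(double s, Cmplx c) { return Cmplx(s * c.re, s * c.im); }

constexpr Cmplx i(0.0, 1.0);        // the imaginary unit

constexpr Cmplx num1 = 4 + 3 * i;   // evaluated at compile time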
I have to use a parser and writer in C++. I am trying to implement the functions, but I do not understand what a token is. One of my functions/operations is to check whether there are more tokens to produce:
bool Parser::hasMoreTokens()
How exactly do I go about this? I am opening a text file in which all words are lowercase. How do I check whether it has more tokens? This is what I have:
bool Parser::hasMoreTokens() {
    while (source.peek() != NULL) {
        return true;
    }
    return false;
}
Tokens are the output of lexical analysis and the input to parsing. Typically they are things like
numbers
variable names
parentheses
arithmetic operators
statement terminators
That is, roughly, the biggest things that can be unambiguously identified by code that just looks at its input one character at a time.
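For example, the C++ statement x = (a + 42); breaks into the tokens x, =, (, a, +, 42, ), and ;.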
One note, which you should feel free to ignore if it confuses you: The boundary between lexical analysis and parsing is a little fuzzy. For instance:
Some programming languages have complex-number literals that look, say, like 2+3i or 3.2e8-17e6i. If you were parsing such a language, you could make the lexer gobble up a whole complex number and make it into a token; or you could have a simpler lexer and a more complicated parser, and make (say) 3.2e8, -, 17e6i be separate tokens; it would then be the parser's job (or even the code generator's) to notice that what it's got is really a single literal.
In some programming languages, the lexer may not be able to tell whether a given token is a variable name or a type name. (This happens in C, for instance.) But the grammar of the language may distinguish between the two, so that you'd like "variable foo" and "type name foo" to be different tokens. (This also happens in C.) In this case, it may be necessary for some information to be fed back from the parser to the lexer so that it can produce the right sort of token in each case.
So "what exactly is a token?" may not always have a perfectly well defined answer.
A token is whatever you want it to be. Traditionally (and for good reasons), language specifications broke the analysis into two parts: the first part broke the input stream into tokens, and the second parsed the tokens. (Theoretically, I think you can write any grammar in only a single level, without using tokens, or, what is the same thing, using individual characters as tokens. I wouldn't like to see the results of that for a language like C++, however.) But the definition of what a token is depends entirely on the language you are parsing: most languages, for example, treat white space as a separator (but not Fortran); most languages will predefine a set of punctuation/operators using punctuation characters, and not allow these characters in symbols (but not COBOL, where "abc-def" would be a single symbol). In some cases (including in the C++ preprocessor), what is a token depends on context, so you may need some feedback from the parser. (Hopefully not; that sort of thing is for very experienced programmers.)
One thing is probably sure (unless each character is a token): you'll have to read ahead in the stream. You typically can't tell whether there are more tokens by just looking at a single character. I've generally found it useful, in fact, for the tokenizer to read a whole token at a time, and keep it until the parser needs it. A function like hasMoreTokens would in fact scan a complete token.

(And while I'm at it, if source is an istream: istream::peek does not return a pointer, but an int.)
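A minimal sketch of that read-ahead idea, assuming (as in the question) that a token is just a whitespace-separated word:

#include <istream>
#include <string>

class Parser {
    std::istream& source;
    std::string pending;            // the token read ahead of the parser
    bool havePending;
public:
    explicit Parser(std::istream& in) : source(in), havePending(false) {}

    bool hasMoreTokens() {
        if (!havePending)
            havePending = static_cast<bool>(source >> pending);  // scans a whole token
        return havePending;
    }

    std::string nextToken() {
        hasMoreTokens();            // make sure one token is buffered
        havePending = false;
        return pending;
    }
};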
A token is the smallest unit of a programming language that has a meaning. A parenthesis (, a name like foo, and an integer 123 are all tokens. Reducing a text to a series of tokens is generally the first step of parsing it.
A token is usually akin to a word in spoken language. In C++, int, float, 5.523 and const are all tokens. A token is the minimal unit of text which constitutes a semantic element.
When you split a large unit (a long string) into a group of sub-units (smaller strings), each of those sub-units is referred to as a "token". If there are no more sub-units, then you are done parsing.
See also: How do I tokenize a string in C++?
A token is a terminal in a grammar: a sequence of one or more symbols that is defined by the sequence itself, i.e. it does not derive from any other production defined in the grammar.