Reverse algorithm for Reverse Polish Notation (RPN)

I know how RPN works, i.e. we have input:
a + b * c
and output is
bc*a+
Is it easy to create an algorithm which shows me what the last computation of this expression is? I mean that I want to know that the top-level operation is + applied to two subexpressions: "a" and "b*c".
My problem is not about numerical expressions but about logical sentences. For example, I have a logical sentence:
p&(q|r)
and I need to split it first into
p, (q|r) with the & operator
and then split the second part into
q, r with the | operator
Do I need to create some kind of parser? Is it possible to achieve this relatively easily?

The question is somewhat unclear, but I'll try to answer the 'is it easy to create an algorithm which shows me what the last computation is' part.
Think of RPN as a stack-based method of calculation: bc*a+ means push (the value of) b on the stack, push c on the stack, apply * to the two topmost elements and push the result on the stack. Then push a on the stack, apply + to the two topmost elements and push the result on the stack. So the last computation an RPN formula performs is its right-most operator.
Logical sentences are exactly the same thing: just replace * with & and + with |.
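If you also want the two subexpressions that this last operator combines, you can recover them by walking the formula once with a stack of strings instead of values. Here is a minimal sketch under the assumption of single-character operands and the operators used above; the names (Split, lastOperation) are just for illustration:

#include <cstddef>
#include <iostream>
#include <stack>
#include <stdexcept>
#include <string>

struct Split { std::string left, right; char op; };

bool isOperator(char c) { return c == '+' || c == '*' || c == '&' || c == '|'; }

// Walks the RPN string once, keeping a stack of complete subexpressions.
// The right-most operator is the last computation; its two operands are the
// two subexpressions sitting on top of the stack at that point.
Split lastOperation(const std::string& rpn) {
    std::stack<std::string> subexprs;
    for (std::size_t i = 0; i < rpn.size(); ++i) {
        char c = rpn[i];
        if (!isOperator(c)) {
            subexprs.push(std::string(1, c));          // operand: a subexpression of its own
        } else {
            std::string rhs = subexprs.top(); subexprs.pop();
            std::string lhs = subexprs.top(); subexprs.pop();
            if (i + 1 == rpn.size())                   // right-most operator: the last computation
                return { lhs, rhs, c };
            subexprs.push(lhs + rhs + c);              // otherwise fold it back into one subexpression
        }
    }
    throw std::runtime_error("not a valid RPN formula");
}

int main() {
    Split s = lastOperation("bc*a+");                  // splits into "bc*" and "a" joined by '+'
    std::cout << s.left << " " << s.op << " " << s.right << "\n";
    Split t = lastOperation("pqr|&");                  // RPN for p&(q|r): splits into "p" and "qr|" joined by '&'
    std::cout << t.left << " " << t.op << " " << t.right << "\n";
}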

Related

Regex for finding the index of Mathematical Operators

I am writing a program that takes a string containing a mathematical expression and converts it to postfix notation, which I have done. I am now trying to figure out how to evaluate this expression. I have left it in a Queue, so my idea is to find the index of the first operator, then find the two numbers that come before it, and send those to a new function which will evaluate them based on which operator was found (so one function for adding, one for subtracting, etc.). I'm having trouble figuring out how to grab the index, though.
I'm trying to use the Queue method indexOf and pass it a regex for those operators, using \\W.
y is a Queue; I've never used this kind of character class before.
var z = y.indexOf("[\\W]")
I would like it to return the index of the first operator; in the case I currently have, it is a "+".
Currently that doesn't find anything. I've also tried dropping the brackets. Example Queues are:
Queue(-1, 2, 3, *, +, 10, +)
Queue(1, 2, +)
which does mean I need a way to tell whether a - is an operator on its own or part of a number. These are all Strings inside the Queue.
You have to use indexWhere, which takes a predicate, rather than indexOf, which looks for an element equal to its argument:
y.indexWhere(_.matches("\\W"))

Dynamically Allocating an Array from a Polynomial Function String

So I have a polynomial addition problem like the one below:
(1*x+2*x^3-1*x+7)+(1+1*x^2-1*x+1*x^4)
I need to figure out how to extract the numbers for the coefficients and exponents and enter them into a dynamically allocated 2D array (from here I can sort them and add them together before outputting the answer).
I am pretty lost on how to do this because the polynomials can have their terms in any order of degree and include any number of terms. I can dynamically allocate the array after I extract all of the numbers. The parts I need help with are:
Extracting all of the numbers
Differentiating between them to see whether each is a coefficient or an exponent
Allowing this to happen for any number of terms
If anyone could answer this or at least point me in the right direction it would be appreciated.
Your problem looks like it's parsing and evaluation.
Step 1: You need to parse the string, assuming an infix expression, so that you can pull out the coefficients.
Step 2: Push those coefficients into a vector/deque etc. to perform the polynomial calculation.
Here are some good examples:
Evaluating arithmetic expressions from string in C++
What is the best way to evaluate mathematical expressions in C++?
To extract coefficients from a string you need to create a parser. You can use a dedicated library like Boost.Spirit, use a parser-generator tool like Flex, or write your own by hand, with or without regular expressions.
To store the coefficients you can use a std::vector<int>, using the index as the power of x, so for 1*x+2*x^3-1*x+7 your vector would hold:
{ 7, 0, 0, 2 } // 7*x^0 + 0*x^1 + 0*x^2 + 2*x^3 (the two x terms cancel)
Then you do not need to sort anything to add coefficients. To store all the polynomials you would use a std::vector<std::vector<int>> accordingly.
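As a rough illustration of that idea, here is a sketch that fills such a coefficient vector straight from the string. It assumes terms of exactly the shapes shown in the question (c*x^e, c*x, or a plain integer constant, written without spaces); parsePolynomial is just an illustrative name:

#include <iostream>
#include <regex>
#include <string>
#include <vector>

// poly[i] holds the coefficient of x^i. Repeated powers are accumulated,
// so 1*x and -1*x cancel out as expected.
std::vector<int> parsePolynomial(const std::string& text) {
    std::vector<int> poly;
    // group 1: signed coefficient, group 2: optional "*x" part, group 4: exponent
    std::regex term(R"(([+-]?\d+)(\*x(\^(\d+))?)?)");
    for (std::sregex_iterator it(text.begin(), text.end(), term), end; it != end; ++it) {
        int coeff = std::stoi((*it)[1].str());
        int power = 0;                                   // plain constant by default
        if ((*it)[2].matched)                            // "*x" was present in the term
            power = (*it)[4].matched ? std::stoi((*it)[4].str()) : 1;
        if (power >= static_cast<int>(poly.size()))
            poly.resize(power + 1, 0);
        poly[power] += coeff;
    }
    return poly;
}

int main() {
    std::vector<int> p = parsePolynomial("1*x+2*x^3-1*x+7");
    for (int c : p) std::cout << c << ' ';               // prints: 7 0 0 2
    std::cout << '\n';
}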

Lookahead with postfix language

Let's say I have a postfix language like
3 2 result + // equivalent to result = 3 + 2
result 1 result + // equivalent to ++result
how should I implement a lookahead for a recursive descent parser (I'm doing this in C++)?
I'm unsure how to design such a parsing algorithm, since I can't deduce the type of instruction from just the first token.
I would say you don't really need any lookahead at all, just the current token.
Push the current token onto a stack, and when you reach the end of the line (which could be its own token) look at the top of the stack to see what the operation is (and of course pop it from the stack). Then pop the number of operands needed by that operation. If there are entries left on the stack after this, or if there aren't enough operands, then you have an error.
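A minimal sketch of that per-line processing might look like the following. It assumes one statement per line and that the last token names the operation; the three-operand arity for + (two sources and a destination) is only my reading of the example statements, and the names are illustrative:

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Returns how many operands the given operation consumes, or -1 if unknown.
// Here '+' is assumed to take two sources and a destination, as in "3 2 result +".
int operandCount(const std::string& op) {
    if (op == "+") return 3;
    return -1;
}

void parseLine(const std::string& line) {
    std::vector<std::string> stack;
    std::istringstream in(line);
    std::string tok;
    while (in >> tok) stack.push_back(tok);   // push every token; no lookahead needed

    if (stack.empty()) return;                // blank line
    std::string op = stack.back();            // end of line: top of stack is the operation
    stack.pop_back();

    int needed = operandCount(op);
    if (needed < 0 || static_cast<int>(stack.size()) != needed) {
        std::cerr << "error in statement: " << line << "\n";
        return;
    }
    std::cout << op << " applied to:";        // stack now holds exactly the operands
    for (const std::string& operand : stack) std::cout << " " << operand;
    std::cout << "\n";
}

int main() {
    parseLine("3 2 result +");                // + applied to: 3 2 result
    parseLine("result 1 result +");           // + applied to: result 1 result
    parseLine("3 2 +");                       // error: not enough operands in this language
}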

Function to count an expression

Is there any function in C++ (included in some library or header file) that would count an expression given as a string?
Let's say we have a string which equals 2 + 3 * 8 - 5 (but it's taken from the user's keyboard, so we don't know while writing the code exactly what the expression will be), and we want this function to count it, but of course in the correct order of precedence (1. powers/roots, 2. multiplication/division, 3. addition/subtraction).
I've tried putting all the numbers into an array of ints and the operators into an array of chars (okay, actually vectors, because I don't know how many numbers and operators they are going to contain), but I'm not sure what to do next.
Note that I'm asking whether there's any function already written for this; if not, I'm just going to try it again myself.
By count, I'm taking that to mean "evaluate the expression".
You'll have to use a parser generator like boost::spirit to do this properly. If you try hand-writing this I guarantee pain and misery for all involved.
Try looking on here for calculator apps, there are several:
http://boost-spirit.com/repository/applications/show_contents.php
There are also some simple calculator-style grammars in the boost::spirit examples.
Quite surprisingly, others before me have not redirected you to the Shunting Yard algorithm, which is easy to implement and converts your string into this special thing called Reverse Polish Notation (RPN). Computing expressions in RPN is easy; the hard part is implementing Shunting Yard, but it has been done many times before and you can find many tutorials on the subject.
Quick overview of the algorithms:
RPN - RPN is a way to write expressions that eliminates the need for parentheses, which allows easier computation. Practically speaking, to compute one such expression you walk the string from left to right while keeping a stack of operands. Whenever you meet an operand, you push it onto the stack. Whenever you meet an operation token, you apply it to the last 2 operands (if the operation is binary, of course; only to the last one if it is unary) and push the result onto the stack. Rinse and repeat until the end of the string.
Shunting Yard really is much harder to summarize briefly, and if I tried, this answer would end up looking much like the Wikipedia article I linked above, so I'll save us both that trouble.
TL;DR: go read the links in the first sentence.
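To make both halves concrete, here is a deliberately minimal sketch under narrow assumptions: tokens are whitespace-separated, operands are non-negative integers, only the four binary operators + - * / are handled, and there are no parentheses. It illustrates the idea rather than being a complete calculator:

#include <cctype>
#include <iostream>
#include <sstream>
#include <stack>
#include <string>
#include <vector>

int precedence(char op) { return (op == '*' || op == '/') ? 2 : 1; }

// Shunting Yard, stripped to the bone: numbers go straight to the output,
// operators wait on a stack until an operator of lower precedence arrives.
std::vector<std::string> toRpn(const std::string& expr) {
    std::vector<std::string> output;
    std::stack<char> ops;
    std::istringstream in(expr);
    std::string tok;
    while (in >> tok) {
        if (std::isdigit(static_cast<unsigned char>(tok[0]))) {
            output.push_back(tok);
        } else {
            char op = tok[0];
            while (!ops.empty() && precedence(ops.top()) >= precedence(op)) {
                output.push_back(std::string(1, ops.top()));
                ops.pop();
            }
            ops.push(op);
        }
    }
    while (!ops.empty()) { output.push_back(std::string(1, ops.top())); ops.pop(); }
    return output;
}

// RPN evaluation exactly as described above: operands are pushed, an operator
// pops its two operands and pushes the result.
int evalRpn(const std::vector<std::string>& rpn) {
    std::stack<int> st;
    for (const std::string& tok : rpn) {
        if (std::isdigit(static_cast<unsigned char>(tok[0]))) {
            st.push(std::stoi(tok));
        } else {
            int b = st.top(); st.pop();
            int a = st.top(); st.pop();
            switch (tok[0]) {
                case '+': st.push(a + b); break;
                case '-': st.push(a - b); break;
                case '*': st.push(a * b); break;
                case '/': st.push(a / b); break;
            }
        }
    }
    return st.top();
}

int main() {
    std::cout << evalRpn(toRpn("2 + 3 * 8 - 5")) << "\n";   // 2 3 8 * + 5 - evaluates to 21
}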
You will have to do this yourself, as there is no standard library facility that evaluates infix expressions.
If you do, please include the code for what you are trying and we can help you.

Simple spell checking algorithm

I've been tasked with creating a simple spell checker for an assignment, but have been given next to no guidance, so I was wondering if anyone could help me out. I'm not after someone to do the assignment for me, but any direction or help with the algorithm would be awesome! If what I'm asking is not within the guidelines of the site then I'm sorry and I'll look elsewhere. :)
The project loads correctly spelled lower case words and then needs to make spelling suggestions based on two criteria:
One letter difference (either added or removed to make the word match a word in the dictionary). For example 'stack' would be a suggestion for 'staick' and 'cool' would be a suggestion for 'coo'.
One letter substitution. So for example, 'bad' would be a suggestion for 'bod'.
So, just to make sure I've explained properly.. I might load in the words [hello, goodbye, fantastic, good, god] and then the suggestions for the (incorrectly spelled) word 'godd' would be [good, god].
Speed is my main consideration here, so while I think I know a way to get this to work, I'm really not too sure how efficient it'll be. The way I'm thinking of doing it is to create a map<string, vector<string>> and then, for each correctly spelled word that's loaded in, add the correctly spelled word as a key in the map and populate the vector with all the possible 'wrong' permutations of that word.
Then, when I want to look up a word, I'll look through every vector in the map to see if that word is a permutation of one of the correctly spelled words. If it is, I'll add the key as a spelling suggestion.
This seems like it would take up HEAPS of memory though, because there would surely be thousands of permutations for each word? It also seems like it'd be very, very slow if my initial dictionary of correctly spelled words was large?
I was thinking that maybe I could cut down the time a bit by only looking in the keys that are similar to the word I'm looking at. But then again, if they're similar in some way then the key will probably be a suggestion anyway, meaning I don't need all those permutations!
So yeah, I'm a bit stumped about which direction I should look in. I'd really appreciate any help as I really am not sure how to estimate the speed of the different ways of doing things (we haven't been taught this at all in class).
The simplest way to solve the problem is indeed a precomputed map [bad word] -> [suggestions].
The problem is that while the removal of a letter creates few "bad words", for the addition or substitution you have many candidates.
So I would suggest another solution ;)
Note: the edit distance you are describing is called the Levenshtein distance.
The solution is described in incremental steps; the search speed should improve with each idea, and I have tried to organize them with the simpler ideas (in terms of implementation) first. Feel free to stop whenever you're comfortable with the results.
0. Preliminary
Implement the Levenshtein distance algorithm
Store the dictionary in a sorted sequence (std::set for example, though a sorted std::deque or std::vector would perform better)
Key points:
The Levenshtein distance computation uses a matrix; at each step the next row is computed solely from the previous row
The minimum distance in a row is always greater than (or equal to) the minimum in the previous row
The latter property allows a short-circuit implementation: if you want to limit yourself to 2 errors (the threshold), then whenever the minimum of the current row is greater than 2 you can abandon the computation. A simple strategy is to return threshold + 1 as the distance; a sketch is given below.
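A rough sketch of that short-circuited, row-by-row computation (the function name and the threshold convention are only illustrative):

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Row-by-row Levenshtein distance. As soon as the minimum of the current row
// exceeds the threshold, the distance cannot get smaller in later rows, so we
// bail out and report threshold + 1.
int levenshtein(const std::string& a, const std::string& b, int threshold) {
    std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);

    for (std::size_t i = 1; i <= a.size(); ++i) {
        curr[0] = static_cast<int>(i);
        int rowMin = curr[0];
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            curr[j] = std::min({ prev[j] + 1,            // deletion
                                 curr[j - 1] + 1,        // insertion
                                 prev[j - 1] + cost });  // substitution (or match)
            rowMin = std::min(rowMin, curr[j]);
        }
        if (rowMin > threshold) return threshold + 1;    // short-circuit
        std::swap(prev, curr);
    }
    return prev[b.size()];
}

For instance, levenshtein("staick", "stack", 2) evaluates to 1, while a word more than 2 edits away comes back as 3.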
1. First Tentative
Let's begin simple.
We'll implement a linear scan: for each word we compute the (short-circuited) distance and we list the words which achieve the smallest distance so far.
It works very well on smallish dictionaries.
2. Improving the data structure
The Levenshtein distance is at least equal to the difference in length.
By using the pair (length, word) as the key instead of just the word, you can restrict your search to the range of lengths [length - edit, length + edit] and greatly reduce the search space.
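For instance, assuming the dictionary is kept in a std::set keyed by (length, word) as suggested, the candidate range can be located directly with lower_bound (a rough sketch; the function name and the maxEdit parameter are illustrative):

#include <cstddef>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Collects every dictionary word whose length is within maxEdit of the query's
// length; only these words can possibly be within maxEdit edits of it.
std::vector<std::string> candidatesByLength(
        const std::set<std::pair<std::size_t, std::string>>& dict,
        const std::string& query, std::size_t maxEdit) {
    std::size_t lo = query.size() > maxEdit ? query.size() - maxEdit : 0;
    std::size_t hi = query.size() + maxEdit;

    auto first = dict.lower_bound({ lo, std::string() });
    auto last  = dict.lower_bound({ hi + 1, std::string() });   // first entry longer than hi

    std::vector<std::string> out;
    for (auto it = first; it != last; ++it) out.push_back(it->second);
    return out;
}

Each word in that range is then run through the short-circuited distance computation from step 0.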
3. Prefixes and pruning
To improve on this, we can remark that when we build the distance matrix row by row, one word is entirely scanned (the word we look for) but the other (the referent) is not: we only use one letter of it for each row.
This very important property means that for two referents which share the same initial sequence (prefix), the first rows of the matrix will be identical.
Remember that I asked you to store the dictionary sorted? It means that words which share the same prefix are adjacent.
Suppose that you are checking your word against cartoon and at car you realize it does not work (the distance is already too large); then any word beginning with car won't work either, and you can skip words as long as they begin with car.
The skip itself can be done either linearly or with a search (find the first word that has a higher prefix than car):
linear works best if the prefix is long (few words to skip)
binary search works best for short prefix (many words to skip)
How long is "long" depends on your dictionary and you'll have to measure. I would go with the binary search to begin with.
Note: the length partitioning works against the prefix partitioning, but it prunes much more of the search space
4. Prefixes and re-use
Now, we'll also try to re-use the computation as much as possible (and not just the "it does not work" result)
Suppose that you have two words:
cartoon
carwash
You first compute the matrix, row by row, for cartoon. Then when reading carwash you need to determine the length of the common prefix (here car) and you can keep the first 4 rows of the matrix (corresponding to void, c, a, r).
Therefore, when you begin computing carwash, you in fact begin iterating at w.
To do this, simply use an array allocated once at the beginning of your search, and make it large enough to accommodate the largest referent (you should know the largest word length in your dictionary).
5. Using a "better" data structure
To have an easier time working with prefixes, you could use a Trie or a Patricia tree to store the dictionary. However it's not an STL data structure, and you would need to augment it to store in each subtree the range of word lengths stored there, so you'll have to make your own implementation. It's not as easy as it seems, because there are memory explosion issues which can kill locality.
This is a last resort option. It's costly to implement.
You should have a look at Peter Norvig's explanation of how to write a spelling corrector.
How to write a spelling corrector
Everything is well explained in that article; as an example, the Python code for the spell checker looks like this:
import re, collections

def words(text): return re.findall('[a-z]+', text.lower())

def train(features):
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

NWORDS = train(words(file('big.txt').read()))

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def known_edits2(word):
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

def known(words): return set(w for w in words if w in NWORDS)

def correct(word):
    candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
    return max(candidates, key=NWORDS.get)
Hope you can find what you need on Peter Norvig's website.
For a spell checker, many data structures would be useful, for example a BK-tree. Check Damn Cool Algorithms, Part 1: BK-Trees. I have done an implementation of the same here.
My earlier code link may be misleading; this one is correct for the spelling corrector.
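To give an idea of the structure that article describes, here is a rough BK-tree sketch built on a plain (non-short-circuited) Levenshtein distance; all of the names are illustrative and it is not tuned for performance:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Plain Levenshtein distance, used as the BK-tree metric.
int editDistance(const std::string& a, const std::string& b) {
    std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
    for (std::size_t i = 0; i <= a.size(); ++i) d[i][0] = static_cast<int>(i);
    for (std::size_t j = 0; j <= b.size(); ++j) d[0][j] = static_cast<int>(j);
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({ d[i - 1][j] + 1, d[i][j - 1] + 1,
                                 d[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1) });
    return d[a.size()][b.size()];
}

// Each node keeps its children keyed by their distance to the node's word.
struct BkNode {
    std::string word;
    std::map<int, std::unique_ptr<BkNode>> children;
};

void insert(std::unique_ptr<BkNode>& node, const std::string& word) {
    if (!node) { node = std::make_unique<BkNode>(); node->word = word; return; }
    insert(node->children[editDistance(word, node->word)], word);
}

// Collects every stored word within `tolerance` edits of `word`; the triangle
// inequality lets us skip whole subtrees outside [d - tolerance, d + tolerance].
void query(const BkNode* node, const std::string& word, int tolerance,
           std::vector<std::string>& out) {
    if (!node) return;
    int d = editDistance(word, node->word);
    if (d <= tolerance) out.push_back(node->word);
    for (const auto& child : node->children)
        if (child.first >= d - tolerance && child.first <= d + tolerance)
            query(child.second.get(), word, tolerance, out);
}

int main() {
    std::unique_ptr<BkNode> root;
    for (const char* w : { "hello", "goodbye", "fantastic", "good", "god" })
        insert(root, w);
    std::vector<std::string> suggestions;
    query(root.get(), "godd", 1, suggestions);
    for (const std::string& s : suggestions) std::cout << s << "\n";   // good and god, in some order
}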
Off the top of my head, you could split up your suggestions based on length and build a tree structure where the children are longer variations of the shorter parent.
It should be quite fast, but I'm not sure about the code itself; I'm not very well-versed in C++.