Regular expression for context free grammar - regex

I have this context-free grammar:
S -> aSb
S -> aSa
S -> bSa
S -> bSb
S -> epsilon
I want to show that this grammar describes a regular language (namely, that it can be represented as a regular expression), but I'm not sure how to do that and be confident I'm not missing any pattern.
I did not see this exact question asked before, which is why I don't think it is a duplicate. I'd like an explanation on this relatively simple example; it was hard for me to follow more complicated examples.

You must build a DFA or a regular expression. The DFA has 2 states in this case: q1 (even) moves to q2 (odd) on a or b, and q2 moves back to q1 on a or b. The start state and the only accepting state is q1. Since every production adds one symbol at each end of the string, the grammar generates exactly the even-length strings over {a, b}, i.e. the regular expression ((a|b)(a|b))*.
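If it helps, here is a minimal sketch (the function name is mine) that simulates this two-state DFA and checks it against the even-length characterization on all short strings:

from itertools import product

def dfa_accepts(s: str) -> bool:
    state = "q1"                       # q1 = even number of symbols read so far
    for c in s:
        if c not in "ab":
            return False
        state = "q2" if state == "q1" else "q1"
    return state == "q1"               # q1 is the start state and the only accepting state

for n in range(6):
    for w in map("".join, product("ab", repeat=n)):
        assert dfa_accepts(w) == (len(w) % 2 == 0)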

Related

Converting a regular expression to a DFA

I've been trying to convert the regular expression (0|)(0|1)*0 to a non-deterministic finite automaton (NFA) first using Thompson's construction, and the resulting NFA looks correct.
I am then using the subset construction to create the DFA from the NFA.
But this does not look correct to me, as for example 0 followed by 0 is not valid according to the DFA I have constructed. I was wondering how I should be modelling the epsilon in the original regular expression, as I have simply treated it as a normal character.
A single 0 already matches - thus state B in your diagram for a DFA has to be an accepting state already (just marking it as accepting would not fix the rest!). I can't reconstruct your ideas, but you should end up with something like
A --0--> (B)
A --1--> C
B --0--> (B)
B --1--> C
C --0--> (B)
C --1--> C
If you start to think from the other side: the regex (0|)(0|1)*0 could be simplified by removing the first part (0|). This can be done because (0|1)* matches everything that (0|) will match (and more).
You can see it in your diagram for the nfa, too.
S0->S1->S2->S5 === S7->S7->S8->S11
S0->S3->S4->S5 === epsilon
Maybe this could help to get your thoughts going in the right direction.
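To double-check the A/B/C transition table above, you can simulate it and compare it with Python's re on every short input (a sketch; the encoding as a dictionary is mine):

import re
from itertools import product

# The transition table from the answer; (B) is the accepting state.
DELTA = {("A", "0"): "B", ("A", "1"): "C",
         ("B", "0"): "B", ("B", "1"): "C",
         ("C", "0"): "B", ("C", "1"): "C"}

def dfa_accepts(s: str) -> bool:
    state = "A"
    for c in s:
        state = DELTA[(state, c)]
    return state == "B"

pattern = re.compile(r"(0|)(0|1)*0")   # the regex from the question
for n in range(8):
    for w in map("".join, product("01", repeat=n)):
        assert dfa_accepts(w) == (pattern.fullmatch(w) is not None)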

What is the formal grammar equivalent to this regex?

The regular expression is (a+)+. Using a backtracking NFA-based engine, this would allow ReDoS attacks for longer strings. What would be the equivalent grammar for this regular expression?
Now I was trying to determine the grammar in multiple steps.
a+ would translate to
S -> a
S -> aS
(a+)+ would translate to
G -> S
G -> SG
I was not sure how to simplify further, or whether it would be a CFG or a CSG. Any suggestions would be of great help.
(a+)+ is equivalent to a+. A possible grammar is
S → A
A → a
A → aA
The grammar is regular (such a grammar must exist, because it's derived from a regular expression). It's therefore also context-free, and therefore also context-sensitive.
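To see the ReDoS behaviour from the question concretely, you can time Python's backtracking engine on a near-miss input. In this sketch a final b is appended to both patterns so that the subject almost matches (exact timings will vary):

import re
import time

evil = re.compile(r"(a+)+b")    # exponentially many ways to split the run of a's
safe = re.compile(r"a+b")       # the equivalent expression, per the answer above

subject = "a" * 22 + "c"        # almost matches: the final 'b' is missing
for pat in (safe, evil):
    t0 = time.perf_counter()
    assert pat.match(subject) is None    # both fail, but 'evil' takes far longer
    print(pat.pattern, time.perf_counter() - t0)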

Does lookaround affect which languages can be matched by regular expressions?

There are some features in modern regex engines which allow you to match languages that couldn't be matched without that feature. For example the following regex using back references matches the language of all strings that consist of a word that repeats itself: (.+)\1. This language is not regular and can't be matched by a regex that does not use back references.
Does lookaround also affect which languages can be matched by a regular expression? I.e. are there any languages that can be matched using lookaround that couldn't be matched otherwise? If so, is this true for all flavors of lookaround (negative or positive lookahead or lookbehind) or just for some of them?
The answer to the question you ask, which is whether a larger class of languages than the regular languages can be recognised with regular expressions augmented by lookaround, is no.
A proof is relatively straightforward, but an algorithm to translate a regular expression containing lookarounds into one without is messy.
First: note that you can always negate a regular expression (over a finite alphabet). Given a complete deterministic finite automaton (DFA) that recognises the language generated by the expression, you can simply exchange all the accepting states for non-accepting states to get a DFA that recognises exactly the complement of that language, for which there is a family of equivalent regular expressions.
Second: because regular languages (and hence regular expressions) are closed under negation, they are also closed under intersection, since A intersect B = neg(neg(A) union neg(B)) by De Morgan's laws. In other words, given two regular expressions, you can find another regular expression that matches exactly the strings matched by both.
This allows you to simulate lookaround expressions. For example, u(?=v)w matches exactly those strings that match both uw and uv.* (that is, u followed by something beginning with v).
For negative lookahead you need the regular expression equivalent of the set-theoretic A\B, which is just A intersect neg(B), or equivalently neg(neg(A) union B). Thus for any regular expressions r and s you can find a regular expression r-s which matches those strings that match r but do not match s. In negative lookahead terms: u(?!v)w matches exactly those strings which match uw but not uv.*.
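These closure constructions are easy to make concrete: complement a complete DFA by flipping its accepting states, and intersect two DFAs with the product construction. A minimal sketch (the encoding and all names are mine):

from itertools import product

# A complete DFA is a tuple (states, alphabet, delta, start, accepting).
def complement(dfa):
    states, alpha, delta, start, acc = dfa
    return (states, alpha, delta, start, states - acc)

def intersect(d1, d2):
    s1, alpha, t1, q1, a1 = d1
    s2, _, t2, q2, a2 = d2
    states = set(product(s1, s2))
    delta = {((p, q), c): (t1[(p, c)], t2[(q, c)])
             for (p, q) in states for c in alpha}
    acc = {(p, q) for (p, q) in states if p in a1 and q in a2}
    return (states, alpha, delta, (q1, q2), acc)

def accepts(dfa, w):
    _, _, delta, state, acc = dfa
    for c in w:
        state = delta[(state, c)]
    return state in acc

# Example: even length, and containing an 'a', over {a, b}.
even = ({"e", "o"}, {"a", "b"},
        {("e", "a"): "o", ("e", "b"): "o", ("o", "a"): "e", ("o", "b"): "e"},
        "e", {"e"})
has_a = ({"n", "y"}, {"a", "b"},
         {("n", "a"): "y", ("n", "b"): "n", ("y", "a"): "y", ("y", "b"): "y"},
         "n", {"y"})
assert accepts(intersect(even, has_a), "ab")
# Difference, as above: L(r) \ L(s) = L(r) intersect complement(L(s)).
diff = intersect(even, complement(has_a))
assert accepts(diff, "bb") and not accepts(diff, "ab")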
There are three reasons why lookaround is useful.
First, because the negation of a regular expression can result in something much less tidy. For example q(?!u)=q($|[^u]).
Second, regular expressions do more than match strings; they also consume characters - or at least that's how we like to think about them. For example, in Python I care about .start() and .end(), thus of course:
>>> import re
>>> re.search('q($|[^u])', 'Iraq!').end()
5
>>> re.search('q(?!u)', 'Iraq!').end()
4
Third, and I think this is a pretty important reason, negation of regular expressions does not lift nicely over concatenation. neg(a)neg(b) is not the same thing as neg(ab), which means that you cannot translate a lookaround out of the context in which you find it - you have to process the whole string. I guess that makes it unpleasant for people to work with and breaks people's intuitions about regular expressions.
I hope I have answered your theoretical question (it's late at night, so forgive me if I am unclear). I agree with a commentator who said that this does have practical applications. I met very much the same problem when trying to scrape some very complicated web pages.
EDIT
My apologies for not being clearer: I do not believe you can give a proof of regularity of regular expressions + lookarounds by structural induction, my u(?!v)w example was meant to be just that, an example, and an easy one at that. The reason a structural induction won't work is because lookarounds behave in a non-compositional way - the point I was trying to make about negations above. I suspect any direct formal proof is going to have lots of messy details. I have tried to think of an easy way to show it but cannot come up with one off the top of my head.
To illustrate using Josh's first example of ^([^a]|a(?=..b))*$: this is equivalent to a 7 state DFSA with all states accepting:
A --a--> B
B --a--> C
B --not a--> E
C --a--> D
C --not a--> G
D --b--> G
E --a--> F
E --not a--> H
F --b--> E
G --b--> H
H --b--> A
The regular expression for state A alone looks like:
^(a([^a](ab)*[^a]|a(ab|[^a])*b)b)*$
In other words any regular expression you are going to get by eliminating lookarounds will in general be much longer and much messier.
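One practical way to check such hand-eliminations is brute force: enumerate every short string over the alphabet and report any input on which the lookaround regex and a candidate lookaround-free regex disagree (a sketch using Python's re; it reports disagreements rather than assuming equivalence):

import re
from itertools import product

look = re.compile(r"^([^a]|a(?=..b))*$")
plain = re.compile(r"^(a([^a](ab)*[^a]|a(ab|[^a])*b)b)*$")

for n in range(12):
    for w in map("".join, product("ab", repeat=n)):
        if bool(look.match(w)) != bool(plain.match(w)):
            print("disagree on:", repr(w))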
To respond to Josh's comment: yes, I do think the most direct way to prove the equivalence is via the FSA. What makes this messier is that the usual way to construct an FSA is via a non-deterministic machine - it's much easier to express u|v as simply the machine constructed from the machines for u and v with an epsilon transition to the two of them. Of course this is equivalent to a deterministic machine, but at the risk of exponential blow-up in the number of states. Negation, on the other hand, is much easier to do via a deterministic machine.
The general proof will involve taking the cartesian product of two machines and selecting those states you wish to retain at each point you want to insert a lookaround. The example above illustrates what I mean to some extent.
My apologies for not supplying a construction.
FURTHER EDIT:
I have found a blog post which describes an algorithm for generating a DFA out of a regular expression augmented with lookarounds. It's neat because the author extends the idea of an NFA-ε with "tagged epsilon transitions" in the obvious way, and then explains how to convert such an automaton into a DFA.
I thought something like that would be a way to do it, but I'm pleased that someone has written it up. It was beyond me to come up with something so neat.
As the other answers claim, lookarounds don't add any extra power to regular expressions.
I think we can show this using the following:
One Pebble 2-NFA (see the Introduction section of the paper which refers to it).
The 1-pebble 2-NFA does not deal with nested lookaheads, but we can use a variant of multi-pebble 2-NFAs (see the section below).
Introduction
A 2-NFA is a non-deterministic finite automaton which has the ability to move either left or right on its input.
A one-pebble machine is one where the machine can place a pebble on the input tape (i.e. mark a specific input symbol with a pebble) and make possibly different transitions based on whether there is a pebble at the current input position or not.
It is known that the one-pebble 2-NFA has the same power as a regular DFA.
Non-nested Lookaheads
The basic idea is as follows:
The 2-NFA allows us to backtrack (or 'front-track') by moving forward or backward on the input tape. So for a lookahead we can do the match for the lookahead regular expression and then backtrack over what we consumed in matching the lookahead expression. In order to know exactly when to stop backtracking, we use the pebble! We drop the pebble before we enter the DFA for the lookahead to mark the spot where the backtracking needs to stop.
Thus at the end of running our string through the pebble 2-NFA, we know whether we matched the lookahead expression or not, and the input left (i.e. what is left to be consumed) is exactly what is required to match the rest.
So for a lookahead of the form u(?=v)w
We have the DFAs for u, v and w.
From the accepting state (yes, we can assume there is only one) of the DFA for u, we make an ε-transition to the start state of the DFA for v, marking the input with a pebble.
From an accepting state for v, we ε-transition to a state which keeps moving the input head left until it finds the pebble, and then transitions to the start state of w.
From a rejecting state of v, we ε-transition to a state which keeps moving left until it finds the pebble, and then transitions to the accepting state of u (i.e. where we left off).
The proofs used for regular NFAs to show that r1|r2, r*, etc. are regular carry over to these one-pebble 2-NFAs. See http://www.coli.uni-saarland.de/projects/milca/courses/coal/html/node41.html#regularlanguages.sec.regexptofsa for more info on how the component machines are put together to give the bigger machine for the r* expression etc.
The reason the above proofs for r* etc. work is that the backtracking ensures that the input pointer is always at the right spot when we enter the component NFAs for repetition. Also, if a pebble is in use, then it is being processed by one of the lookahead component machines. Since there are no transitions from one lookahead machine to another without completely backtracking and getting back the pebble, a one-pebble machine is all that is needed.
For example, consider ([^a] | a(?=..b))*
and the string abbb.
We have abbb, which goes through the peb2nfa for a(?=..b), at the end of which we are at the state (bbb, matched) (i.e. the input bbb is remaining, and it has matched 'a' followed by '..b'). Now because of the *, we go back to the beginning (see the construction in the link above) and enter the DFA for [^a]. Match b, go back to the beginning, enter [^a] again two times, and then accept.
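The pebble idea can also be phrased operationally: save the input position (drop the pebble), run the lookahead matcher, then resume from the saved position. Here is a toy sketch of that idea for this example - an illustration only, not the formal construction:

# Toy illustration of the pebble idea for a(?=..b) inside ([^a]|a(?=..b))*.
def match_lookahead(s, i, match_v):
    pebble = i                  # drop the pebble at the current position
    ok = match_v(s, i)          # run the lookahead machine
    return ok, pebble           # "backtrack": resume from the pebble

def v_matcher(s, i):            # does '..b' match starting at position i?
    return i + 3 <= len(s) and s[i + 2] == "b"

def accepts(s):
    i = 0
    while i < len(s):
        if s[i] != "a":         # the [^a] branch consumes one symbol
            i += 1
        else:                   # the a(?=..b) branch
            ok, i2 = match_lookahead(s, i + 1, v_matcher)
            if not ok:
                return False
            i = i2              # continue from where the pebble was dropped
    return True

assert accepts("abbb") and not accepts("ab")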
Dealing with Nested Lookaheads
To handle nested lookaheads we can use a restricted version of k-pebble 2NFA as defined here: Complexity Results for Two-Way and Multi-Pebble Automata and their Logics (see Definition 4.1 and Theorem 4.2).
In general, 2-pebble automata can accept non-regular sets, but with the following restrictions, k-pebble automata can be shown to be regular (Theorem 4.2 in the above paper).
If the pebbles are P_1, P_2, ..., P_k:
P_{i+1} may not be placed unless P_i is already on the tape, and P_i may not be picked up unless P_{i+1} is not on the tape. Basically, the pebbles need to be used in a LIFO fashion.
Between the time P_{i+1} is placed and the time that either P_{i} is picked up or P_{i+2} is placed, the automaton can traverse only the subword located between the current location of P_{i} and the end of the input word that lies in the direction of P_{i+1}. Moreover, in this sub-word, the automaton can act only as a 1-pebble automaton with Pebble P_{i+1}. In particular it is not allowed to lift up, place or even sense the presence of another pebble.
So if v is a nested lookahead expression of depth k, then (?=v) is a nested lookahead expression of depth k+1. When we enter a lookahead machine within, we know exactly how many pebbles have been placed so far, and so we can determine exactly which pebble to place; and when we exit that machine, we know which pebble to lift. All machines at depth t are entered by placing pebble t and exited (i.e. we return to processing a depth t-1 machine) by removing pebble t. Any run of the complete machine looks like a recursive DFS traversal of a tree, and the above two restrictions of the multi-pebble machine can be catered to.
Now when you combine expressions: for r·r1, since you concatenate, the pebble numbers of r1 must be incremented by the depth of r. For r* and r|r1 the pebble numbering remains the same.
Thus any expression with lookaheads can be converted to an equivalent multi-pebble machine with the above restrictions in pebble placement and so is regular.
Conclusion
This basically addresses the drawback in Francis's original proof: it prevents the lookahead expressions from consuming anything that is required for future matches.
Since lookbehinds are just finite strings (not really regexes), we can deal with them first, and then deal with the lookaheads.
Sorry for the incomplete writeup, but a complete proof would involve drawing a lot of figures.
It looks right to me, but I will be glad to know of any mistakes (which I seem to be fond of :-)).
I agree with the other posts that lookaround is regular (meaning that it does not add any fundamental capability to regular expressions), but I have an argument for it that is simpler IMO than the other ones I have seen.
I will show that lookaround is regular by providing a DFA construction. A language is regular if and only if it has a DFA that recognizes it. Note that Perl doesn't actually use DFAs internally (see this paper for details: http://swtch.com/~rsc/regexp/regexp1.html) but we construct a DFA for purposes of the proof.
The traditional way of constructing a DFA for a regular expression is to first build an NFA using Thompson's Algorithm. Given two regular expression fragments r1 and r2, Thompson's Algorithm provides constructions for concatenation (r1r2), alternation (r1|r2), and repetition (r1*) of regular expressions. This allows you to build an NFA bit by bit that recognizes the original regular expression. See the paper above for more details.
To show that positive and negative lookahead are regular, I will provide a construction for concatenation of a regular expression u with positive or negative lookahead: (?=v) or (?!v). Only concatenation requires special treatment; the usual alternation and repetition constructions work fine.
The construction for both u(?=v) and u(?!v) is the same:
connect every final state of the existing NFA for u both to an accept state and to an NFA for v, modified as follows. The function f(v) is defined as:
Let aa(v) be a function on an NFA v that changes every accept state into an "anti-accept state". An anti-accept state is defined to be a state that causes the match to fail if any path through the NFA ends in this state for a given string s, even if a different path through v for s ends in an accept state.
Let loop(v) be a function on an NFA v that adds a self-transition on any accept state. In other words, once a path leads to an accept state, that path can stay in the accept state forever no matter what input follows.
For negative lookahead, f(v) = aa(loop(v)).
For positive lookahead, f(v) = aa(neg(v)).
To provide an intuitive example of why this works, I will use the regex (b|a(?=.b))+, which is a slightly simplified version of the regex I proposed in the comments on Francis's proof. If we use my construction along with the traditional Thompson constructions, we end up with an NFA of the following shape:
The e's are epsilon transitions (transitions that can be taken without consuming any input) and the anti-accept states are labeled with an X. In the left half of the graph you see the representation of (a|b)+: any a or b puts the graph in an accept state, but also allows a transition back to the begin state so we can do it again. But note that every time we match an a we also enter the right half of the graph, where we are in anti-accept states until we match "any" followed by a b.
This is not a traditional NFA because traditional NFAs don't have anti-accept states. However we can use the traditional NFA->DFA algorithm to convert this into a traditional DFA. The algorithm works like usual, where we simulate multiple runs of the NFA by making our DFA states correspond to subsets of the NFA states we could possibly be in. The one twist is that we slightly augment the rule for deciding if a DFA state is an accept (final) state or not. In the traditional algorithm a DFA state is an accept state if any of the NFA states was an accept state. We modify this to say that a DFA state is an accept state if and only if:
>= 1 of the NFA states is an accept state, and
0 of the NFA states are anti-accept states.
This algorithm will give us a DFA that recognizes the regular expression with lookahead. Ergo, lookahead is regular. Note that lookbehind requires a separate proof.
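The modified subset construction is mechanical enough to sketch. Below, an NFA is a transition dictionary plus an accept set and an anti-accept set, and a DFA subset-state is accepting iff it contains at least one accept state and no anti-accept state (all names are mine; for simplicity the sketch assumes an ε-free NFA, with ε-closure added in the usual way):

from collections import deque

def nfa_to_dfa(alphabet, delta, start, accept, anti_accept):
    # delta maps (state, symbol) -> set of states.
    dfa_start = frozenset([start])
    dfa_delta, accepting = {}, set()
    seen, queue = {dfa_start}, deque([dfa_start])
    while queue:
        S = queue.popleft()
        # Modified rule: >= 1 accept state and 0 anti-accept states.
        if (S & accept) and not (S & anti_accept):
            accepting.add(S)
        for c in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, c), ()))
            dfa_delta[(S, c)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta, dfa_start, accepting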
I have a feeling that there are two distinct questions being asked here:
1. Are Regex engines that incorporate "lookaround" more powerful than Regex engines that don't?
2. Does "lookaround" empower a Regex engine with the ability to parse languages that are more complex than those generated from a Chomsky Type 3 (Regular) grammar?
The answer to the first question in a practical sense is yes. Lookaround gives a Regex engine that uses this feature fundamentally more power than one that doesn't, because it provides a richer set of "anchors" for the matching process. Lookaround lets you define an entire Regex as a possible anchor point (zero-width assertion). You can get a pretty good overview of the power of this feature here.
Lookaround, although powerful, does not lift the Regex engine beyond the theoretical limits placed on it by a Type 3 Grammar. For example, you will never be able to reliably parse a language based on a Context Free (Type 2) Grammar using a Regex engine equipped with lookaround. Regex engines are limited to the power of a Finite State Automaton, and this fundamentally restricts the expressiveness of any language they can parse to the level of a Type 3 Grammar. No matter how many "tricks" are added to your Regex engine, languages generated via a Context Free Grammar will always remain beyond its capabilities. Parsing a Context Free (Type 2) grammar requires a pushdown automaton to "remember" where it is in a recursive language construct. Anything that requires a recursive evaluation of the grammar rules cannot be parsed using Regex engines.
To summarize: Lookaround provides some practical benefits to Regex engines but does not "alter the game" on a theoretical level.
EDIT
Is there some grammar with a complexity somewhere between Type 3 (Regular) and Type 2 (Context Free)?
I believe the answer is no. The reason is that there is no theoretical limit placed on the size of the NFA/DFA needed to describe a Regular language. It may become arbitrarily large and therefore impractical to use (or specify). This is where dodges such as "lookaround" are useful. They provide a short-hand mechanism to specify what would otherwise lead to very large/complex NFA/DFA specifications. They do not increase the expressiveness of Regular languages, they just make specifying them more practical. Once you get this point, it becomes clear that there are a lot of "features" that could be added to Regex engines to make them more useful in a practical sense - but nothing will make them capable of going beyond the limits of a Regular language.
The basic difference between a Regular and a Context Free language is that a Regular language does not contain recursive elements. In order to evaluate a recursive language you need a Pushdown Automaton to "remember" where you are in the recursion. An NFA/DFA does not stack state information, so it cannot handle the recursion. So given a non-recursive language definition there will be some NFA/DFA (but not necessarily a practical Regex expression) to describe it.
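To make the recursion point concrete: the canonical context-free, non-regular language a^n b^n needs unbounded memory, which a counter (a degenerate stack) provides and a finite automaton does not. A sketch:

# PDA-style recogniser for { a^n b^n : n >= 0 }; the counter plays the stack.
def a_n_b_n(s: str) -> bool:
    depth = 0
    seen_b = False
    for c in s:
        if c == "a":
            if seen_b:
                return False     # an 'a' after a 'b' is never valid
            depth += 1
        elif c == "b":
            seen_b = True
            depth -= 1
            if depth < 0:
                return False
        else:
            return False
    return depth == 0            # a regex could only hard-code finitely many depths

assert a_n_b_n("aaabbb") and not a_n_b_n("aab")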

Is this context-free grammar a regular expression?

I have a grammar defined as follows:
A -> aA*b | empty_string
Is A a regular expression? I'm confused as to how to interpret a BNF grammar.
No, this question doesn't actually have to do with regular expressions: what you have is a grammar. Context-free grammars can specify languages that can't be described by regular expressions.
Here, A is a non-terminal; that is, it's a symbol that must be expanded by a production rule. Given that you only have one rule, it is also your start symbol - any production in this grammar must start with an A.
The production rule is
(1) A -> aA*b |
(2) empty_string
a and b are terminal symbols - they are in the alphabet of the language, and cannot be expanded. When your derived string has no more nonterminals in it, you are done.
So this language specifies words that are like balanced parentheses, except with a and b instead of ( and ).
For instance, you could produce ab as follows:
A -> aA*b (using 1)
aAb -> ab (using 2)
Similarly, you could produce aabb:
A -> aA*b (1)
aAb -> aaA*bb (1)
aaAbb -> aabb (2)
Or even aababb:
A -> aA*b
aA*b -> aabA*b
aabA*b -> aababA*b
aababA*b -> aababb
Get it? The star symbol may be a bit confusing because you have seen it in regular expressions, but actually it does the same thing here as there. It is called a Kleene closure and it represents all the words you can make with 0 or more As.
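If it helps, here is a small membership checker that follows the production directly: a word is either empty, or a single outermost a...b pair wrapping a sequence of smaller such words (a sketch; the function name is mine):

def in_language(s: str) -> bool:
    # Follows A -> aA*b | empty_string.
    if s == "":
        return True
    depth = 0
    for i, c in enumerate(s):
        if c == "a":
            depth += 1
        elif c == "b":
            depth -= 1
        else:
            return False
        if depth < 0:
            return False
        if depth == 0 and i != len(s) - 1:
            return False         # the outermost pair closed too early
    return depth == 0

# The derivations above: ab, aabb and aababb are all in the language.
assert all(in_language(w) for w in ("", "ab", "aabb", "aababb"))
# 'abab' is balanced but not derivable: it has two outermost pairs.
assert not in_language("abab")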
Regular Expressions generate Regular languages and can be parsed with State Machines.
BNF grammars are Context Free Grammars, which generate Context Free languages and can be parsed with Push Down Automata (stack machines).
Context Free Grammars can do everything Regular Grammars can and more.
A appears to be a BNF grammar rule. I'm not really sure why you have confused this with a regular expression. Are you confused because it has a * in it? Not everything that has a * in it is a regular expression.

Regular expressions Equivalence

Is there a way to find out if two arbitrary regular expressions are equivalent? It looks like a complex problem to me, but there might be some DFA simplification mechanism or something?
To test equivalence you can compute the minimal DFAs for the expressions and compare them.
Testability of equality is one of the classical properties of regular expressions. (N.B. This doesn't hold if you're really talking about Perl regular expressions or some other technically nonregular superlanguage.)
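As an alternative to full minimisation, you can decide equivalence of two complete DFAs directly by walking their product automaton and checking that every reachable pair of states agrees on acceptance (a sketch with DFAs given as plain tables; the encoding is mine):

from collections import deque

def equivalent(d1, d2, alphabet):
    # Each DFA is (delta, start, accepting), with delta[(state, symbol)].
    t1, q1, a1 = d1
    t2, q2, a2 = d2
    seen, queue = {(q1, q2)}, deque([(q1, q2)])
    while queue:
        p, q = queue.popleft()
        if (p in a1) != (q in a2):
            return False         # a reachable pair disagrees: not equivalent
        for c in alphabet:
            nxt = (t1[(p, c)], t2[(q, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# A three-state and a minimal two-state DFA for "binary strings ending in 0".
d3 = ({("A", "0"): "B", ("A", "1"): "C", ("B", "0"): "B",
       ("B", "1"): "C", ("C", "0"): "B", ("C", "1"): "C"}, "A", {"B"})
d2 = ({("X", "0"): "Y", ("X", "1"): "X",
       ("Y", "0"): "Y", ("Y", "1"): "X"}, "X", {"Y"})
assert equivalent(d3, d2, "01")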
Turn your REs into generalised finite automata A and B, then construct a new automaton A-B such that the accepting states of A have null transitions to the start states of B, and the accepting states of B are inverted. This gives you an automaton that accepts all those strings accepted by A, except for all those accepted by B.
Do the same for B-A, and reduce both to pure FAs. If an FA has no accepting states accessible from a start state then it accepts the empty language. If you can show that both A-B and B-A are empty, you've shown that A = B.
Edit Heh, I can't believe no one noticed the gigantic error there -- an intentional one, of course :-p
The automaton A-B as described will accept those strings whose first half is accepted by A and whose second half is not accepted by B. Building the desired A-B is a slightly trickier process. I can't think of it off the top of my head, but I do know it's well-defined (and likely involves creating states to represent the products of accepting states in A and non-accepting states in B).
This really depends on what you mean by regular expressions. As the other posters pointed out, reducing both expressions to their minimal DFAs should work, but it only works for pure regular expressions.
Some of the constructs used in real-world regex libraries (backreferences in particular) give them the power to express languages that aren't regular, so the DFA algorithm won't work for them. For example, the regex ([a-z]*) \1 matches a doubled occurrence of the same word separated by a space ("a a" and "b b", but not "b a" nor "a b"). This cannot be recognized by a finite automaton at all.
These two Perlmonks threads discuss this question (specifically, read blokhead's responses):
Comparative satisfiability of regexps
Testing regex equivalence