C++ Business rule expression parser/evaluation [closed] - c++

I'm looking for suggestions for portable, lightweight libraries written in C++ that support parsing and evaluating mathematical and business-rule expressions. I understand C++ doesn't provide such functionality in the standard library.
The basic requirement is as follows:
The expressions to be evaluated will consist of numbers, strings, and variables representing either numbers or strings.
Some of the expressions will be evaluated many times per second (1,000-2,000 times), so high-performance evaluation is a requirement.
In the original version of the project at my company, we encoded all the business rules as classes derived from a base expression class. The problem is that this approach does not scale well as the number of expressions increases.
I've googled around, but most "libraries" I could find are little more than examples of the shunting-yard algorithm; most expression parsers perform parsing and evaluation in a single step, making them unsuitable for continuous re-evaluation, and most support only numbers.
What I'm looking for:
Library written in C++ (C++03 or C++11)
Stable/production worthy
Fast evaluations
Portable (win32/linux)
Suggestions for building a high-performance business rules engine are also welcome.
Example business rule:
'rule_result = (remaining_items < min_items) and (item == "beach ball")'

See the C++ Mathematical Expression Toolkit Library (ExprTk) outlined in this answer.
But if you really want speed, consider compiling the expressions as C/C++ directly, then loading them dynamically (shared objects/DLLs).
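To illustrate the compile-once/evaluate-many pattern with ExprTk (a minimal sketch, assuming its usual single-header API; the variable names are simply taken from the question's example rule):

#include <cstdio>
#include <string>
#include "exprtk.hpp"   // single-header ExprTk

int main() {
    double remaining_items = 0, min_items = 5;
    std::string item = "beach ball";

    exprtk::symbol_table<double> symbols;
    symbols.add_variable("remaining_items", remaining_items);
    symbols.add_variable("min_items", min_items);
    symbols.add_stringvar("item", item);

    exprtk::expression<double> rule;
    rule.register_symbol_table(symbols);

    // compile once...
    exprtk::parser<double> parser;
    if (!parser.compile(
            "(remaining_items < min_items) and (item == 'beach ball')", rule))
        return 1;

    // ...then evaluate repeatedly as the bound variables change
    for (int i = 0; i < 8; ++i) {
        remaining_items = i;
        std::printf("%d -> %s\n", i, rule.value() != 0.0 ? "true" : "false");
    }
}

The property that matters for the 1,000-2,000 evaluations per second requirement is that parser.compile runs once, while rule.value() only walks the already-built expression tree.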

Have you considered generating your own parser with Bison + Flex? The generated parser is a fast table-driven LALR implementation that is easy to write a grammar for, and it supports evaluating expressions while you're parsing them, as well as building an AST for repeated evaluation.
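To make the parse-once/evaluate-many idea concrete independently of Bison/Flex (the grammar itself is omitted; the node types and environment below are made up for illustration), here is a minimal hand-rolled AST sketch in C++11:

// a tree for one rule is built once; variable values live in an environment,
// so the same tree can be re-evaluated as inputs change, without re-parsing
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Node {
    virtual ~Node() {}
    virtual double eval(const std::map<std::string, double>& env) const = 0;
};

struct Var : Node {
    std::string name;
    explicit Var(std::string n) : name(std::move(n)) {}
    double eval(const std::map<std::string, double>& env) const {
        return env.at(name);                  // look the variable up at eval time
    }
};

struct Less : Node {
    std::unique_ptr<Node> lhs, rhs;
    Less(std::unique_ptr<Node> l, std::unique_ptr<Node> r)
        : lhs(std::move(l)), rhs(std::move(r)) {}
    double eval(const std::map<std::string, double>& env) const {
        return lhs->eval(env) < rhs->eval(env) ? 1.0 : 0.0;
    }
};

int main() {
    // tree for: remaining_items < min_items
    std::unique_ptr<Node> rule(new Less(
        std::unique_ptr<Node>(new Var("remaining_items")),
        std::unique_ptr<Node>(new Var("min_items"))));

    std::map<std::string, double> env;
    env["min_items"] = 5;
    for (int i = 0; i < 8; ++i) {             // re-evaluate without re-parsing
        env["remaining_items"] = i;
        std::cout << i << " -> " << rule->eval(env) << '\n';
    }
}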

Related

regex worst possible complexity [closed]

I'm aware that regular expressions, as a concept, represent a single regular language, which can be processed via a DFA/NFA in O(n) time (with up to O(2^m) states in the DFA construction), where n is the size of the string and m the size of the regex. Most Stack Overflow discussions about the subject quote this awesome article, which proves it.
However, regexes as implemented in modern languages deal with more than regular languages. For instance, it's possible to recognize palindromes with such regexes, a context-free problem that, when solved via a pushdown automaton (PDA), is known to have O(n³) complexity.
I would like to know where exactly in the extended Chomsky hierarchy modern regex implementations (Perl's, or Python's re, for example) fit, or at least their worst possible complexity.
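No answer was posted here, but the worst case the question asks about is easy to provoke. A small demonstration, assuming a backtracking std::regex implementation (which is what the common standard libraries ship; this is an assumption, not something the question states): the classic pattern (a+)+b forces roughly exponential backtracking on a string of only 'a' characters.

#include <chrono>
#include <iostream>
#include <regex>
#include <string>

int main() {
    std::regex re("(a+)+b");               // nested quantifiers: pathological
    for (int n = 12; n <= 24; n += 4) {
        std::string s(n, 'a');             // no 'b': the match must fail, but
                                           // only after trying every partition
        auto t0 = std::chrono::steady_clock::now();
        bool m = std::regex_match(s, re);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "n=" << n << " match=" << m << " time="
                  << std::chrono::duration<double>(t1 - t0).count() << "s\n";
    }
}

On a Thompson-style engine such as RE2, the same match is linear in n, which is exactly the gap between the theory in the quoted article and what backtracking implementations do.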

How to create a list containing the LOC for each function (C++) [closed]

I want to make sure that the function body of each function fits on the screen. Therefore I want to generate a list that contains the LOC for each function (in a .cpp/.h file, or better, in all source code files in a directory). For example, the list could be a CSV file containing
foo.cpp,foobar,42
foo.cpp,foozar,13
goo.cpp,bla,666
if the file foo.cpp contains a function foobar which has a body of 42 lines, etc...
Can you recommend/suggest any tool?
The issue is simple. If you want accurate data no matter what C++ you encounter, you will need a full C++ parser with preprocessor capability. That's very hard to build due to the complex nature of C++ (now C++11 is pretty standard and C++14 is not far behind). Your choices are pretty much limited to:
Edison Design Group C++ front end
Clang
GCC
Our DMS Software Reengineering Toolkit with its C++ front end
Elsa (if it is still maintained)
These are big, complex engines and take effort to configure for your task (esp. GCC, which wants to be a compiler no matter what you want it to be). An additional complication that may or may not matter to you: Clang, GCC and Elsa don't handle the MS dialects, if I understand them correctly.
If you don't care whether you get the right answer all the time, you could build a very simple scanner that looks for apparent function heads, counting { ... } and ( ... ) to make sure you know where the function body terminates. You'll probably have to recognize namespace and class constructs, to know to look inside them for function declarations. This seems like the easiest solution, and thus the fastest in time and the least effort.
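A rough C++ sketch of that scanner (purely a heuristic: it ignores comments, string literals, and the preprocessor, and it reports the body's starting line rather than the function name, which would take extra work to extract):

// heuristic_loc.cpp -- any '{' that follows a ')' at brace depth zero is
// treated as the start of a function body; the line count runs until the
// matching '}'. Real-world code with braces in strings/comments will fool it.
#include <cctype>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    if (argc != 2) { std::cerr << "usage: heuristic_loc <file.cpp>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::string line;
    int depth = 0, bodyStart = 0, lineNo = 0;
    char lastSig = 0;                       // last non-whitespace character seen
    while (std::getline(in, line)) {
        ++lineNo;
        for (char c : line) {
            if (c == '{') {
                if (depth == 0 && lastSig == ')') bodyStart = lineNo;
                ++depth;
            } else if (c == '}') {
                --depth;
                if (depth == 0 && bodyStart != 0) {
                    std::cout << argv[1] << ",body_at_line_" << bodyStart
                              << ',' << (lineNo - bodyStart + 1) << '\n';
                    bodyStart = 0;
                }
            }
            if (!std::isspace(static_cast<unsigned char>(c))) lastSig = c;
        }
    }
}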
You can write a plugin for clang. It is well suited for these kinds of extensions.
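As a related, lighter-weight option (my substitution, not a compiler plugin proper), the libclang C API can visit function definitions and measure their extents. A sketch, assuming libclang is installed:

// build: c++ loc.cpp -lclang
// Prints file,function,lines for each function/method definition. Note the
// extent covers the whole definition, signature included, not just the body.
#include <clang-c/Index.h>
#include <cstdio>

static CXChildVisitResult visit(CXCursor c, CXCursor, CXClientData) {
    CXCursorKind k = clang_getCursorKind(c);
    if ((k == CXCursor_FunctionDecl || k == CXCursor_CXXMethod) &&
        clang_isCursorDefinition(c)) {
        CXSourceRange range = clang_getCursorExtent(c);
        unsigned lineBeg = 0, lineEnd = 0;
        CXFile file;
        clang_getSpellingLocation(clang_getRangeStart(range),
                                  &file, &lineBeg, nullptr, nullptr);
        clang_getSpellingLocation(clang_getRangeEnd(range),
                                  nullptr, &lineEnd, nullptr, nullptr);
        CXString name = clang_getCursorSpelling(c);
        CXString fname = clang_getFileName(file);
        std::printf("%s,%s,%u\n", clang_getCString(fname),
                    clang_getCString(name), lineEnd - lineBeg + 1);
        clang_disposeString(name);
        clang_disposeString(fname);
    }
    return CXChildVisit_Recurse;            // descend into namespaces/classes
}

int main(int argc, char* argv[]) {
    if (argc != 2) return 1;
    CXIndex idx = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        idx, argv[1], nullptr, 0, nullptr, 0, CXTranslationUnit_None);
    if (!tu) return 1;
    clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, nullptr);
    clang_disposeTranslationUnit(tu);
    clang_disposeIndex(idx);
}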

NFA DFA and Regex to Transition Table [closed]

I have been looking for an algorithm that takes as input a regular expression (given as a string), converts it into an NFA and then a DFA, and actually prints out the transition table of the corresponding final DFA.
I'm thus wondering whether there is already an algorithm, or a C or Python library, that does that, or whether you have suggestions for algorithms I could implement.
Thank you.
I'm not certain whether any of these links might help you.
The first provides a very simple NFA/DFA implementation in Python, with conversion from NFA to DFA. It doesn't generate the NFA from a regex, though that is not so difficult to add. The second provides a long discussion of NFA vs DFA, including numerous code examples (mostly in C) and links to external libraries that I know little about. The third and fourth links provide the source code of two regex engine implementations developed by the second article's author, including parsing from regex to NFA, then conversion from NFA to DFA. Note, however, that I haven't had a close look at either of those projects.
https://gist.github.com/Arachnid/491973
http://swtch.com/~rsc/regexp/
https://code.google.com/p/re1/source/browse/
https://code.google.com/p/re2/source/browse/
Otherwise, I would mention that most real-world regex engines use an NFA, not a DFA, because of some extended features that simply can't be implemented with a DFA. So if none of the links above helps you, you might have some luck looking at compiler-compilers, since they are the ones that actually use DFAs.
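If you do end up implementing it yourself, the core NFA-to-DFA step is the classic subset construction. Here is a minimal C++ sketch over a hard-coded toy NFA that prints the resulting DFA transition table (the NFA encoding is made up for the example, and epsilon transitions are omitted for brevity):

// subset construction: each DFA state is a set of NFA states; the worklist
// discovers new DFA states as transitions are computed
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

typedef std::set<int> StateSet;

int main() {
    // NFA for (a|b)*ab : state 0 is the start, state 2 is accepting
    std::map<int, std::map<char, StateSet> > nfa;
    nfa[0]['a'].insert(0); nfa[0]['a'].insert(1);
    nfa[0]['b'].insert(0);
    nfa[1]['b'].insert(2);
    const std::string alphabet = "ab";
    const int nfaAccept = 2;

    std::map<StateSet, int> dfaId;
    std::vector<StateSet> worklist;
    StateSet start; start.insert(0);
    dfaId[start] = 0;
    worklist.push_back(start);
    std::map<int, std::map<char, int> > dfa;

    for (size_t i = 0; i < worklist.size(); ++i) {
        StateSet cur = worklist[i];
        for (size_t k = 0; k < alphabet.size(); ++k) {
            char c = alphabet[k];
            StateSet next;                     // union of moves on c
            for (StateSet::iterator s = cur.begin(); s != cur.end(); ++s)
                next.insert(nfa[*s][c].begin(), nfa[*s][c].end());
            if (dfaId.find(next) == dfaId.end()) {
                dfaId[next] = (int)worklist.size();
                worklist.push_back(next);      // newly discovered DFA state
            }
            dfa[dfaId[cur]][c] = dfaId[next];
        }
    }

    // print the transition table; '*' marks accepting DFA states
    for (size_t i = 0; i < worklist.size(); ++i) {
        std::cout << (worklist[i].count(nfaAccept) ? "*" : " ") << i;
        for (size_t k = 0; k < alphabet.size(); ++k)
            std::cout << "  " << alphabet[k] << "->" << dfa[(int)i][alphabet[k]];
        std::cout << '\n';
    }
}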

Library to check if two regular expressions are equal/isomorphic [closed]

I need a library which will take in two regular expressions and determine whether they are isomorphic (i.e., whether they match exactly the same set of strings).
For example, a|b is isomorphic to [ab].
As I understand it, a regular expression can be converted to an NFA, which in some cases can be efficiently converted to a DFA. The DFA can then be converted to a minimal DFA which, if I understand correctly, is unique, so these minimal DFAs can then be compared for equality. I realize that not all regular-expression NFAs can be efficiently transformed into DFAs (especially when they were generated from Perl regexps, which are not truly "regular"), in which case ideally the library would just return an error or some other indication that the conversion is not possible.
I see tons of articles and academic papers online about doing this (and even some programming assignments for classes asking students to do this), but I can't seem to find a library which implements this functionality. I would prefer a Python and/or C/C++ library, but a library in any language will do. Does anyone know of such a library? If not, does someone know of a library that comes close that I could use as a starting point?
Haven't tried it, but Regexp::Compare for Perl looks promising: two regexes are equivalent if the language of the first is a subset of that of the second, and vice versa.
The brics automaton library for Java supports this.
It can be used to convert regular expressions to minimal deterministic finite automata, and to check whether these are equivalent:
import dk.brics.automaton.Automaton;
import dk.brics.automaton.RegExp;

public static boolean isIsomorphic(String regexA, String regexB) {
    Automaton a = new RegExp(regexA).toAutomaton();
    Automaton b = new RegExp(regexB).toAutomaton();
    return a.equals(b);
}
Note that this library only works for regular expressions that describe a regular language: it does not support some more advanced features, such as backreferences.

BNF grammar test case generation [closed]

Does anyone have any experience with a tool that generates test strings from a BNF grammar that could then be fed into a unit test?
I don't have an answer to the tool question, but I will say it is fairly easy in any text-processing language (Perl/Python/etc.) to randomly generate sentences from a BNF grammar, and only slightly more verbose in a bigger language (Java/C/etc.), so it shouldn't be too hard to roll your own; a sketch follows below.
The problem with this, of course, is that it can only generate strings in the grammar, and unless your grammar is very simple, the test space is infinitely large.
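For what it's worth, a roll-your-own generator along those lines might look like this in C++ (the grammar representation and the depth-limit policy are arbitrary choices for the sketch):

// random sentence generation from a toy BNF-ish grammar; a depth limit steers
// expansion toward the first (assumed simplest) alternative so recursion ends
#include <cstdlib>
#include <iostream>
#include <map>
#include <string>
#include <vector>

typedef std::vector<std::string> Production;            // sequence of symbols
typedef std::map<std::string, std::vector<Production> > Grammar;

std::string generate(const Grammar& g, const std::string& sym, int depth) {
    Grammar::const_iterator it = g.find(sym);
    if (it == g.end()) return sym;                      // terminal: emit as-is
    const std::vector<Production>& alts = it->second;
    size_t pick = depth <= 0 ? 0 : std::rand() % alts.size();
    std::string out;
    for (size_t i = 0; i < alts[pick].size(); ++i)
        out += generate(g, alts[pick][i], depth - 1);
    return out;
}

int main() {
    Grammar g;
    // <expr> ::= <num> | <expr> "+" <expr> | "(" <expr> ")"
    g["expr"].push_back(Production(1, "num"));
    Production sum; sum.push_back("expr"); sum.push_back("+"); sum.push_back("expr");
    g["expr"].push_back(sum);
    Production par; par.push_back("("); par.push_back("expr"); par.push_back(")");
    g["expr"].push_back(par);
    g["num"].push_back(Production(1, "1"));
    g["num"].push_back(Production(1, "42"));

    for (int i = 0; i < 5; ++i)
        std::cout << generate(g, "expr", 8) << '\n';
}

As the answer above notes, this only ever produces strings that are in the grammar; it cannot generate the negative test cases a parser also needs.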
I've done exactly as hazzen commented (using an embedded DSL in a scripting language). It was a mildly interesting exercise, but except for the most basic tests of e.g. parsing, it wasn't terribly useful. Most of my most interesting tests have to do with more sophisticated relationships than one can easily express in BNF (or any other context-free grammar).
If, say, you're developing a compiler, then you likely have an abstract syntax tree datatype. If so, you could write a function to generate a random AST; with that, you can print it to a string and feed that to your unit test. It's guaranteed to be a valid program this way, since you started with your AST.
If I were writing a compiler in Haskell or ML, this is what I would do, using QuickCheck.
Gramtest is one such tool: it can generate strings from arbitrary user-defined BNF grammars. You can read more details about the algorithm behind Gramtest here, and some practical tips on the tool are available here.