'javadoc' look-alike, using parser generator? - templates

I'm going to create a javadoc look-alike for the language I'm mainly using, but I was wondering: is it worth using a parser generator for this? My main reason for using a parser generator was that I could use templates for the HTML code, which could then be exported. I could also use PDF templates if I needed them.
Thanks,
William v. Doorn

If all you are going to do is extract the "Javadoc" comments, you don't need a full parser; after all, you only need to recognize the comments and regexps will likely do fine.
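For instance, here is a minimal sketch in C++ of the regexp approach, assuming Javadoc-style /** ... */ comments (the file name is hypothetical):

#include <fstream>
#include <iostream>
#include <regex>
#include <sstream>
#include <string>

int main() {
    // Read the whole source file into one string (file name is hypothetical).
    std::ifstream in("Example.src");
    std::stringstream buffer;
    buffer << in.rdbuf();
    const std::string source = buffer.str();

    // Match /** ... */ doc comments non-greedily, across newlines.
    const std::regex doc_comment(R"(/\*\*[\s\S]*?\*/)");

    for (auto it = std::sregex_iterator(source.begin(), source.end(), doc_comment);
         it != std::sregex_iterator(); ++it)
        std::cout << it->str() << "\n\n";
}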
If you want to extract information from the code and use it to augment the javadoc comments, you'll need not only a parser but also name and type resolution.
You can see the results of combining parsing, name/type resolution, and Javadoc comment extraction in the Java Source Code Browser, which produces Javadoc results along with fully hyperlinked source code cross-referenced into the Javadocs.
The machinery which produced this is a generalization of something like ANTLR. But there was little need for code templates to produce the HTML itself; all the hard work is in parsing and fact collection across the symbol tables.

Related

How should I document a Lua API/object model written in C++ code?

I am working on documenting a new and expanded Lua API for the game Bitfighter (http://bitfighter.org). Our Lua object model is a subset of the C++ object model, and the methods exposed to Lua that I need to document are a subset of the methods available in C++. I want to document only the items relevant to Lua, and ignore the rest.
For example, the object BfObject is the root of all the Lua objects, but is itself in the middle of the C++ object tree. BfObject has about 40 C++ methods, of which about 10 are relevant to Lua scripters. I wish to have our documentation show BfObject as the root object, and show only those 10 relevant methods. We would also need to show its children objects in a way that made the inheritance of methods clear.
For the moment we can assume that all the code is written in C++.
One idea would be to somehow mark the objects we want to document in a way that a system such as doxygen would know what to look at and ignore the rest. Another would be to preprocess the C++ code in such a way as to delete all the non-relevant bits, and document what remains with something like doxygen. (I actually got pretty far with this approach using luadoc, but could not find a way to make luadoc show object hierarchy.)
One thing that might prove helpful is that every Lua object class is registered in a consistent manner, along with its parent class.
There are a growing number of games out there that use Lua for scripting, and many of them have decent documentation. Does anyone have a good suggestion on how to produce it?
PS To clarify, I'm happy to use any tool that will do the job -- doxygen and luadoc are just examples that I am somewhat familiar with.
I have found a solution, which, while not ideal, works pretty well. I cobbled together a Perl script which rips through all the Bitfighter source code and produces a second set of "fake" source that contains only the elements I want. I can then run this secondary source through Doxygen and get a result that is 95% of what I'm looking for.
I'm declaring victory.
One advantage of this approach is that I can document the code in a "natural" way, and don't need to worry about marking what's in and what's out. The script is smart enough to figure it out from the code structure.
If anyone is interested, the Perl script is available in the Bitfighter source archive at https://code.google.com/p/bitfighter/source/browse/luadoc.pl. It is only about 80% complete, and is missing a few very important items (such as properly displaying function args), but the structure is there, and I am satisfied the process will work. The script will improve with time.
The (very preliminary) results of the process can be seen at http://bitfighter.org/luadocs/index.html. The templates have hardly been modified, so it has a very "stock" look, but it shows that things more-or-less work.
Since some commenters have suggested that it is impossible to generate good documentation with Doxygen, I should note that almost none of our inline docs have been added yet. To get a sense of what they will look like, see the Teleporter class. It's not super good, but I think it does refute the notion that Doxygen always produces useless docs.
My major regret at this point is that my solution is really a one-off and does not address what I think is a growing need in the community. Perhaps at some point we'll standardize on a way of merging C++ and Lua and the task of creating a generalized documentation tool will be more manageable.
PS You can see what the markup in the original source files looks like at https://code.google.com/p/bitfighter/source/browse/zap/teleporter.cpp (search for #luaclass).
Exclude your C++ code by namespace (a class works as well), but not the Lua code, with
EXCLUDE_SYMBOLS = myhier_cpp::*
in the Doxygen config file, or cherry-pick what to exclude by using
/// @cond
class aaa {
...
...
};
/// @endcond
in your C++ code.
I personally think that separating by namespace is better, since it reflects the separation in code and documentation, which leads to a namespace-based scheme for separating pure C++ from the Lua bindings.
Separating via exclusion is probably the most targeted approach, but it would involve an extra tool to parse the code, mark up the relevant Lua parts, and add the exclusions to the code. (Additionally, you could render special info like graphs separately with this markup and add them as images to your documentation; at least that's easy to do with Doxygen.) Since there has to be some kind of indication of Lua code, the markup is probably not too difficult to derive.
Another solution is to use LDoc. It also allows you to write C++ comments, which will be parsed by LDoc and included in the documentation.
An advantage is that you can use the same tool to document your Lua code as well. A drawback is that the project seems to be unmaintained. It may also not be possible to document complex object hierarchies like the ones the questioner mentioned.
I forked it myself for some small adjustments regarding C++. Have a look here.

How do HTML parsers work?

I've seen the humorous threads and read the warnings, and I know that you don't parse HTML with regex. Don't worry... I'm not planning on trying it.
BUT... that leads me to ask: how are HTML parsers coded (including the built-in functions of programming languages, like DOM parsers and PHP's strip_tags)? What mechanism do they employ to parse the (sometimes malformed) markup?
I found the source of one coded in JavaScript, and it actually uses regex to do the job:
// Regular Expressions for parsing tags and attributes
var startTag = /^<(\w+)((?:\s+\w+(?:\s*=\s*(?:(?:"[^"]*")|(?:'[^']*')|[^>\s]+))?)*)\s*(\/?)>/,
endTag = /^<\/(\w+)[^>]*>/,
attr = /(\w+)(?:\s*=\s*(?:(?:"((?:\\.|[^"])*)")|(?:'((?:\\.|[^'])*)')|([^>\s]+)))?/g;
Do they all do this? Is there a conventional, standard way to code an HTML parser?
I do not know that that style is a “normal” way to do things. It is better than most I’ve seen, but it’s still too close to what I refer to as a “naïve” approach in this answer. For one thing, it isn’t accounting for HTML comments getting in the way of things. There are also legal but somewhat obscure matters of entities it isn’t dealing with. But it’s HTML comments where most such approaches fall down.
A more natural way is to use a lexer to peel off tokens, more like the one shown in this answer’s script, then assemble those meaningfully. The lexer would be able to know about the HTML comments easily enough.
You could approach this with a full grammar, such as the one shown here for parsing an RFC 5322 mail address. That is the sort of approach I take in the second, “wizardly” solution in this answer. But even that is only a complete grammar for well-formed HTML, and I’m only interested in a few different sorts of tags. Those I define fully, but I don’t define valid fields for tags I’m unconcerned with.
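To make the token-peeling idea concrete, here is a rough C++ sketch (an illustration only; real HTML lexing has many more cases such as CDATA, doctypes, and raw-text elements). Note how the comment token is peeled off first, so markup inside a comment is never mistaken for a tag:

#include <iostream>
#include <string>

// Peel one token at a time off the front of the input: a comment,
// a tag, or a run of text.
void lex(const std::string& html) {
    std::size_t pos = 0;
    while (pos < html.size()) {
        if (html.compare(pos, 4, "<!--") == 0) {
            std::size_t end = html.find("-->", pos + 4);
            end = (end == std::string::npos) ? html.size() : end + 3;
            std::cout << "COMMENT: " << html.substr(pos, end - pos) << "\n";
            pos = end;
        } else if (html[pos] == '<') {
            std::size_t end = html.find('>', pos);
            end = (end == std::string::npos) ? html.size() : end + 1;
            std::cout << "TAG:     " << html.substr(pos, end - pos) << "\n";
            pos = end;
        } else {
            std::size_t end = html.find('<', pos);
            if (end == std::string::npos) end = html.size();
            std::cout << "TEXT:    " << html.substr(pos, end - pos) << "\n";
            pos = end;
        }
    }
}

int main() {
    // The <b> inside the comment is correctly reported as part of the comment.
    lex("<p>hello <!-- <b>not a tag</b> --> world</p>");
}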

Parse tree for SQL statements - precisely for "SELECT" statement

I am writing a hand-written recursive descent parser for the SQL SELECT statement in C++, and I need to know whether the parse tree it creates is correct or not. I want to check it, but I haven't found good sources for SQL parse trees. My approach is to write a function for each production, and in that function the result is added to the root tree. Can anyone help me? Thanks in advance.
I don't know how you'll go about verifying your code is correct, but if you're concerned about your understanding of the SQL grammar, then here is a website that lists BNF grammars for various dialects of SQL. You ought to be able to construct your parser in terms of these rules.
My company builds a lot of parsers, and we have the same problem. We recently finished a SQL 2011 parser based on the draft standard.
Pretty much, you decide if the parse tree is right by hand-inspecting it for many source code cases. This presumes that you can print the parse tree in a form you can easily inspect, which is easily accomplished by a recursive walk of the tree. [You have to already believe that your abstract syntax tree nodes correctly model what you intend to capture!] You choose the cases carefully to exercise different parts of the grammar (think "unit tests for grammars"). For a language as rich as SQL, this is a big job.
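For instance, if each production builds a node, a hand-inspectable dump can be as simple as a recursive walk like this (a sketch; the node type here is hypothetical):

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical parse-tree node; adapt to whatever your productions build.
struct Node {
    std::string label;                           // e.g. "select_stmt", "column_ref"
    std::vector<std::unique_ptr<Node>> children;
};

// Indented recursive dump, one node per line, for hand inspection.
void print(const Node& n, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << n.label << "\n";
    for (const auto& child : n.children)
        print(*child, depth + 1);
}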
You also need to validate that the parser works in general, and you do that by feeding it a lot of real code for the particular dialect of SQL you are handling. I typically try to find 100K-1M SLOC, and if the parser can't eat all of that, I still have work left to do. Once you get to that level, you sort of consider that your parser is OK and treat further errors as "maintenance issues".
While the following may not help you directly, it might hint at a direction in which you could head. I use a somewhat different approach, based on having extremely strong parsing machinery available. Our tool, the DMS Software Reengineering Toolkit, given a grammar, will produce ASTs automatically, and has built-in facilities to print such parse trees (in one form, as XML). The AST has sufficient information to regenerate ("prettyprint") the source text, and DMS has a built-in prettyprinter. So after hand-inspecting a variety of cases, what I do is take a large body of code and, for each file, parse it (getting no parse errors by virtue of the work done above), prettyprint the source, and reparse the source (expecting to get no errors). This is a strong hint that we haven't lost anything in the round trip.
We have a new tool available, the Smart Differencer, that compares the text of two programs to see if they are "the same" ignoring language layout rules. It works in essence by parsing the two files and comparing their parse trees, ignoring the formatting (line/column/escapes/radix/comments/whitespace). What we are starting to do is to parse the source code, prettyprint it, and then smart-diff the prettyprinted result against the original file. SmartDiff should say "no AST differences". This is a much stronger hint that we haven't lost anything. You can do pretty much the same if you are willing to compare your before-and-after printed parse trees.
This parser, based on pyparsing, might be helpful as a second SELECT parsing resource (although it is in Python, not C++, sorry).

How do you implement syntax highlighting?

I am embarking on some learning and I want to write my own syntax highlighting for files in C++.
Can anyone give me ideas on how to go about doing this?
To me it seems that when a file is opened:
1. It would need to be parsed to decide what type of source file it is; trusting the extension might not be fool-proof.
2. A way to know what keywords/commands apply to which language.
3. A way to decide what color each keyword/command gets.
I want to do this on OS X, using C++ or Objective-C.
Can anyone provide pointers on how I might get started with this?
Syntax highlighters typically don't go beyond lexical analysis, which means you don't have to parse the whole language into statements and declarations and expressions and whatnot. You only have to write a lexer, which is fairly easy with regular expressions. I recommend you start by learning regular expressions, if you haven't already. It'll take all of 30 minutes.
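For a sense of scale, here is a bare-bones keyword classifier in C++ (the keyword set and token classes are placeholders, not a real language definition):

#include <cctype>
#include <iostream>
#include <regex>
#include <set>
#include <string>

int main() {
    // Placeholder keyword set; a real highlighter would load one per language.
    const std::set<std::string> keywords = {"if", "else", "while", "return", "int"};
    const std::string code = "int x = 42; if (x) return x; else x = 0;";

    // Tokenize into words or single non-space characters.
    const std::regex token(R"((\w+)|(\S))");
    for (auto it = std::sregex_iterator(code.begin(), code.end(), token);
         it != std::sregex_iterator(); ++it) {
        const std::string t = it->str();
        if (keywords.count(t))
            std::cout << "[KEYWORD " << t << "] ";
        else if (std::isdigit(static_cast<unsigned char>(t[0])))
            std::cout << "[NUMBER " << t << "] ";
        else if (std::isalpha(static_cast<unsigned char>(t[0])) || t[0] == '_')
            std::cout << "[IDENT " << t << "] ";
        else
            std::cout << t << " ";
    }
    std::cout << "\n";
}

In a real editor you would emit color attributes instead of bracketed labels, but the classification step is the same.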
You may want to consider toying with Flex (the lexical analyzer generator; https://github.com/westes/flex) as a learning exercise. It should be quite easy to implement a basic syntax highlighter in Flex that outputs highlighted HTML or something.
In short, you would give Flex a set of regular expressions and what to do with matching text, and the generator will greedily match against your expressions. You can make your lexer transition among exclusive states (e.g. in and out of string literals, comments, etc.) as shown in the Flex FAQ. Here's a canonical example of a lexer for C written in Flex: http://www.lysator.liu.se/c/ANSI-C-grammar-l.html
Making an extensible syntax highlighter would be the next part of your journey. Although I am by no means a fan of XML, take a look at how Kate syntax highlighting files are defined, such as this one for C++. Your task would be to figure out how you want to define syntax highlighters, then make a program that uses those definitions to generate HTML or whatever you please.
You may want to look at how GeSHI implements highlighting, etc. In addition, it has a whole bunch of language packs that contain all the keywords you'll ever want.
Assuming that you are using Cocoa frameworks, you can use UTIs to determine the file type.
For an overview of the api:
http://developer.apple.com/mac/library/documentation/FileManagement/Conceptual/understanding_utis/understand_utis_intro/understand_utis_intro.html#//apple_ref/doc/uid/TP40001319-CH201-SW1
For a list of known UTIs:
http://developer.apple.com/mac/library/documentation/Miscellaneous/Reference/UTIRef/Articles/System-DeclaredUniformTypeIdentifiers.html#//apple_ref/doc/uid/TP40009259-SW1
The two keys you are probably most interested in would be kUTTypeObjectiveCPlusPlusSource and kUTTypeCPlusPlusHeader.
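For instance, a sketch using the Launch Services UTI functions (compile on OS X against the CoreServices framework; the extension-to-UTI lookup shown is my assumption of typical usage):

#include <CoreServices/CoreServices.h>
#include <cstdio>

int main() {
    // Map a filename extension to a UTI, then test conformance.
    CFStringRef uti = UTTypeCreatePreferredIdentifierForTag(
        kUTTagClassFilenameExtension, CFSTR("cpp"), NULL);

    if (uti != NULL) {
        if (UTTypeConformsTo(uti, CFSTR("public.c-plus-plus-source")))
            std::printf("Looks like C++ source.\n");
        CFRelease(uti);
    }
    return 0;
}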
For the highlighting you might find the information on this page helpful as it discusses syntax highlighting with an NSView and temporary attributes:
http://www.cocoadev.com/index.pl?ImplementSyntaxHighlightingUsingTemporaryAttributes
I think (1) isn't possible, since the only way to tell if a file is valid C++ is to run it through a C++ parser and see if it parses... but if you used that as your standard, you couldn't operate on code that doesn't compile because it is a work-in-progress, which you probably want to do. It's probably best just to trust the extension, as I don't think any other method will work better than that.
You can get a list of C++ keywords here: http://www.cppreference.com/wiki/keywords/start
The colors are up to you (or if you want, you can make them configurable and leave the choice to the user)

XML Representation of C++ Objects

I'm trying to create a message validation program and would like to create easily modifiable rules that apply to certain message types. Due to the risk of the rules changing I've decided to define these validation rules external to the object code.
I've created a basic interface that defines a rule and am wondering what the best way to store this simple data would be. I was leaning towards XML but it seems like it might be too heavy.
Each rule would only need a very small set of data (i.e. type of rule, value, applicable mask, etc).
Does anyone know of a good resource that I could look at that performs similar functionality? I'd rather not dig too deep into XML on a problem that seems to barely need a subset of the functionality I see in most of the examples I bump into.
If I can find a concise example to examine I would be able to decide on whether or not to just go with a flat file.
Thanks in advance for your input!
Personally, for small, easily modifiable XML, I find TinyXML to be an excellent library. You can make each class understand its own format, so your object hierarchy is represented directly in the XML.
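For example, a rules file could be read with TinyXML roughly like this (the element and attribute names are invented for illustration):

#include <cstdio>
#include "tinyxml.h"

int main() {
    // Hypothetical rules.xml: <rules><rule type="range" value="10"/></rules>
    TiXmlDocument doc("rules.xml");
    if (!doc.LoadFile()) return 1;

    TiXmlElement* root = doc.RootElement();  // <rules>
    for (TiXmlElement* rule = root->FirstChildElement("rule");
         rule != NULL; rule = rule->NextSiblingElement("rule")) {
        const char* type  = rule->Attribute("type");
        const char* value = rule->Attribute("value");
        std::printf("rule: type=%s value=%s\n",
                    type ? type : "?", value ? value : "?");
    }
    return 0;
}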
However, if you don't think you need XML, you might want to go with a lighter format like YAML. I find it is much easier to understand the underlying data, modify it, and extend functionality.
(Also, boost::serialization has an XML archive, but it isn't what I'd call easily modifiable)
The simplest approach is to use a flat file designed to be easy to parse using the C++ >> operator. Just simple tokens separated by whitespace.
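For example (a sketch; the rule fields and file name are invented for illustration):

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// One validation rule per line: "type value mask", whitespace-separated.
struct Rule {
    std::string type;
    int value;
    std::string mask;
};

int main() {
    std::ifstream in("rules.txt");  // hypothetical file name
    std::vector<Rule> rules;

    // operator>> skips whitespace, so the format stays trivial to parse.
    Rule r;
    while (in >> r.type >> r.value >> r.mask)
        rules.push_back(r);

    for (const auto& rule : rules)
        std::cout << rule.type << " " << rule.value << " " << rule.mask << "\n";
}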
Well, if you want your rules to be human-readable, XML is the way to go, and you can interface it nicely with C++ using Xerces. If you want performance and/or size, you could save the data as binary using simple structs.
Another way to implement this would be to define your rules in XML Schema and then have an XML Data Binding tool generate the corresponding C++ object model along with the XML parsing and serialization code. One such tool (that I happen to be working on) is CodeSynthesis XSD:
http://www.codesynthesis.com/products/xsd/
For a 2-minute overview of the idea, see the "Hello World" example in the C++/Tree mapping documentation.