Programmatically parse and edit C++ source files

I want to programmatically parse and edit C++ source files. I need to change/add code in certain sections of code (i.e., in functions, class blocks, etc.). I would also (preferably) like to be able to get comments as well.
Part of what I want to do can be explained by the following piece of code:
CPlusPlusSourceParser cp = new CPlusPlusSourceParser("x.cpp"); // Create C++ source parser object
CPlusPlusSourceFunction[] funcs = cp.getFunctions(); // Get all the functions
for (int i = 0; i < funcs.length; i++) { // Loop through all functions
    funcs[i].append(/* … code I want to append … */); // Append some code to the function
}
cp.save(); // Save new source
cp.close(); // Close file
How can I do that?
I'd like to be able to do this preferably in Java, C++, Perl, Python or C#. However, I am open to other language APIs.

This is similar to AST from C code
If you're comfortable with Java, ANTLR can easily parse your code into an abstract syntax tree and then apply transformations to that tree. A default AST transform is to simply print out the original source.

You can use any parser generator tool to generate a C++ parser for you, but first you have to get the CFG (context-free grammar) for C++. Check ANTLR.
Edit:
Also, ANTLR supports a lot of target languages.

You need a working grammar and parser for C++, which is not easy to come by, as one can't be constructed with most parser generators out there. But once you have a parser, you can take the abstract syntax tree of the program and alter it in nearly any way you want.

The Mozilla project has a tool that does this.
The Clang static analyzer is now somewhat famous for doing a good job analyzing and rewriting C++. Stroustrup wrote a paper about a research project at Texas A&M, but I don't think it's been released.

A robust C++ parser is available with our DMS Software Reengineering Toolkit. It parses a variety of C++ dialects including ANSI, GNU 3/4, MSVS6, MS Visual Studio 2005 and managed C++.
It builds ASTs and symbol tables (the latter is way harder than you might think). You can navigate the ASTs, transform them into different valid C++ programs, and regenerate code including comments.

In a C# (or general .NET) approach, you might be able to get some use out of the C++/CLI CodeDOM provider. Having not used the C++ version of this type, I don't know how well it would handle template-heavy code.

Have a look at the Doxygen project. It's an open-source project for parsing and documenting several programming languages, C++ included. I believe using this project's lexer will get you more than half the way.

Related

What's the easiest way to parse C++ for code generation?

I would like to generate some wrapper code based on C++ types. I basically would like to parse some C++ headers, get the types, classes and their fields defined in the headers, and generate some code based on them.
What would be the easiest way to parse C++ and get type information? I thought about using the Clang C++ parser, but I couldn't make a working hello world in a couple of hours, so I gave up for the time being.
Could you advise any other way to parse C++, or if Clang is the easiest solution, could you point me to a simple getting started guide to be able to parse C++ types with it?
(basically any technology would be ok, C++, Java, C#, etc., this would be part of a command line tool)
Clang is definitely the easiest option. Consider using the cindex Python bindings; they're pretty straightforward. Alternatively, you could get an older version of Clang, which still featured an XML backend.
EDIT: the link above seems to be down, so here is a link to the google cache of it.
Another link suggested in the comments: http://www.altdevblogaday.com/2014/03/05/implementing-a-code-generator-with-libclang/
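If you would rather stay in C++ than use the Python bindings, the same functionality is exposed through libclang's C API. Below is a minimal sketch, not a full tool: the file name x.cpp and the empty compiler-argument list are placeholders, and you'd link against libclang (e.g. -lclang). It lists the functions and methods declared in the main file:

// Sketch: enumerate function/method declarations with libclang's C API.
#include <clang-c/Index.h>
#include <cstdio>

static CXChildVisitResult listFunctions(CXCursor cursor, CXCursor /*parent*/,
                                        CXClientData /*data*/) {
    // Skip declarations pulled in from #included headers.
    if (!clang_Location_isFromMainFile(clang_getCursorLocation(cursor)))
        return CXChildVisit_Continue;
    CXCursorKind kind = clang_getCursorKind(cursor);
    if (kind == CXCursor_FunctionDecl || kind == CXCursor_CXXMethod) {
        CXString name = clang_getCursorSpelling(cursor);
        unsigned line = 0;
        clang_getSpellingLocation(clang_getCursorLocation(cursor),
                                  nullptr, &line, nullptr, nullptr);
        std::printf("function %s at line %u\n", clang_getCString(name), line);
        clang_disposeString(name);
    }
    return CXChildVisit_Recurse; // descend into namespaces and classes
}

int main() {
    CXIndex index = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, "x.cpp", nullptr, 0, nullptr, 0, CXTranslationUnit_None);
    if (tu) {
        clang_visitChildren(clang_getTranslationUnitCursor(tu),
                            listFunctions, nullptr);
        clang_disposeTranslationUnit(tu);
    }
    clang_disposeIndex(index);
}

In a real tool you would pass the project's include paths and defines as command-line arguments to clang_parseTranslationUnit so that templates and macros resolve correctly.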
Unless your objective is to verify correctness, or the code involves advanced template stuff, consider using the XML output of Doxygen or GCC_XML. Alternatively, consider Clang, even if that's what you found too complex. Note that for Clang it might be best to work in *nix-land.
If your generation tool is in Java, consider using the parser from the Eclipse CDT.
My set of dependencies is:
com.ibm.icu_4.4.2.v20110823.jar
org.eclipse.cdt.core_5.3.2.201202111925.jar
org.eclipse.equinox.common_3.6.0.v20110523.jar
(These are from an old Eclipse version, because I have a dependency on old Java class versions, but taking them from the latest CDT will do.)
Parsing involves:
FileContent reader = FileContent.createForExternalFileLocation(fullPath); // wrap the source file
IScannerInfo info = new ScannerInfo(definedSymbols, includePaths);        // predefined macros and include paths
return GPPLanguage.getDefault().getASTTranslationUnit(reader, info, FilesProvider.getInstance(), null, 0, log);
This returns an IASTTranslationUnit that can be accessed through a Visitor pattern (ASTVisitor).
I cannot comment on the accuracy of the parsing in corner scenarios, because so far I've been generating code based on simple C++ structure definitions.

parser generator that generates stand-alone C++ code

Is there a LALR parser generator that produces stand-alone C++ code? I am hoping that it would generate two files named something like "Parser.cpp" and "Parser.hpp," and the generated parser is implemented in a single class (that I can wrap in whatever namespace) that I can use for my parsing needs.
I want to use it for fun (i.e. small personal projects), and I'd like the output to be stand-alone (without any headers) so that I know I can compile it wherever I have a C++ compiler.
The search so far:
I've looked at flex/bison, but AFAIK they both require special headers and libraries. I've also looked at ANTLR a little bit, but it is not obvious to me that it can generate stand-alone C++ code. If someone can confirm that it can, then I might look more into it.
GOLD Parser (Bart Kiers mentioned the list on Wikipedia) has support for the C and C++ languages. It does not generate a completely self-contained C/C++ source code file; all it does is generate lexer/parser tables which can be consumed by the "parsing engine".
To accomplish your task (or something similar) I did the following:
Prepare your LALR grammar in Gold's format
Generate parsing tables (one binary file)
Use an old trick to convert the binary file into a header file along the lines of the following (a converter sketch follows these steps):
unsigned char ParseTable[] = { ... };
Modify the loader from the "parsing engine" sources (or use the C version which supports in-memory loading, as I remember)
Combine the sources for the GPEngine (if it is a C++ version) into the .h/.cpp pair.
Append the ParseTable to .cpp
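The "old trick" above can be a few lines of stand-alone C++. This is only a rough sketch; the bin2header name and the ParseTable identifier are illustrative, not anything GOLD itself provides:

// Rough sketch: dump a binary parse-table file as a C array in a header.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: bin2header <table-file> <header-file>\n";
        return 1;
    }
    std::ifstream in(argv[1], std::ios::binary);
    std::ofstream out(argv[2]);
    std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                     std::istreambuf_iterator<char>());
    out << "static unsigned char ParseTable[] = {\n";
    for (std::size_t i = 0; i < bytes.size(); ++i) {
        out << static_cast<unsigned>(bytes[i])
            << (i + 1 == bytes.size() ? "" : ",")
            << (i % 16 == 15 ? "\n" : " ");
    }
    out << "\n};\nstatic unsigned long ParseTableSize = " << bytes.size() << ";\n";
}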
Sure, it's not that straightforward, but all the steps can in principle be done within a single "combine" script which can be used with a number of grammars.
I guess the major drawback is the fact that GOLD is closed-source and Windows-only (meaning that to produce the parsing tables you have to use a Windows machine).
ANTLR can generate C++ code, although IMHO the support for C++ is a bit weak; it is more like C code. Still, it is a good environment to work with, and ANTLRWorks gives you a graphical representation of your syntax tree.
The output from flex+bison consists of two .c files and one .h file. These are completely stand-alone, in that they are all you need to compile into your application to make use of the parser. There are no additional libraries or headers needed (besides the standard C ones).
Unless I've misunderstood your requirements, you definitely can do what you want with flex+bison.
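To illustrate how self-contained the result is, the driver can be a few lines. This is only a sketch, assuming the default entry points flex and bison generate (yyparse() and the scanner's global yyin) and that the generated .c files are compiled as C and linked in:

// Minimal driver sketch; yyparse()/yyin come from the generated C files.
#include <cstdio>

extern "C" {
    int yyparse(void);   // parser entry point generated by bison
    extern FILE *yyin;   // input stream defined by the flex-generated scanner
}

int main(int argc, char* argv[]) {
    yyin = (argc > 1) ? std::fopen(argv[1], "r") : stdin;
    if (!yyin)
        return 1;
    return yyparse();    // 0 means the input parsed successfully
}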

How to write a C++ code generator that takes C++ code as input?

We have a CORBA implementation that autogenerates Java and C++ stubs for us. Because the CORBA-generated code is difficult to work with, we need to write wrappers/helpers around the CORBA code. So we have a 2-step code generation process (yes, I know this is bad):
CORBA IDL -> annoying CORBA-generated code -> useful wrappers/helper functions
Using Java's reflection, I can inspect the CORBA-generated code and use that to generate additional code. However, because C++ doesn't have reflection, I am not sure how to do this on the C++ side. Should I use a C++ parser? C++ templates?
TLDR: How to generate C++ code using generated C++ code as input?
Have you considered taking a step back and using the IDL as the source for a custom code generator? Probably you have some wrapper code that hides things like duplicate, var, ptr, etc. We have a Ruby-based CORBA IDL compiler that currently generates Ruby and C++ code. That could be extended with a custom generator; see https://www.remedy.nl for RIDL and R2CORBA.
Another option would be to check out the IDL-to-C++11 language mapping; more details at https://www.taox11.org. This newer language mapping is much easier to use and works with standard types and STL containers.
GCC XML could help in recovering the interface.
I'm using it to write a Prolog foreign interface for OpenGL and the Horde3D rendering engine.
The interfaces I'm interested in are limited to C, but GCC XML handles C++ as well.
GCC XML parses the source code interface and emits an XML AST. Then, with an XML library, it's fairly easy to extract the requested info. One nuance is the loss of macro symbols: AFAIK just the values survive the parse. As an example, here is (part of) the Prolog code used to generate the FLI:
make_funcs(NameChange, Xml, FileName, Id) :-
    index_id(Xml, Indexed),
    findall(Name:Returns:ArgTypes,
            (   xpath(Xml, //'Function'(#file = Id, #name = Name, #returns = ReturnsId), Function),
                typeid_indexed(Indexed, ReturnsId, Returns),
                findall(Arg:Type,
                        (   xpath(Function, //'Argument'(#name = Arg, #type = TypeId), _),
                            typeid_indexed(Indexed, TypeId, Type)
                        ), ArgTypes)
            ),
            AllFuncs),
    length(AllFuncs, LAllFuncs),
    writeln(FileName:LAllFuncs),
    fat('prolog/h3dplfi/~s.cpp', [FileName], Cpp),
    open(Cpp, write, Stream),
    maplist(\X^((X = K-A -> true ; K = X, A = []), format(Stream, K, A), nl(Stream)),
            ['#include "swi-uty.h"',
             '#include <~#>'-[call(NameChange, FileName)]
            ]),
    forall(member(F, AllFuncs), make_func(Stream, F)),
    close(Stream).
xpath (as you might guess) is the SWI-Prolog library that makes the analysis simpler...
If you want to reliably process C++ source code, you need a program transformation tool that understands C++ syntax and semantics, can parse C++ code, transform the parsed representation, and regenerate valid C++ code (including the original comments). Such a tool provides in effect arbitrary metaprogramming by operating outside the language, so it is not limited by the "reflection" or "metaprogramming" facilities built into the language.
Our DMS Software Reengineering Toolkit with its C++ Front End can do this.
It has been used on a number of automated C++ transformation tasks, two of which were (coincidentally) related to CORBA-based activities. The first involved reshaping interfaces for a proprietary distributed system into CORBA-compatible facets. The second reshaped a large CORBA-based application in the face of IDL changes; such changes in effect cause code to be moved around and cause signature changes. You can find technical papers at the web site that describe the first activity; the second was done for a major defense contractor.
Take a look at the Clang compiler; aside from being a standalone compiler, it is also intended to be used as a library in situations like the one you describe. It will provide you with a parse tree on which you can do your analysis and transformations.

Getting AST for C++?

I'm looking to get an AST for C++ that I can then parse with an external program. What programs are out there that are good for generating an AST for C++? I don't care what language it is implemented in or the output format (so long as it is readily parseable).
My overall goal is to transform a C++ unit test bed to its corresponding C# wrapper test bed.
You can use Clang, and especially libclang, to parse C++ code. It's a very high-quality, hand-written library for lexing, parsing and compiling C++ code, and it can also generate an AST.
Clang also supports C, Objective-C and Objective-C++. Clang itself is written in C++.
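As a rough illustration of how little code a textual AST dump takes with libclang (test.cpp and the empty argument list below are placeholders; a real tool would pass the proper compiler flags), this sketch prints each node's kind and spelling with indentation:

// Sketch: recursively print the libclang AST (cursor kind and spelling).
#include <clang-c/Index.h>
#include <cstdint>
#include <cstdio>

static CXChildVisitResult dump(CXCursor cursor, CXCursor /*parent*/,
                               CXClientData data) {
    int depth = static_cast<int>(reinterpret_cast<std::intptr_t>(data));
    CXString kind = clang_getCursorKindSpelling(clang_getCursorKind(cursor));
    CXString name = clang_getCursorSpelling(cursor);
    std::printf("%*s%s %s\n", depth * 2, "",
                clang_getCString(kind), clang_getCString(name));
    clang_disposeString(kind);
    clang_disposeString(name);
    clang_visitChildren(cursor, dump,
        reinterpret_cast<CXClientData>(static_cast<std::intptr_t>(depth + 1)));
    return CXChildVisit_Continue; // children were already visited above
}

int main() {
    CXIndex index = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, "test.cpp", nullptr, 0, nullptr, 0, CXTranslationUnit_None);
    if (tu) {
        clang_visitChildren(clang_getTranslationUnitCursor(tu), dump, nullptr);
        clang_disposeTranslationUnit(tu);
    }
    clang_disposeIndex(index);
}

The indented text this produces is easy to post-process, which matches the goal of feeding the AST to an external program.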
Actually, GCC will emit the AST at any stage in the pipeline that interests you, including the GENERIC and GIMPLE forms. Check out the (plethora of) command-line switches beginning with -fdump-, e.g. -fdump-tree-original-raw.
This is one of the easier (…) ways to work, as you can use it on arbitrary code; just pass the appropriate CFLAGS or CXXFLAGS into most Makefiles:
make CXXFLAGS=-fdump-tree-original-raw all
… and you get “the works.”
Updated: Saw this neat little graphing system based on GCC AST's while checking my flag name :-) Google FTW.
http://digitocero.com/en/blog/exporting-and-visualizing-gccs-abstract-syntax-tree-ast
Our C++ Front End, built on top of our DMS Software Reengineering Toolkit can parse a variety of C++ dialects (including C++11 and ObjectiveC) and export that AST as an XML document with a command line switch. See example ASTs produced by this front end.
As a practical matter, you will need more than the AST; you can't really do much with C++ (or any other modern language) without an understanding of the meaning and scope of each identifier. For C++, meaning/scope are particularly ugly. The DMS C++ front end handles all of that; it can build full symbol tables associating identifiers with explicit C++ types. That information isn't dumpable in XML with a command-line switch, but it is "technically easy" to code logic in DMS to walk the symbol table and spit out XML. (There is an option to dump this information, just not in XML format.)
I caution you against the idea of manipulating (or even just analyzing) the XML. First, XSLT isn't a particularly good way to understand the meaning of the ASTs, let alone transform the AST, because the ASTs represent context sensitive language structures (that's why you want [nee MUST HAVE] the symbol table). You can read the XML into a dom-like tree if you like and write your own procedural code to manipulate it. But source-to-source transformations are an easier way; you can write your transformations using C++ notation rather than buckets of code goo climbing over a tree data structure.
You'll have another problem: how to generate valid C++ code from the transformed XML. If you don't mind spitting out raw text, you can solve this problem in purely ad hoc ways, at the price of having no guarantee other than sweat that the generated code is syntactically valid. If you want to generate a C++ representation of your final result as an AST, and regenerate valid text from that, you'll need a prettyprinter, which is not technically hard but is still a lot of work to build, especially for a language as big as C++.
Finally, the reason that tools like DMS exist is to provide the vast amount of infrastructure it takes to process/manipulate complex structure such as C++ ASTs. (parse, analyse, transform, prettyprint). You can try to replicate all this machinery yourself, but this is usually a poor time/cost/productivity tradeoff. The claim is it is best to stay within the tool ecosystem rather than escape it and build bad versions of it yourself. If you haven't done this before, you'll find this out painfully.
FWIW, DMS has been used to carry out massive analysis and transformations on C++ source code. See Publications on DMS and check the papers by Akers on "Re-engineering C++ Component Models".
Clang is based on the same kind of philosophy; there's an ecosystem of tools.
YMMV, but I'd be surprised.

Is there a better (more modern) tool than lex/flex for generating a tokenizer for C++?

I recently added source file parsing to an existing tool that generated output files from complex command line arguments.
The command line arguments got to be so complex that we started allowing them to be supplied as a file that was parsed as if it was a very large command line, but the syntax was still awkward. So I added the ability to parse a source file using a more reasonable syntax.
I used flex 2.5.4 for Windows to generate the tokenizer for this custom source file format, and it worked. But I hated the code: global variables, a weird naming convention, and the C++ code it generated was awful. The existing code generation backend was glued to the output of flex; I don't use yacc or bison.
I'm about to dive back into that code, and I'd like to use a better/more modern tool. Does anyone know of something that:
Runs in the Windows command prompt (Visual Studio integration is OK, but I use makefiles to build)
Generates a properly encapsulated C++ tokenizer (no global variables)
Uses regular expressions for describing the tokenizing rules (compatibility with lex syntax is a plus)
Does not force me to use the c-runtime (or fake it) for file reading. (parse from memory)
Warns me when my rules force the tokenizer to backtrack (or fixes it automatically)
Gives me full control over variable and method names (so I can conform to my existing naming convention)
Allows me to link multiple parsers into a single .exe without name collisions
Can generate a UNICODE (16bit UCS-2) parser if I want it to
Is NOT an integrated tokenizer + parser-generator (I want a lex replacement, not a lex+yacc replacement)
I could probably live with a tool that just generated the tokenizing tables if that was the only thing available.
Ragel (http://www.complang.org/ragel/) fits most of your requirements.
It runs on Windows
It doesn't declare the variables, so you can put them inside a class or inside a function as you like.
It has nice tools for analyzing regular expressions to see when they would backtrack. (I don't know about this very much, since I never use syntax in Ragel that would create a backtracking parser.)
Variable names can't be changed.
Table names are prefixed with the machine name, and they're declared "const static", so you can put more than one in the same file and have more than one with the same name in a single program (as long as they're in different files).
You can declare the variables as any integer type, including UChar (or whatever UTF-16 type you prefer). It doesn't automatically handle surrogate pairs, though. It doesn't have special character classes for Unicode either (I think).
It only does regular expressions... has no bison/yacc features.
The code it generates interferes very little with a program. The code is also incredibly fast, and the Ragel syntax is more flexible and readable than anything I've ever seen. It's a rock solid piece of software. It can generate a table-driven parser or a goto-driven parser.
Flex also has a C++ output option.
The result is a set of classes that do the parsing.
Just add the following to the head of your lex file:
%option c++
%option yyclass="Lexer"
Then in your source it is:
std::fstream file("config");
Lexer lexer(&file);
while (int token = lexer.yylex())
{
    // handle each token value here
}
Boost.Spirit.Qi (parser-tokenizer) or Boost.Spirit.Lex (tokenizer only). I absolutely love Qi, and Lex is not bad either, but I just tend to take Qi for my parsing needs...
The only real drawback with Qi tends to be an increase in compile time, and it also runs slightly slower than hand-written parsing code. It is generally much faster than parsing with regexes, though.
http://www.boost.org/doc/libs/1_41_0/libs/spirit/doc/html/index.html
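To give a feel for the Qi "inline grammar" style, here is a minimal sketch along the lines of the usual Spirit tutorials; the comma-separated-integer grammar is just a toy example, not anything from the question:

// Toy Qi sketch: parse a comma-separated list of integers from a string.
#include <boost/spirit/include/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    namespace qi = boost::spirit::qi;
    namespace ascii = boost::spirit::ascii;

    std::string input = "1, 2, 3, 4";
    std::vector<int> values;

    std::string::iterator first = input.begin();
    bool ok = qi::phrase_parse(first, input.end(),
                               qi::int_ % ',',   // grammar: ints separated by commas
                               ascii::space,     // skipper: ignore whitespace
                               values);          // parsed numbers land here

    if (ok && first == input.end())
        std::cout << "parsed " << values.size() << " integers\n";
}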
There are two tools that come to mind, although you would need to find out for yourself which would be suitable: ANTLR and GOLD Parser. There are language bindings available for both tools with which they can be plugged into the C++ runtime environment.
Boost.Spirit and the Yard parser come to mind. Note that the approach of using lexer generators has been somewhat supplanted by C++ inner DSLs (domain-specific languages) for specifying tokens, simply because the grammar becomes part of your code, specified by following a series of rules, without using an external utility.