What's the difference between SyntaxGenerator and SyntaxFactory? - roslyn

To be more specific: Microsoft.CodeAnalysis.Editing.SyntaxGenerator and Microsoft.CodeAnalysis.CSharp.SyntaxFactory.
1) My understanding is that they are both part of Roslyn, and both used for generating code, is that correct? If yes, are there others in that category?
2) Is one better and/or newer than the other? I only suspect that because I found this.

Each SyntaxFactory (there's one for VB and one for C#) is tightly coupled to its language. In a sense the SyntaxFactory is a "lower level" API which you can use to create syntax trees. You can control every syntax node, token and trivia in your tree when using a SyntaxFactory. This comes at a cost: It's harder to read and understand source code that uses the SyntaxFactory because it's very verbose.
The SyntaxGenerator provides a language agnostic API which can be used to create pieces of either C# or VB .NET syntax. It provides an API that is easier to use and less verbose than the SyntaxFactory. Note that to create a SyntaxGenerator we need either a Document, Project, or Workspace.
If you read through the implementations of the SyntaxGenerator you'll see that they are using the SyntaxFactory under the hood.
We can compare the two APIs when generating the following code:
void Method()
{
}
SyntaxFactory
var method = SyntaxFactory.MethodDeclaration(
    SyntaxFactory.PredefinedType(
        SyntaxFactory.Token(SyntaxKind.VoidKeyword)),
    SyntaxFactory.Identifier("Method"))
    .WithBody(
        SyntaxFactory.Block());
SyntaxGenerator
SyntaxGenerator generator = SyntaxGenerator.GetGenerator(...); //We need a Document, Project or Workspace
var method = generator.MethodDeclaration("Method");
The generator is just a cleaner way to build up syntax trees.


How to detect API breaks in C++

How can I detect incompatible API changes in C++? (not ABI but API changes)
Where compatible changes are things that can't break compilation of code using the API like:
parameter(s) added to method with default argument
methods added to a class
members added to a class
classes added
order of members or methods changed
comments/documentation changes
And incompatible changes are things that potentially break compilation of code using the API like:
removed arguments, (public/protected) methods, members, classes
type changes of arguments or members
name changes of public/protected members or methods
classes moved from one header to another
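For concreteness, here is a hedged sketch with a hypothetical Connection API; each namespace stands in for a different release of the same header:

#include <string>

// Hypothetical API used only to illustrate the two categories above.
namespace v1 {
    class Connection {
    public:
        void open(const std::string& host);
    };
}

// Compatible change: a parameter was added, but it has a default argument,
// so existing calls like conn.open("example.org") still compile.
namespace v2 {
    class Connection {
    public:
        void open(const std::string& host, int port = 80);
    };
}

// Incompatible change: the new parameter has no default (and the old
// signature is gone), so conn.open("example.org") no longer compiles.
namespace v3 {
    class Connection {
    public:
        void open(const std::string& host, int port);
    };
}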
The OP is right to think that C++ parsing is probably necessary, and likely deep reasoning, too.
I think the way to pose the question is: for a particular set of uses of an API in an existing application, does changing the API change or break the application?
If you don't limit yourself to a specific set of uses, almost any change to an API will change its semantics. Otherwise, why would you make the changes at all (modulo refactoring)? And if you use the full set of API features in the application, then its semantics must change somehow too.
With a specific set of uses, one can arguably determine which properties of the API might affect the specific uses, and determine if in fact they do. Ultimately you have to parse the original code accurately to determine the specific set of uses and the context in which they are used. You also have to determine the semantic properties on which the existing application depends, including the properties provided by the legacy API. Finally, you need to determine the properties defined by the new API, and verify that they still support the needs of the application.
In general, you need a theorem prover over the program properties to check this. And, while theorem proving technology has advanced significantly over the last 50 years, AFAIK said technology isn't strong enough to take arbitrary program properties and prove them, let alone overcome the problem of reasoning about arbitrarily complex programs.
Consider:
// my application
int x = 0;
int y = foo(x);     // the API is supposed to keep this small, so...
if (y > 3) fail();  // ...this shouldn't happen
exit();

// my legacy API
int foo(int x) { return x + 1; }
Now imagine the API is changed to:
// my new API
int foo(int x) { return x+2; }
The application still functions correctly.
How about:
// my new API
int foo(int x) { return TuringMachine(x); }
How are we going to prove that TuringMachine(x) produces a value < 3? If we can't do this for such tiny programs, how are we going to do it for ones that we write in practice?
Now, you might be able to limit the set of changes you will consider to simply "syntactic" operations, such as "move method", "add parameter with initial value", etc. You'll still need to parse the original program and the modified APIs, and check that the syntactic properties imply semantic properties that don't damage the original program. You'll likely need control and dataflow analysis, alias analysis to worry about pointers, etc., and at best the tool will be able to tell, for a limited number of cases, when no change has occurred.
I'm sure there are research papers on this topic. A quick check at scholar.google.com didn't find anything obvious.
See "Source Compatibility" tab of the report of the ABICC tool. Most of the mentioned API changes and API breaks are detected by this tool.
There are two approaches to use the tool. Original via analysis of header files and a new via analysis of the library debug info ([1]). Use the second one. It's more reliable and simple.
You can find some report examples here: http://abi-laboratory.pro/tracker/
Tutorial: https://sourceware.org/glibc/wiki/Testing/ABI_checker#Usage

framework/library for property-tree-like data structure with generic get/set-implementation?

I'm looking for a data structure which behaves similar to boost::property_tree but (optionally) leaves the get/set implementation for each value item to the developer.
You should be able to do something like this:
std::function<int(void)> f_foo = ...;
my_property_tree tree;
tree.register<int>("some.path.to.key", f_foo);
auto v1 = tree.get<int>("some.path.to.key"); // <-- calls f_foo
auto v2 = tree.get<int>("some.other.path"); // <-- some fallback or throws exception
I guess you could abuse property_tree for this but I haven't looked into the implementation yet and I would have a bad feeling about this unless I knew that this is an intended use case.
Writing a class that handles requests like val = tree.get("some.path.to.key") by calling a provided function doesn't look too hard in the first place but I can imagine a lot of special cases which would make this quite a bulky library.
Some extra features might be:
subtree-handling: not only handle terminal keys but forward certain subtrees to separate implementations. E.g.
tree.register("some.path.config", some_handler);
// calls some_handler.get<int>("network.hostname")
v = tree.get<int>("some.path.config.network.hostname");
search among values / keys
automatic type casting (like in boost::property_tree)
"path overloading", e.g. defaulting to a property_tree-implementation for paths without registered callback.
Is there a library that comes close to what I'm looking for? Has anyone made experiences with using boost::property_tree for this purpose? (E.g. by subclassing or putting special objects into the tree as described here)
After years of coding my own container classes I ended up just adopting QVariantMap. It pretty much behaves like (and is as flexible as) Python. Just one interface. Not for performance-critical code, though.
If you care to know, I really caved in for Qt as my de facto STL because:
Industry standard - used even in avionics and satellite software
It has been around for decades with little interface change (think about long term support)
It has excellent performance, awesome documentation and enormous user base.
Extensive feature set, way beyond the STL
Would an std::map do the job you are interested in?
Have you tried this approach?
I don't quite understand what you are trying to do. So please provide a domain example.
Cheers.
I have some home-cooked code on GitHub that lets you register custom callbacks for each type. It is quite basic and still missing most of the features you would like to have. I'm working on the second version, though. I'm finishing a helper structure that will do most of the job of making callbacks. Tell me if you're interested. Also, you could implement some of those features yourself, as the code to register callbacks is already done. It shouldn't be so difficult.
Using only provided data structures:
First, getters and setters are not native features in C++; you have to call a method one way or another. To get such behaviour you can overload the assignment operator. I assume you also want to store POD data in your data structure.
So without knowing the type of the data you're "get"ting, the only option I can think of is to use boost::variant. But still, you have some overloading to do, and you need at least one assignment.
You can check out the documentation. It's pretty straightforward and easy to understand.
http://www.boost.org/doc/libs/1_61_0/doc/html/variant/tutorial.html
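For illustration, here is a minimal sketch of the tree from the question; the class and member names are made up, it uses std::any (C++17) for type erasure instead of boost::variant, and it leaves out subtree forwarding and fallback handling:

#include <any>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical minimal version of the tree from the question: a getter is
// registered per dotted path and invoked lazily by get<T>().
class callback_tree {
public:
    template <class T>
    void register_getter(const std::string& path, std::function<T()> f) {
        getters_[path] = [f]() -> std::any { return f(); };
    }

    template <class T>
    T get(const std::string& path) const {
        auto it = getters_.find(path);
        if (it == getters_.end())
            throw std::out_of_range("no getter registered for " + path);
        return std::any_cast<T>(it->second()); // throws std::bad_any_cast on a type mismatch
    }

private:
    std::map<std::string, std::function<std::any()>> getters_;
};

// Usage, mirroring the question:
//   callback_tree tree;
//   tree.register_getter<int>("some.path.to.key", [] { return 42; });
//   int v = tree.get<int>("some.path.to.key"); // calls the lambda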
Making your own data structures:
Alternatively, as Dani mentioned, you can come up with your own implementation and keep a register of overloaded methods and so on.
Best

Options for parsing/processing C++ files

So I need to parse some relatively simple C++ files with annotations and generate additional source files from them.
As an example, I may have something like this:
//# service
struct MyService
{
    int getVal() const;
};
I will need to find the //# service annotation, and get a description of the structure that follows it.
I am looking at possibly leveraging LLVM/Clang since it seems to have library support for embedding compiler/parsing functionality in third-party applications. But I'm really pretty clueless as far as parsing source code goes, so I'm not sure what exactly I would need to look for, or where to start.
I understand that ASTs are at the core of language representations, and there is library support for generating an AST from source files in Clang. But comments would not really be part of an AST, right? So what would be a good way of finding the representation of a structure that follows a specific comment annotation?
I'm not too worried about handling cases where the annotation would appear in an inappropriate place as it will only be used to parse C++ files that are specifically written for this application. But of course the more robust I can make it, the better.
One way I've been doing this is annotating identifiers of:
classes
base classes
class members
enumerations
enumerators
E.g.:
class /* #ann-class */ MyClass
    : /* #ann-base-class */ MyBaseClass
{
    int /* #ann-member */ member_;
};
Such annotation makes it easy to write a python or perl script that reads the header line by line and extracts the annotation and the associated identifier.
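For illustration, a minimal sketch of such a scanner in C++ rather than a script (hypothetical; it only recognizes the single-line /* #ann-... */ identifier form shown above):

#include <fstream>
#include <iostream>
#include <regex>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: scan <header>\n"; return 1; }
    std::ifstream in(argv[1]);
    // capture 1: the annotation name, capture 2: the identifier that follows it
    std::regex ann(R"(/\*\s*#(ann-[\w-]+)\s*\*/\s*(\w+))");
    std::string line;
    while (std::getline(in, line)) {
        for (std::sregex_iterator it(line.begin(), line.end(), ann), end; it != end; ++it)
            std::cout << (*it)[1] << " -> " << (*it)[2] << "\n";
    }
}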
The annotation and the associated identifier make it possible to generate C++ reflection in the form of function templates that traverse objects, passing base classes and members to a functor, e.g.:
template<class Functor>
void reflect(MyClass& obj, Functor f) {
    f.on_object_start(obj);
    f.on_base_subobject(static_cast<MyBaseClass&>(obj));
    f.on_member(obj.member_);
    f.on_object_end(obj);
}
It is also handy to generate numeric ids (an enumeration) for each base class and member and pass those to the functor, e.g.:
f.on_base_subobject(static_cast<MyBaseClass&>(obj), BaseClassIndex<MyClass>::MyBaseClass);
f.on_member(obj.member_, MemberIndex<MyClass>::member_);
Such reflection code allows you to write functors that serialize and de-serialize any object type to/from a number of different formats. Functors use function overloading and/or type deduction to treat different types appropriately.
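As a sketch, one such functor (names illustrative) that prints members when used with the generated reflect() above; overloads of on_member select the formatting per member type:

#include <iostream>
#include <string>

struct print_visitor {
    template <class T> void on_object_start(const T&) { std::cout << "{ "; }
    template <class T> void on_object_end(const T&)   { std::cout << "}\n"; }
    template <class T> void on_base_subobject(const T&) { /* the base's generated reflect() would recurse here */ }
    void on_member(int v)                { std::cout << v << ' '; }
    void on_member(const std::string& s) { std::cout << '"' << s << "\" "; }
};

// Usage: MyClass obj; reflect(obj, print_visitor{});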
Parsing C++ code is an extremely complex task. Leveraging a C++ compiler might help, but it could be beneficial to restrict yourself to a more domain-specific, less powerful format; i.e., generate the C++ source and additional files from a simpler representation, something like protobuf's .proto files or SOAP's WSDL, or something even simpler in your specific case.
I did some very similar work recently. The research I did indicated that there weren't any out-of-the-box solutions available already, so I ended up hand-rolling one.
The other answers are dead-on regarding parsing C++ code. I needed something that could get ~90% of C++ code parsed correctly; I ended up using srcML. This tool takes C++ or Java source code and converts it to an XML document, which makes it easier for you to parse. It keeps the comments intact. Furthermore, if you need to do a source code transformation, it comes with a reverse tool which will take the XML document and produce source code.
It works in 90% of the cases correctly, but it trips on complicated template metaprogramming and the darkest corners of C++ parsing. Fortunately, my input source code is fairly consistent in design (not a lot of C++ trickery), so it works for us.
Other items to look at include gcc-xml and reflex (which actually uses gcc-xml). I'm not sure if GCC-XML preserves comments or not, but it does preserve GCC attributes and pragmas.
One last item to look at is this blog on writing GCC plugins, written by the author of the CodeSynthesis ODB tool.
Good luck!

Modifying SWIG Interface file to Support C void* and structure return types

I'm using SWIG to generate my JNI layer for a large set of C APIs, and I was wondering what the best practices are for the situations below. They pertain not only to SWIG but to JNI in general.
When C functions return pointers to structures, should the SWIG interface file (JNI logic) be heavily used, or should C wrapper functions be created to return the data in pieces (i.e. a char array that contains the various data elements)?
When C functions return void*, should the C APIs be modified to return the actual data type, whether it be a primitive or a structure type?
I'm unsure if I want to add a mass amount of logic and create a middle layer (SWIG interface file/JNI logic). Thoughts?
My approach to this in the past has been to write as little code as possible to make it work. When I have to write code to make it work I write it in this order of preference:
Write as C or C++ in the original library - everyone can use this code, you don't have to write anything Java or SWIG specific (e.g. add more overloads in C++, add more versions of functions in C, use return types that SWIG knows about in them)
Write more of the target language - supply "glue" to bring some bits of the library together. In this case that would be Java.
It doesn't really matter if this is "pure" Java, outside of SWIG altogether, or as part of the SWIG interface file from my perspective. Users of the Java interface shouldn't be able to distinguish the two. You can use SWIG to help avoid repetition in a number of cases though.
Write some JNI through SWIG typemaps. This is ugly, error prone if you're not familiar with writing it, harder to maintain (arguably) and only useful to SWIG+Java. Using SWIG typemaps does at least mean you only write it once for every type you wrap.
The times I'd favour this over option 2 are when one or more of the following hold:
When it comes up a lot (saves repetitious coding)
I don't know the target language at all, in which case using the language's C API probably is easier than writing something in that language
The users will expect this
Or it just isn't possible to use the previous styles.
Basically these guidelines I suggested are trying to deliver functionality to as many users of the library as possible whilst minimising the amount of extra, target language specific code you have to write and reducing the complexity of it when you do have to write it.
For a specific case of sockaddr_in*:
Approach 1
The first thing I'd try and do is avoid wrapping anything more than a pointer to it. This is what SWIG does by default with the SWIGTYPE_p_sockaddr_in thing. You can use this "unknown" type in Java quite happily if all you do is pass it from one thing to another, store it in containers/as a member, etc., e.g.
public static void main(String[] argv) {
    Module.takes_a_sockaddr(Module.returns_a_sockaddr());
}
If that doesn't do the job you could do something like write another function, in C:
const char * sockaddr2host(struct sockaddr_in *in); // Some code to get the host as a string
unsigned short sockaddr2port(struct sockaddr_in *in); // Some code to get the port
This isn't great in this case though - you've got some complexity to handle there with address families that I'd guess you'd rather avoid (that's why you're using sockaddr_in in the first place), but it's not Java specific, it's not obscure syntax and it all happens automatically for you besides that.
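For what it's worth, a hedged sketch of those two helpers, IPv4-only and using POSIX inet_ntop (a fuller version would deal with the other address families, which is exactly the complexity mentioned above):

#include <arpa/inet.h>
#include <netinet/in.h>

const char* sockaddr2host(struct sockaddr_in* in) {
    static char buf[INET_ADDRSTRLEN];   // note: not thread-safe
    return inet_ntop(AF_INET, &in->sin_addr, buf, sizeof buf);
}

unsigned short sockaddr2port(struct sockaddr_in* in) {
    return ntohs(in->sin_port);
}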
Approach 2
If that still isn't good enough then I'd start to think about writing a little bit of Java - you could expose a nicer interface by hiding the SWIGTYPE_p_sockaddr_in type as a private member of your own Java type, and wrapping the call to the function that returns it in some Java that constructs your type for you, e.g.
public class MyExtension {
    private MyExtension() { }
    private SWIGTYPE_p_sockaddr_in detail;

    public static MyExtension native_call() {
        MyExtension e = new MyExtension();
        e.detail = Module.real_native_call();
        return e;
    }

    public void some_call_that_takes_a_sockaddr() {
        Module.real_call(detail);
    }
}
No extra SWIG to write, no JNI to write. You could do this through SWIG using %pragma(modulecode) to make it all overloads on the actual Module SWIG generates - this probably feels more natural to the Java users (it doesn't look like a special case) and isn't really any more complex. The hard work is still being done by SWIG; this just provides some polish that avoids repetitious coding on the Java side.
Approach 3
This would basically be the second part of my previous answer. It's nice because it looks and feels native to the Java users and the C library doesn't have to be modified either. In essence the typemap provides a clean-ish syntax for encapsulating the JNI calls for converting from what Java users expect to what C works with and neither side knows about the other side's outlook.
The downside though is that it is harder to maintain and really hard to debug. My experience has been that SWIG has a steep learning curve for things like this, but once you reach a point where it doesn't take too much effort to write typemaps like that the power they give you through re-use and encapsulation of the C type->Java type mapping is very useful and powerful.
If you're part of a team, but the only person who really understands the SWIG interface then that puts a big "what if you get hit by a bus?" factor on the project as a whole. (Probably quite good for making you unfirable though!)

Going through members of a C++ class

As far as I know, if I have a class such as the following:
class TileSurface {
public:
    Tile * tile;
    enum Type {
        Top,
        Left,
        Right
    };
    Type type;
    Point2D screenverts[4]; // it's a rectangle.. so..
    TileSurface(Tile * thetile, Type thetype);
};
There's no easy way to programmatically (using templates or whatever) go through each member and do things like print their types (for example, typeinfo's typeid(Tile).name()).
Being able to loop through them would be a useful and easy way to generate class size reports, etc. Is this impossible to do, or is there a way (even using external tools) for this?
Simply not possible in C++. You would need something like Reflection to implement this, which C++ doesn't have.
As far as your code is concerned after it is compiled, the "class" doesn't exist -- the names of the variables as well as their types have no meaning in assembly, and therefore they aren't encoded into the binary.
(Note: When I say "Not possible in C++" I mean "not possible to do built into the language" -- you could of course write a C++ parser in C++ which could implement this sort of thing...)
No. There is no easy way. If we put "easy way" aside, then with C++ you can do almost anything imaginable.
If you just want to dump your data contents at run time, then the simplest way is to implement operator<<(ostream&, YourClass const&) for each YourClass you are interested in. A bit more complex is to implement the visitor pattern, but with visitors you may have different reports produced by different visitors, and the visitors may do other things besides generating reports.
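For example, here is a minimal sketch of that operator for the TileSurface class from the question; it assumes Point2D has public x and y members, which isn't shown above:

#include <ostream>

std::ostream& operator<<(std::ostream& os, TileSurface const& s) {
    os << "TileSurface{ type=" << static_cast<int>(s.type)
       << ", tile=" << s.tile;   // prints the pointer value
    for (int i = 0; i < 4; ++i)
        os << ", v" << i << "=(" << s.screenverts[i].x << ',' << s.screenverts[i].y << ')';
    return os << " }";
}

// Usage: std::cout << someTileSurface << '\n';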
If you want it as static analysis (the program is not running and you want to generate reports), then you can use the debugger database. Alternatively you may analyze the AST generated by some compilers (g++ and Clang have options to emit it) and generate reports from that.
If you really need run-time reflection, then you have to build it into your classes, which involves overhead. For example, you may use common base classes and additionally keep all the data members of a class in an array. This is often done to communicate on more equal grounds with applications written in languages that have reflection (the oldest example is Lisp).
I beg to differ from the conventional wisdom. C++ does have it; it's not part of the C++ standard, but every single C++ compiler I've seen emits metadata of this sort for use by the debugger.
Moreover, two formats for the debug database cover almost all modern compilers: pdb (the Microsoft format) and dwarf2 (just about everything else).
Our DMS Software Reengineering Toolkit is what you call an "external tool" for extracting/transforming arbitrary code. DMS is generalized compiler technology parameterized by explicit language definitions. It has language definitions for C, C++, Java, COBOL, PHP, ...
For the C, C++, Java and COBOL versions, it provides complete access to parse trees and symbol table information. That symbol table information includes the kind of data you are likely to want from "reflection". If your goal is to enumerate some set of fields or methods and do something with them, DMS can be used to transform the code (or generate derived code) according to what you find in the symbol tables, in arbitrary ways.
If you derive all types of the member variables from your common typeinfo-provider base class, then you can get that. It is a bit more work than in Java, but it is possible.
External tools: you mentioned that you need reports like class size, etc.
Doxygen could help (http://www.doxygen.nl/manual/features.html); it can generate class member lists, including inherited members.