I need to find a compatible method, if one exists, in a class's code. Is there a simpler way to do that in Roslyn? Otherwise I have to compare method names and the number and types of arguments myself, which wouldn't be a big deal if I didn't also have to deal with non-fully-qualified argument types and inheritance.
Roslyn provides all the pieces you need. You might want to look at the SymbolEquivalenceComparer that lives in the Roslyn codebase (but it's not public) for inspiration on how to do the comparison. You'll have to do the comparison checks yourself, but that should be a whopping 20 lines of code or so if you do it right.
As an important note, make sure you're working with Roslyn's semantic model rather than just syntax. You mentioned non-qualified types -- as long as you're working with semantics, syntax differences like that are taken care of for you.
So C++20 introduces a new thing called concepts, which from what I can see is used to constrain the types that can be passed to a template. So for a function, I could require that the argument type must have a member ::inner, or something like that.
To me, that's like making sure whoever uses the function can't just pass in whatever they like as the argument. But doesn't explicit instantiation already do the same thing? Say I wrote a function library, didn't write the implementation directly in the header files, but instead put it in a separate .cpp file and also explicitly instantiated the functions there. Doesn't such an approach defeat the purpose of concepts? If I, the developer, instantiate a function for certain data types, I'm already guaranteeing that it will work as expected when those types are fed into the function's arguments. And if I didn't instantiate the function for a class, you simply couldn't call it.
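For example, a setup roughly like this (the function and file names are just made up for illustration):

// lib.h -- only the declaration is visible to users
template <typename T>
T twice(T value);

// lib.cpp -- the definition plus explicit instantiations
template <typename T>
T twice(T value) { return value + value; }

template int twice<int>(int);           // explicit instantiation for int
template double twice<double>(double);  // and for double

// user code: twice(21) and twice(1.5) link fine; any other type
// compiles against the header but fails at link time.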
In that case, is there any reason for me to use concepts? Other than that the errors you get from C++20 concepts seem clearer than the errors you'd get without them.
I'm setting aside the design choice of using templates only to explicitly instantiate everything. Maybe you need that, maybe you don't, but concepts are a valuable tool regardless.
First of all, a well-defined concept provides in-code documentation of what the characteristics of the expected types are. If you instantiate something with int and Duck, it's not going to be clear what an int and a Duck have in common that lets them use the same template. Whereas if they both satisfy, for example, the copyable concept, it becomes apparent what the instantiations have in common and why the generalization was made.
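A minimal sketch of that idea, using the standard std::copyable concept (the function itself is just illustrative):

#include <concepts>

// The constraint documents, in code, what the template expects of its type.
template <std::copyable T>
T duplicate(T value) {
    T copy = value;   // any T that satisfies std::copyable supports this
    return copy;
}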
Secondly, your library might need extensions (if it's not dead code, it will need amending of some sort sooner or later). By expressing the type requirements, you communicate not only restrictions but intent as well; this is extremely valuable for code extensibility.
Lastly, it makes your design process clear(er). If you're using templates in the first place, it is good practice to be able to formally verify your type system, predict connections and dead ends, and put some extra thought into what you actually want to generalize over. An amazing example of how concepts benefit this process can be seen with named requirements. The standards committee put a tremendous effort into formalizing the properties of types when defining standard library facilities, so that e.g. an algorithm may be defined on Containers of TriviallyCopyable elements. Up until concepts, the burden of verifying and checking those types fell on the developer, since there was no formal way of expressing those requirements; now we're transitioning to concepts, making the definition and checking of such properties a formal process backed by the core language.
In support of the 2nd and 3rd points, consider SFINAE techniques vs concepts. Using templates involves much more than what you expose in an interface, so your library may internally rely on type restrictions to choose the correct compilation path. This process is cleanly defined with concepts, whereas the legacy approaches tend to overcrowd your code.
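A rough before/after comparison of the same internal restriction (the function names are made up):

#include <concepts>
#include <type_traits>

// Legacy SFINAE: the restriction is buried inside the signature.
template <typename T,
          typename = std::enable_if_t<std::is_integral_v<T>>>
T halve_old(T value) { return value / 2; }

// Concepts: the same restriction, stated up front.
template <std::integral T>
T halve_new(T value) { return value / 2; }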
I've searched around for the concept of GADTs in OCaml: why we need them, when to use them, etc.
I understand GADTs are not only an OCaml feature but a more general term.
I've found
What are GADTs?
http://caml.inria.fr/pub/docs/manual-ocaml-400/manual021.html#toc85
http://www.reddit.com/r/ocaml/comments/1jmjwf/explain_me_gadts_like_im_5_or_like_im_an/
etc., but some of them are in Haskell, and the others don't have a good example comparing code without GADTs to code with GADTs.
So what I would like is a simple yet good concrete example where I can see that without GADTs, things are bad.
Can I have that please?
GADTs are useful for two reasons.
The first one (and the most common one) is about dynamic typing: you can add some dynamic typing without losing static checking of it. It is not simple, but it does guarantee that your type conditions will be met.
The simplest example of that is given in the OCaml manual.
This was used, for instance, in the standard library to rewrite printf in a type-safe manner (before that, it was a pretty horrible collection of Obj.magic).
The second reason you may want to use GADTs is when you have some complex invariant you want to maintain in your type structure. This is pretty hard to express, though, and you often have to put in a lot of effort to do it.
Well, I don't have an example handy, but I once saw a friend write an implementation of AVL trees where it was proven by the type system that the balancing was right, which is pretty cool.
For more on the GADT feature and its good use cases, you can read the pretty good blog post by Mads Hartmann.
I'm also in search of good applications of GADTs, as most of the time when I use them, sooner or later I discover that the same thing can be done without them, and usually in a much cleaner way. So this is not a complete survey, just a bit of my own experience.
Universal values, aka existentials. They allow you to create heterogeneous containers and type-safe serialization. See, for example, Core's Univ and Univ_map modules.
Type-safe evaluators for syntax trees. Here GADTs are useful to remove some runtime checks.
The pure and type-safe Printf implementation that is already a part of OCaml was also rewritten using GADTs.
Here is a real-life example of how GADTs can be used. In the example, I use a GADT to specify table relations, e.g., one_to_one, one_to_many, etc. Depending on the relation used, the function type is inferred accordingly. For example, the one_to_maybe_one relation yields a function 'a -> 'b option, while one_to_many yields a function 'a -> 'b list. The same can be achieved by just creating several different functions, like link_one_to_one, link_one_to_many, etc., instead of one function link ~one_to:relation, so one can consider this approach arguable.
I wondered if there are any simpler or more powerful syntaxes for C or C++. I have already come across SPECS, which is an alternative syntax for C++. But are there any others, and what about C?
It could also be a sort of code generator, so that things like functors could be defined less verbosely. I imagine it could be made as a code generator that compiles to C or C++ code very similar to the code you would have written in the alternative syntax.
Mirah is an example of doing this for Java.
Ideally I would want to write C in a Go-like syntax. I like how they fixed switch-case and in general made everything much less verbose.
#define BEGIN {
#define END }
No! Just say NO!
The only general-purpose tool that I'm aware of is Lazy C++, which lets you create a single .lzz source file from which it can generate the .h and .cpp files.
There are also numerous approaches to doing code generation for C++. (For examples, see Cog, Pump, or Wikipedia's list.) These aren't full-fledged alternate syntaxes, but they can help with particular categories of syntax (such as automatically generating templates taking 1 to N arguments, to work around the lack of variadic templates).
Instead of a change in syntax, consider a change in abstraction: Increase your abstraction with a custom-defined DSL. Tool support would be necessary to reach optimal productivity.
If your goal is simplification, a lightweight modeling approach, either text-based (like XText), graph-based (like MetaEdit+) or tree-based (like AtomWeaver), would remove some complexity from the project by simplifying the solution.
If it is only a syntax you're after, why not define your own, as a trivial preprocessor -> parser -> C-pretty-printer chain? It would be no more than a semantically rich preprocessor, something in the CamlP4 style, but for C. No one but you knows what kind of syntax you'd find suitable, so its implementation is entirely up to you.
It doesn't look to me like SPECS is really C++ anymore; I certainly would have a hard time reading such code (at least initially).
You should pick a language based on your needs, not pick a specific language and then modify it to fit what you want to do.
If you want to program Go, then program in Go, don't try to write C in a Go-like syntax as that'll just make it hard for anyone who actually knows C to read your code.
Some of the disadvantages would be:
1: its syntax is complex
2: the compiler generates extra code
They are hard to validate. Template code which doesn't get used tends not to be compiled at all, so good coverage of test cases is a must. But testing is time-consuming, and then it may turn out the code never needed to be robust in the first place.
Hmm, how about...
3: They can be slow to compile
4: They force things to be calculated at compile time rather than run time (this can also be an advantage, if you prefer fast execution speed over runtime flexibility)
5: Older C++ compilers don't handle them, or don't handle them correctly
6: The error messages that they generate when you don't get the code right can be nearly incomprehensible
Templates expose your implementation to the clients of your code, which makes maintaining your ABI harder if you pass templated objects at library boundaries.
So far no-one seems to have mentioned the main disadvantage I find with templates: code readability plummets!
I'm not referring to syntax issues -- yes the syntax is ugly, but I can forgive that. What I mean is this: I find that with never-seen-before non-templated code, however large the application is, if I start at main() I can usually decode the broad strokes of what a program is doing without problems. And code that merely uses vector<int> or similar doesn't bother me in the slightest. But once code starts to define and use its own templates for purposes beyond simple container types, understandability rapidly goes out the window. And that has very negative implications for code maintenance.
Part of that is unavoidable: templates afford greater expressiveness via the complicated partial-order overload resolution rules (for function templates) and, to a lesser degree, partial specialisation (for class templates). But the rules are so damn complicated that even compiler writers (who I'm happy to acknowledge as being an order of magnitude smarter than I am) are still getting them wrong in corner cases.
The interaction of namespaces, friends, inheritance, overloading, automatic conversions and argument-dependent lookup in C++ is already complicated enough. But when you add templates into the mix, as well as the slight changes to rules for name lookup and automatic conversions that they come with, the complexity can reach proportions that, I would argue, no human can deal with. I just don't trust myself to read and understand code that makes use of all these constructs.
An unrelated difficulty with templates is that debuggers still have difficulty showing the contents of STL containers naturally (as compared to, say, C-style arrays).
The only real disadvantage is that if you make any tiny syntax error in a template (especially one used by other templates), the error messages are not going to be helpful... expect a couple of pages of almost-unusable error messages ;-). Compiler defects are very compiler-specific, and the syntax, while ugly, is not really "complex". All in all, though -- despite the huge issue with proper error diagnostics -- templates are still the single best thing about C++, the one thing that might well tempt you to use C++ over other languages with inferior implementations of generics, such as Java...
They're complicated for the compiler to parse, which means your compilation time will increase. It can also be hard to parse the compiler error messages if you have advanced template constructions.
Fewer people understand them, especially at the level of metaprogramming, so fewer people can maintain them.
When you use templates, the compiler only generates what you actually use. I don't think there are any disadvantages to using C++ template metaprogramming except the compile time, which can be quite long if you use very complex structures, as the Boost or Loki libraries do.
A disadvantage: template errors are only detected by the compiler when the template is instantiated. Sometimes, errors in the methods of a template are only detected when the member method itself is instantiated, regardless of whether the rest of the template is instantiated.
If I have an error in a method of a template class that only one function references, but other code uses the template without that method, the compiler will not report an error until the erroneous method is instantiated.
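A small sketch of what that looks like (the class is made up):

template <typename T>
struct Holder {
    T value{};
    void ok()     { value = T{}; }             // fine for any default-constructible T
    void broken() { value.no_such_member(); }  // only an error once somebody calls it
};

int main() {
    Holder<int> h;
    h.ok();         // compiles: broken() is never instantiated
    // h.broken();  // uncommenting this line is what finally triggers the error
}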
The absolute worst: The compiler error messages you get from bad template code.
I have used templates from time to time over the years. They can be handy, but from a professional perspective I am leaning away from them. Two of the reasons are:
1.
The need to either a) expose the function definitions (not only the declarations), i.e. the "source" code, to the code where the template is used, or b) create a dummy instantiation in the source file. This is needed for compilation. Option a) can be done by defining the functions in the header or actually including the .cpp.
One of the reasons we tolerate headers in C++ (compared to C#, for example) is the separation of "interface" from "implementation". Well, templates seem to be inconsistent with this philosophy.
2.
Functions called on a template type parameter may not be enforced at compile time, resulting in link errors. E.g. T example; example.CompilerDoesntKnowIfThisFunctionExistsOnT();
This is "loose", IMHO.
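As an aside, C++20 concepts can express that requirement so it is checked at the point of use; a rough sketch, with made-up names:

// Hypothetical requirement: T must provide reset().
template <typename T>
concept Resettable = requires(T t) { t.reset(); };

template <Resettable T>
void use(T& example) {
    example.reset();   // any T that got past the constraint is known to have this
}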
Solutions:
Rather than templates, I lean towards using a base class, whereby the derived/container classes know what is available at compile time. The base classes can provide the generic methods and "types" that templates are often used for. This is why source code availability can be helpful: existing code can be modified to insert a generic base class into the inheritance hierarchy where needed. Otherwise, if the code is closed source, rewrite it using generic base classes instead of using a template as a workaround.
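Roughly what I mean (a made-up example):

// A generic base class takes the place of the template parameter: everything
// the generic code needs is declared here, and every derived class is checked
// against it at compile time.
struct Serializable {
    virtual void save() const = 0;
    virtual ~Serializable() = default;
};

struct Settings : Serializable {
    void save() const override { /* write to disk */ }
};

void persist(const Serializable& s) { s.save(); }   // no template needed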
If the type is unimportant, e.g. vector< T >, then how about just using "object"? C++ has not provided an "object" keyword, and I have proposed to Dr. Bjarne Stroustrup that this would be helpful, especially to tell the compiler and people reading the code that the type is not important (for cases when it isn't). I don't think that C++11 has this; perhaps C++14 will?
Does anyone have any references for building a full object/class reflection system in C++?
I've seen some crazy macro/template solutions, however I've never found a system which solves everything to a level I'm comfortable with.
Thanks!
Using templates and macros to automatically, or semi-automatically, define everything is pretty much the only option in C++. C++ has very weak reflection/introspection abilities. However, if what you want to do is mainly serialization and storage, this has already been implemented in the Boost Serialization libraries. You can do this either by implementing a serialization method on the class, or with an external function if you don't want to modify the class.
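The intrusive version looks roughly like this (written from memory, so treat it as a sketch rather than exact Boost API):

#include <fstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/access.hpp>

class Point {
    friend class boost::serialization::access;    // lets the library reach private members
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & x;                                    // the same function handles save and load
        ar & y;
    }
    int x = 0, y = 0;
};

int main() {
    std::ofstream file("point.txt");
    boost::archive::text_oarchive oa(file);
    const Point p{};
    oa << p;                                       // writes the object to the archive
}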
This doesn't seem to be what you were asking, though. I'm guessing you want something like automatic serialization which requires no extra effort on the part of the class implementer. They have this in Python, Java, and many other languages, but not C++. In order to get what you want, you would need to implement your own object system, like, perhaps, the meta-object system that IgKh mentioned in his answer.
If you want to do that, I'd suggest looking at how JavaScript implements objects. JavaScript uses a prototype based object system, which is reasonably simple, yet fairly powerful. I recommend this because it seems to me like it would be easier to implement if you had to do it yourself. If you are in the mood for reading a VERY long-winded explanation on the benefits and elegance of prototypes, you can find an essay on the subject at Steve Yegge's blog. He is a very experienced programmer, so I give his opinions some credence, but I have never done this myself so I can only point to what others have said.
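To give a flavour of the idea, a bare-bones prototype object in C++ might look something like this (entirely made up, not taken from any library):

#include <any>
#include <map>
#include <memory>
#include <string>

// A toy prototype-based object: properties live in a map, and lookups that
// miss fall through to the prototype, mimicking JavaScript's object model.
struct ProtoObject {
    std::shared_ptr<ProtoObject> prototype;
    std::map<std::string, std::any> slots;

    const std::any* find(const std::string& name) const {
        auto it = slots.find(name);
        if (it != slots.end()) return &it->second;
        return prototype ? prototype->find(name) : nullptr;
    }
};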
If you wanted to remain with the more C++-style classes and instances instead of the less familiar prototypes, look at how Python objects and serialization work. Python also uses a "properties" approach to implementing its objects, but the properties are used to implement classes and inheritance instead of a prototype-based system, so it may be a little more familiar.
Sorry that I don't have a simpler answer to your question! But hopefully this will help.
I'm not entirely sure that I understood your intention, however the Qt framework contains a powerful meta-object system that lets you do most operations expected from a reflection system: getting the class name as a string, checking whether an object is an instance of a given type, listing and invoking methods, etc.
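Very roughly (the class is made up, and Q_OBJECT classes need to be run through Qt's moc as part of the build):

#include <QObject>
#include <QVariant>
#include <QDebug>

class Sensor : public QObject {
    Q_OBJECT
    Q_PROPERTY(int value READ value)
public:
    int value() const { return 42; }
public slots:
    void reset() { qDebug() << "reset called"; }
};

void inspect(QObject* obj) {
    qDebug() << obj->metaObject()->className();   // class name as a string
    qDebug() << obj->inherits("QObject");         // instance-of check
    qDebug() << obj->property("value");           // read a property by name
    QMetaObject::invokeMethod(obj, "reset");      // invoke a slot by name
}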
I've used ROOT's Reflex library with good results. Rather than using crazy macro/template solutions like you described, it processes your C++ header files at build time to create reflection dictionaries and then operates off of those.