I'm playing around with making an analyzer for Roslyn. The one I'm making is a diagnostic that finds methods that are too long. I'd like to make whatever is considered 'too long' configurable, preferably one configuration for an entire solution or project. What would be the best way to go about this?
The only option I have in mind is to search the assembly for a particular configuration attribute. This would require an attribute in each project in the solution. It would also require the user of the diagnostic to reference a library, specific to the diagnostic, that defines this attribute.
Is this a good idea, and what are the other options?
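For concreteness, here is a rough sketch of what I mean by the attribute approach (all of the names below are invented for the example):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis;

// Hypothetical attribute that would ship in a separate library the user references.
[AttributeUsage(AttributeTargets.Assembly)]
public sealed class MaxMethodLengthAttribute : Attribute
{
    public MaxMethodLengthAttribute(int maxStatements)
    {
        MaxStatements = maxStatements;
    }

    public int MaxStatements { get; private set; }
}

// The user would then add, once per project (e.g. in AssemblyInfo.cs):
// [assembly: MaxMethodLength(30)]

// Inside the analyzer, the value could be read from the compilation's assembly attributes:
public static class MethodLengthConfig
{
    public static int GetMaxStatements(Compilation compilation, int defaultValue = 20)
    {
        var attribute = compilation.Assembly
            .GetAttributes()
            .FirstOrDefault(a => a.AttributeClass != null &&
                                 a.AttributeClass.Name == "MaxMethodLengthAttribute");

        return attribute != null && attribute.ConstructorArguments.Length == 1
            ? (int)attribute.ConstructorArguments[0].Value
            : defaultValue;
    }
}
```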
You can pass additional files to the analyzers. These can then be reached from the analysis context. But this approach is not that well developed in Roslyn yet. For example, if the file changes, the analyzers are not notified of the change.
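To make that concrete, here is a minimal sketch of reading such a file from a compilation-start action (the file name, its format, and the default value are just assumptions for the example; the rest uses the normal AdditionalFiles API):

```csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.Diagnostics;

// Fragment of a DiagnosticAnalyzer.Initialize implementation.
public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction(startContext =>
    {
        int maxStatements = 20; // fallback default

        // Look for a file the project passed in via an <AdditionalFiles> item,
        // e.g. a "MethodLength.txt" containing a single number.
        var configFile = startContext.Options.AdditionalFiles.FirstOrDefault(
            f => Path.GetFileName(f.Path)
                     .Equals("MethodLength.txt", StringComparison.OrdinalIgnoreCase));

        var text = configFile == null ? null : configFile.GetText(startContext.CancellationToken);
        int configured;
        if (text != null && int.TryParse(text.ToString().Trim(), out configured))
        {
            maxStatements = configured;
        }

        // Register the actual per-method analysis here, capturing maxStatements...
        // startContext.RegisterSyntaxNodeAction(...);
    });
}
```

The consuming project would then pass the file to the analyzer with something like <AdditionalFiles Include="MethodLength.txt" /> in its project file.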
For an example you can check out the SonarLint repository.
Also, keep an eye on this GitHub issue, where there is an ongoing discussion about how parameters and data sharing should be handled in the upcoming Roslyn version.
What is the best way to restrict the use of certain headers (features of the library itself) in certain C++ files? And if a file fails to follow the set rules, compilation should halt.
This is not about finding superfluous includes. This is about restricting the developers to the application framework.
For example, suppose there is an osUtils class in osUtils.h, and the application's development framework mandates using osUtils.h for filesystem operations such as creating a folder. There is always a chance that an individual module finds it convenient to break this rule by including sys/stat.h and using the mkdir() method directly. But if the intention of providing the framework is, say, cross-platform abstraction or special path-handling logic, that objective is lost by working outside the framework. Is there a way to restrict this? Restricting the usage of sys/stat.h in certain files (everywhere except osUtils.h in this case) would solve the problem, but how can it be implemented so that the code will not compile if the rule is broken?
I don't know how to do this by breaking compilation - the idea of a compilation failure caused by valid code doesn't appeal to me. I've got some other ideas:
Code review. If done right, this should prevent such errors.
I am pretty sure that some static code analysis tools can help detect those things (they can check things like 'include what you use', so a rule like 'don't include XYZ' should be there).
Once you have this static analysis tool ready, there is still the problem of getting people to use it and fix the errors it reports. One option is a git hook: if the new code doesn't pass the static analysis, reject the commit. If you cannot use hooks, or don't want to, make a separate CI job that checks for violations of the static checks. Then you'll see who pushed some bad code, and when.
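Just to illustrate the CI-job idea, the check itself can be a trivial script or program in any language; here is a rough sketch (with placeholder header and file names) that fails the build when it finds the forbidden include outside the allowed wrapper:

```csharp
using System;
using System.IO;
using System.Linq;

// Minimal CI check: return a non-zero exit code if any file other than the
// osUtils wrapper includes the forbidden low-level header directly.
class ForbiddenIncludeCheck
{
    static int Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        string forbidden = "sys/stat.h";
        string[] allowedFiles = { "osUtils.h", "osUtils.cpp" };

        var violations = Directory
            .EnumerateFiles(root, "*.*", SearchOption.AllDirectories)
            .Where(f => f.EndsWith(".h") || f.EndsWith(".cpp") || f.EndsWith(".cc"))
            .Where(f => !allowedFiles.Contains(Path.GetFileName(f)))
            .Where(f => File.ReadLines(f).Any(line =>
                line.TrimStart().StartsWith("#include") && line.Contains(forbidden)))
            .ToList();

        foreach (var file in violations)
            Console.Error.WriteLine("Forbidden include of " + forbidden + " in " + file);

        return violations.Count == 0 ? 0 : 1;
    }
}
```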
I am writing a Roslyn analyzer and I need to store some data so that my analyzers can share data between them. Alternatively, I want to save the state of my analyzer, for example by saving some data to a database or writing it to a file. Is there any option to store data while using the 'Analyzer with Code Fix' template?
The ability to share resources between analyzers will be added by the issue Srivatsn Narayanan mentioned, but that doesn't mean you can share state.
For state to be useful between analyzers, you need some kind of dependency or execution-ordering guarantee between analyzers, which doesn't exist.
PS:
You can ask for such a guarantee to be supported, but since that would make the whole system much more complex, and there is a workaround the author can apply himself (create one analyzer and do everything there), it probably won't be supported any time soon.
There is currently no easy API to share state across analyzer instances. We are discussing adding such a feature in this issue. However, what that API would do can be achieved today by having a type that exposes a static ConditionalWeakTable, storing your data there, and sharing it across your analyzers. You do need to be careful to make sure that you are not leaking compilations.
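A minimal sketch of that pattern (the keying choice is mine: keying the table off the Compilation means the cached data goes away when the compilation does, which avoids the leak mentioned above):

```csharp
using System.Runtime.CompilerServices;
using Microsoft.CodeAnalysis;

// Shared, per-compilation state visible to every analyzer in the same process.
internal static class SharedAnalyzerState
{
    public sealed class PerCompilationData
    {
        public int SomeCounter; // whatever your analyzers need to share
    }

    // ConditionalWeakTable holds its keys weakly, so entries are collected
    // together with the Compilation instead of keeping it alive.
    private static readonly ConditionalWeakTable<Compilation, PerCompilationData> Cache =
        new ConditionalWeakTable<Compilation, PerCompilationData>();

    public static PerCompilationData For(Compilation compilation)
    {
        return Cache.GetValue(compilation, _ => new PerCompilationData());
    }
}
```

Each analyzer can then call SharedAnalyzerState.For(context.Compilation) from its analysis actions and read or write the shared data.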
The analyzer produced by the template is a portable library project and so doesn't have access to many of the file/db APIs. You could convert your analyzer to a project that targets .NET 4.5.2 and then use those APIs. However, I'd strongly recommend avoiding that if possible, for two reasons: one is that the analyzers will be executed on every keystroke in VS, and making a db call that often would degrade perf. The second is that a non-portable analyzer will be specific to VS and won't be able to run for ASP.NET 5 or the .NET CLI.
I am using Emacs + Tuareg mode to do my OCaml project.
It works fine and I have gotten used to it.
However, as my project's source base gets bigger and bigger, I find that managing the project is getting harder and harder.
Especially for refactoring: if I change a module name or function name, I have to search everywhere for the parts that need to change accordingly, or I just compile again and again to let the compiler tell me where I should go.
It is not convenient.
Can anyone suggest a good way to manage a source base?
thanks
A good option is TypeRex. This is an alternative Emacs mode created by OCamlPro that has a bunch of OCaml-aware features including proper support for refactoring (like renaming identifiers).
It also has a bunch of other nice features like good auto-complete, semantic grep and so on.
Unfortunately, this involves changing your build process to use some wrapper programs. These generate the additional information the mode needs to function. However, once you get the build set up, it's a really awesome editing environment.
This is a potentially dangerous question because interdisciplinary questions and answers will be biased, but I'll have a stab at it anyway. All in good spirit!
So, here we go. I'm writing a major editing mode for Emacs for a language that Emacs has almost no support for yet. And I'm at the point where I have to decide on a way to generate project files. Below is an outline of the task ahead:
The templates have to represent a project directory tree, not only single files.
The resulting files are of various formats, potentially including SGML-like languages, but not limited to this variety. The templates also have to generate C-like source code, eLisp source code, and plain text files such as a README.
The templates must be processed in a batch upon a user-initiated action (as in: the user wants to create a project, so several files must be created in the user-appointed directory). It may be beneficial to have the ability to supervise the creation, but this is less important than the ability to run the process entirely automatically.
Bonus features:
The template language already has a user base (with the potential to reuse existing templates).
The templates can be used for code snippets (they contain blanks which are filled in interactively once the user invokes a code-generating routine while editing a file).
Obvious things like cross-platform support and ease of use through both a graphical interface and the command line.
I did some research, but I won't share my results (yet) so as not to bias the answers. The problem with answering this question is not that an answer is hard to find, but that it is hard to choose one from many.
I'm developing a system based on Mustache for exactly the use case that you've described. The template language itself is a very simple extension of Mustache called Groome.
I also released a command-line tool called Molt that renders Groome templates. I'd be curious to know if it does everything that you need. I'm still adding features to the tool and haven't yet announced it. Thanks.
I went to solve a similar problem several years back, where I wanted to use Emacs to generate code out of a UML diagram (cogre), and also to generate Makefiles from project specifications. I first tried to use Tempo, but when I tried to get the templates to nest, I ran into problems. I also looked into skeleton, but that didn't quite fit the plan either.
I ended up using Google Templates for a little bit and liked the syntax, but then developed SRecode instead, borrowing the good bits from Google Templates. SRecode was written specifically for machine-generated code. The interaction for template insertion (aka what Tempo was written for) isn't first class in SRecode. For generating code from a data structure, however, it is very robust and has a lot of features, including automatically filled variables. It works closely with your major mode, and allows many nested templates, with control over the nested dictionary values. There is a subsystem that will use Semantic tags and generate code from them for a couple of languages. That means you can parse code in one language with Semantic, and generate code in another language with SRecode using those tags. Nifty! Many parts of the CEDET reference manuals were built that way.
The templates themselves allow looping, if statements, and include statements. There are a couple of examples in SRecode for making an 'application', such as the comment writer, and EDE uses it to create Makefiles, which is almost exactly what you are trying to do.
Another option is Generator, which offers “language-agnostic project bootstrapping with an emphasis on simplicity”. Installation requires Node.js and npm.
Generator’s emphasis on simplicity means it is very easy to learn how to make a template. Generator also saves you from having to reference templates by file paths – it looks for templates in ~/.generator.
However, there is no way to write README or LICENSE files for the template itself without those files being copied to the generated project. Also, post-generation commands written in the Makefile will be copied to the generated Makefile, even after they are no longer of use. Finally, the ad-hoc templating language doesn’t provide a way to escape its __lowercasevariables__ – though I can’t think of a language where that limitation would be a problem.
While working within C++ libraries, I've noticed that I am not granted any IntelliSense while inside directive blocks like "#ifndef CLIENT_DLL ... #endif". This is obviously due to the fact that "CLIENT_DLL" has been defined. I realize that I can work around this by simply commenting out the directives.
Are there any IntelliSense options that will enable IntelliSense regardless of directive evaluation?
By getting what you want, you would lose a lot.
Visual C++ IntelliSense is based on a couple of major presumptions:
1. that you want good/usable results.
2. that your current IntelliSense compiland will present information related to the "configuration" you are currently in.
Because your current configuration has that preprocessor directive, you will not be able to get results from the #ifndef region.
The reason makes sense if you think it through. What if the IntelliSense compiler just tried to compile the region you were in, regardless of #ifdef regions? You would get nonsense and non-compilable code. It would not be able to make heads or tails of your compiland.
I can imagine a very complex solution where it runs a smaller (new) parse on the region you are in, with only that region being assumed to be part of the compiland. However, there are so many holes in this approach (like nothing in that region being declared/defined) that this possible approach would immediately frustrate you, except in very very simple scenarios.
Generally it's best to avoid logic in #ifdef regions, and instead to delegate the usage of parameterized compilation to entire functions, so that the front-end of the compiler is always compiling those modules, but the linker/optimizer will select the correct OBJ later on.
Hope that helps,
Will
Visual Studio 6.0 has a little better support for C++ in some arenas such as this. If you need the IntelliSense, then just comment the directives out temporarily, build, and then you should have IntelliSense. Just remember to restore them when you're through, if that was your intent.
I just wish IntelliSense would work when it SHOULD in VS2008. MS "workarounds" (deleting .ncb files) don't work most of the time. Oooh, here's another SO discussion..., let's see what IT has to say (I just love SO).
I'm often annoyed by that too ... but I wonder whether IntelliSense would actually be able to provide any useful information, in general, within a conditioned-out block?
The problem I see is that if the use of a variable or function changes depending on the value of a preprocessor directive, then so may its definition. If code-browsing features like "go to definition" were active within a conditioned-out block, would you want them to lead to the currently-enabled definition, or to one that was disabled by the same preprocessor conditions as the disabled code you're looking at?
I think the "princple of least surprise" dictates that the current behaviour is the safest, annoying though it is.
Why do you want to do this explicitly in the code?
There is already a configuration setting in VS with which you can enable and disable IntelliSense.
See these links:
http://msdn.microsoft.com/en-us/library/ms173379(VS.80).aspx
http://msdn.microsoft.com/en-us/library/ks1ka3t6(v=VS.80).aspx
These links may help you.