I am writing my first Roslyn analyzers. I have basically followed the tutorial https://learn.microsoft.com/en-us/archive/msdn-magazine/2014/special-issue/csharp-and-visual-basic-use-roslyn-to-write-a-live-code-analyzer-for-your-api , and then proceeded by adding a second analyzer class which should perform language-agnostic analysis, similar to what is presented in https://www.meziantou.net/writing-a-language-agnostic-roslyn-analyzer-using-ioperation.htm .
So, I have one analyzer class which initializes itself with
context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
and a second one which uses
context.RegisterOperationAction(AnalyzeConversionOperation, OperationKind.Conversion);
context.RegisterOperationAction(AnalyzeInvocationOperation, OperationKind.Invocation);
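For completeness, these registrations live in the Initialize override of the second analyzer class, roughly like this (a trimmed sketch; the descriptor and the action bodies are placeholders for my real ones):

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp, LanguageNames.VisualBasic)]
public sealed class MyOperationAnalyzer : DiagnosticAnalyzer
{
    // Placeholder rule; the real analyzer defines its own descriptors.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "DEMO001", "Title", "Message", "Usage", DiagnosticSeverity.Warning, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        context.RegisterOperationAction(AnalyzeConversionOperation, OperationKind.Conversion);
        context.RegisterOperationAction(AnalyzeInvocationOperation, OperationKind.Invocation);
    }

    private static void AnalyzeConversionOperation(OperationAnalysisContext context) { /* report diagnostics here */ }
    private static void AnalyzeInvocationOperation(OperationAnalysisContext context) { /* report diagnostics here */ }
}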
To test my analyzers, I have the generated .Vsix project set as the startup project; pressing F5 launches a separate instance of Visual Studio where I write some code and want to see whether my analyzers work as intended. I also set breakpoints at the beginning of my analyzer actions - AnalyzeSymbol, AnalyzeConversionOperation and AnalyzeInvocationOperation - in the original VS instance.
The analyzer actions are not called as I would expect them to be. In fact, they are not being called at all as I write the code to be analyzed. Only if I place the cursor on some type name (which relates to the SymbolKind.NamedType action) and click on the light bulb that appears do I get the calls - and not only to the AnalyzeSymbol action; AnalyzeConversionOperation and AnalyzeInvocationOperation are also called as needed for the various operations in the code.
So the actions that do get called work fine - but they are not called when I want them to be. I would expect them to be called as needed - basically, almost continuously as I edit the code - or at least upon build, or via some explicit "Analyze Now" command. But I am not aware of anything like that. The only way I have found to trigger them is the way I described, which does not seem right to me.
I tried Googling but could not find a solution; or possibly I have some misconception and it is supposed to work differently?
Analyzers are run out of process, so I think it's expected that your breakpoints are not hit in the instance you launched with F5, even though the analyzer code does in fact run.
You have three options:
1. Disable running analyzers out-of-process.
2. Attach the debugger to the ServiceHub.RoslynCodeAnalysisService process.
3. Debug through a unit test. This is my preferred approach.
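For option 3, here is a minimal sketch of such a test, assuming the Microsoft.CodeAnalysis.CSharp.Analyzer.Testing.XUnit package and using MyOperationAnalyzer as a placeholder for your analyzer class. Put a breakpoint inside your analyzer action and run the test under the test runner's debugger; the action executes in the test process, so the breakpoint is hit directly:

using System.Threading.Tasks;
using Xunit;
using Verify = Microsoft.CodeAnalysis.CSharp.Testing.CSharpAnalyzerVerifier<
    MyAnalyzers.MyOperationAnalyzer,
    Microsoft.CodeAnalysis.Testing.Verifiers.XUnitVerifier>;

public class MyOperationAnalyzerTests
{
    [Fact]
    public async Task AnalyzerRunsOnInvocationsAndConversions()
    {
        const string source = @"
class C
{
    void M()
    {
        System.Console.WriteLine((object)1); // contains an invocation and a conversion
    }
}";
        // With no expected diagnostics this also asserts the analyzer stays silent on this
        // code; add Verify.Diagnostic(...) arguments for cases that should report something.
        await Verify.VerifyAnalyzerAsync(source);
    }
}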
Related
Every time I run a project in debug mode, I get 4 consecutive breaks at the start, and I have to hit Continue for each one. They are all in the same file at the same point.
It would be great if I could say to simply ignore these since they don't impact functionality.
The obviously better route would be to try to resolve it so it doesn't cause a break/continue prompt, but I think that's more effort than it's worth at the moment.
It would be great if the solution related only to my specific environment so I wouldn't have to undo/do it each time I pull/push a build.
Try using a breakpoint condition or a breakpoint filter.
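As an illustration (shown in C# purely as an example; the same applies to any language the Visual Studio debugger supports), a plain breakpoint on the marked line would stop on every pass, while a condition or hit-count filter makes it stop only when you care:

using System;

class Program
{
    static void Main()
    {
        for (int i = 0; i < 100; i++)
        {
            // A breakpoint on the next line normally breaks 100 times. Right-click the
            // breakpoint and add a condition such as "i == 42", or a hit-count filter,
            // so the debugger only stops on the iterations you actually want to inspect.
            Console.WriteLine(i);
        }
    }
}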
I am using PC-Lint for my C++ project.
Is there a way to switch off all error and warning messages by default, so I can then reintroduce the required messages explicitly?
I have read the chapter of the PC-Lint manual entitled "Error Inhibition Options", and the best I could find was setting the warning level to -w0, i.e. "No messages (except for fatal errors)".
Yes, it is possible: you can simply use -e* or -w0. However, the manual itself warns (Chapter 16, Living with Lint):
DO NOT simply suppress all warnings with something like: -e* or -w0 as this can disguise hard errors and make subsequent diagnosis very difficult.
So, yes, you can use it if your code is basically clean and you want to review recent changes for a certain set of messages. But if you want to start cleaning your code and are swamped with messages because of the default warning level -w3, I suggest starting with -w1 and resolving all issues there; most of the warnings/errors given at level one indicate problems with finding all header files, having all implicit macros set properly, and/or mimicking the compiler you normally use in a sufficiently precise way.
As always, I hesitate to advertise my own work, but if you want, take a look at my "How to wield PC Lint" PDF, where I have documented detailed instructions for handling the initial deployment of PC-Lint and tackling the many warnings/errors/infos/notes you may be buried under.
When I started introducing PC-Lint to a new project I did the following:
1. As suggested by Johan Bezem, ran a -w1 level check over the whole thing. This doesn't actually find any new problems, but checks that your program is syntactically valid and finds any configuration issues. Nothing major, assuming your project compiles already.
2. Ran the test again at the -w2 level. This found 53,000 issues, which was a bit much to tackle in one go.
3. Picked a typical bad file, then suppressed any errors that seemed irrelevant or non-urgent (e.g. error 525: Warning -- Negative indentation from line xxx) by adding -e525 to the command line or config file, until I found one that seemed serious. In my case this was error 442: Warning -- for clause irregularity: testing direction inconsistent with increment direction, i.e. a 'for' loop that looked like it should be counting up was actually counting down.
4. Reset the test level back to -w1 but added the critical problem back in by number, -w1 +e442 in this case. Re-ran it over the whole project, then fixed all the instances of that problem.
5. Went back to stage 2 and tried again.
This combination of fixing actual problems and suppressing likely false alarms soon gets your numbers under control.
So that everything gets better over time, we also implemented a script that does a thorough (full -w2 or -w3) check on any files that are created or modified.
I also found the tool LintProject very helpful, as it can do an entire Visual Studio solution in one go, producing tables with the number of errors and the worst offenders!
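For reference, options like the ones mentioned above are typically collected in a PC-Lint options file that is passed on the command line. A rough sketch, using the option values from the steps above (the file name and comments are illustrative):

// project.lnt - illustrative options file
-w1        // start at warning level 1 until the configuration itself is clean
+e442      // re-enable specific serious messages by number (for clause irregularity)
-e525      // suppress messages judged non-urgent for now (negative indentation)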
I am trying to use gdb with the MySQL source code, which is written in C/C++. In mysql-test/t, I create a custom test case file, say example.test, and then debug it using the following command:
./mysql-test-run --gdb example
Now I want to see the flow of execution as it moves from a function in one file to a function in a different file. I am not sure how the execution changes, so I can't predefine the breakpoints. Is there a way to see the flow of execution across multiple source files?
You can use the next command to take line-by-line steps through the source. When appropriate, you can use the step command to step "into" the function(s) being called on the current line.
A reasonable method is to run next until you think you have only just passed the externally visible behavior you're looking for. Then start over, stop at the line right before the one where you saw the behavior last time, and use step this time. Repeat as necessary until you find the code you're looking for. If you believe it's hitting some sort of deadlock, it's significantly easier: just interrupt the program (Ctrl-C) when you think it's stuck, and it should stop right in the interesting spot.
In general, walking through the source you'll build up some places you think are interesting. You can note the source file and line number and/or function names as appropriate and set those breakpoints directly in the future to avoid tedious next/next/next business.
I am involved in a C++ refactoring project, and sometimes there are resulting differences when there should be none. Currently, what I do is basically set a breakpoint at some place and then go through the program with F10/F11. The first problem is the size of the project: traversing it takes a lot of time. Second, sometimes the differences appear only at the end of a very big test sentence (say, 600 words), so just getting to the differing word is painfully slow.
1. Is it possible to write some kind of macro for Visual Studio which will start from the breakpoint and then step through the program until the end while printing some fields?
2. Are there any neat tricks or tools to simplify the task?
Thanks!
You can create Macros by using Tools>Macros>Macro IDE
I prefer the following method because it's faster for me.
You can record macros using Tools>Macros>Record temporary macro
Everything you type will then be recorded into a macro.
After you have recorded what you want to automate, you can edit the generated code via View>Other windows>Macro Explorer. Your macro will be stored under MyMacros>RecordingModule>TemporaryMacro in the Macro Explorer window; right-click it and select Edit to change it.
One way to test whether the program has terminated:
While Not DTE.Debugger.CurrentProgram Is Nothing
I frequently encounter the following debugging scenario:
A tester provides some repro steps for a bug. To find out where the problem is, I play with these steps to get the minimum necessary repro. Sometimes, luckily, I find that a minor change to the steps makes the problem go away.
Then the job becomes finding the difference in code workflow between these two sets of repro steps. This is tedious and painful, especially when you are working on a large code base and the scenario goes through a lot of code and involves lots of state changes you are not familiar with.
So I was wondering whether there are any tools available to compare "code workflow". Since I've learned about the "wt" command in WinDbg, I thought it might be possible: for example, I could run "wt" on some outermost function with the two different sets of repro steps and then compare the outputs. Then it should be easy to find where the code flow starts to diverge.
But the problem with WinDbg is that "wt" is quite slow (maybe I should log to a file instead of to the screen) and not very user-friendly (compared with the Visual Studio debugger). So I want to ask: are there any existing tools available, or would it be possible (and how difficult) to develop a plug-in for the Visual Studio debugger to support this functionality?
Thanks
I'd run it under a profiler in "coverage" mode, then use diff on the results to see which parts of the code were executed in one run but not the other.
Sorry, I don't know of a tool which can do what you want, but even if it existed it doesn't sound like the quickest approach to finding out where the lower layer code is failing.
I would recommend instrumenting your layer's code with high-level logging so you can tell which module fails, stalls, etc. In debug builds, your logger can write to a file, to the output debug window, etc.
In general, failing fast and using exceptions are good ways to find out easily where things go bad.
Doing something after the fact is not going to cut it, since your problem is reproducing it.
The issue with bugs is seldom some internal wackiness but usually what the user's actually doing. If you log all the commands that the user enters then they can simply send you the log. You can substitute button clicks, mouse selects, etc. This will have some cost, but certainly much less than something that keeps track of every method visited.
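A minimal sketch of that idea (shown in C# purely as an illustration; the CommandLogger type and the log path here are hypothetical): route every user command through one choke point that appends it to a log file, so the log alone is enough to replay what the user actually did.

using System;
using System.IO;

// Hypothetical choke point: every user command is funneled through Execute().
public static class CommandLogger
{
    private static readonly string LogPath =
        Path.Combine(AppContext.BaseDirectory, "user-commands.log");

    public static void Execute(string commandName, Action action)
    {
        File.AppendAllText(LogPath, $"{DateTime.UtcNow:O} {commandName}{Environment.NewLine}");
        action();
    }
}

// Usage: instead of calling a handler directly, wrap it:
// CommandLogger.Execute("OpenDocument", () => OpenDocument(path));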
I am assuming that if you have a large application that you have good logging or tracing.
I work on a large server product with over 40 processes and over one million lines of code. Most of the time the error in the trace file is enough to identify the location of the problem. However, sometimes the error I see in the trace file is caused by some earlier code, and the reason for this can be hard to spot. Then I use a comparative debugging technique:
1. Reproduce the first scenario and copy the trace to a new file (if the application is multi-threaded, ensure you only have the trace for the thread that does the work).
2. Reproduce the second scenario and copy the trace to a new file.
3. Remove the timestamps from the log files (I use awk or sed for this).
4. Compare the log files with WinMerge or similar, to see where and how they diverge.
This technique can be a little time-consuming, but it is much quicker than stepping through thousands of lines in the debugger.
Another useful technique is producing UML sequence diagrams from trace files. For this you need the function entry and exit positions logged consistently. Then write a small script to parse your trace files and use sequence.jar to produce UML diagrams as PNG files. This is a great way to understand the logic of code you haven't touched in a while. I wrapped a small awk script in a batch file; I just provide the trace file and the line number to start from, and it untangles the threads, generates the input text for sequence.jar, and then runs it to create the UML diagram.
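A sketch of the consistent entry/exit logging this relies on (shown in C# purely as an illustration; in native code the same thing is usually done with a small RAII helper or entry/exit macros): each traced function writes an ENTER line on the way in and an EXIT line on the way out, which is exactly the shape a script can parse into sequence-diagram input.

using System;
using System.Runtime.CompilerServices;

// Disposable trace scope: "using" guarantees the EXIT line even on early return or exception.
public sealed class TraceScope : IDisposable
{
    private readonly string _name;

    public TraceScope([CallerMemberName] string name = "")
    {
        _name = name;
        Console.WriteLine($"{DateTime.UtcNow:O} ENTER {_name}");
    }

    public void Dispose() => Console.WriteLine($"{DateTime.UtcNow:O} EXIT  {_name}");
}

public class OrderService
{
    public void PlaceOrder()
    {
        using (new TraceScope())   // logs "ENTER PlaceOrder" / "EXIT PlaceOrder"
        {
            // ... real work, calling into other traced methods ...
        }
    }
}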