I am using PC-Lint for my C++ project.
Is there a way to switch off all error and warning messages by default, so I can then reintroduce the required messages explicitly?
I have read the chapter of the PC-Lint manual entitled "Error Inhibition Options", and the best I could find was setting the warning level to -w0, "No messages (except for fatal errors)".
Yes, it is possible: you can simply use -e* or -w0. However, the manual itself warns (Chapter 16, "Living with Lint"):
DO NOT simply suppress all warnings with something like: -e* or -w0 as this can disguise hard errors and make subsequent diagnosis very difficult.
So yes, you can use that approach if your code is basically clean and you just want to review recent changes for a certain set of messages. But if you want to start cleaning up your code and are swamped with messages at the default warning level -w3, I suggest starting with -w1 and resolving all issues there first: most of the warnings/errors given at level one indicate problems with finding all header files, having all implicit macros set properly, and/or mimicking your usual compiler sufficiently precisely.
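In option-file form, that staged approach might look like this (a sketch; 442 is just a placeholder for a message you have decided to re-enable):

    // project.lnt -- staged message control
    -w1      // level 1 only: get include paths, implicit macros and
             // compiler mimicry right before anything else
    +e442    // once level 1 is clean, re-enable individual
             // higher-level messages one at a time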
As always, I hesitate to advertise my own work, but if you want, take a look at my "How to wield PC Lint" PDF, where I document detailed instructions for the initial deployment of PC-Lint and for tackling the many warnings/errors/infos/notes you may be buried under.
When I started introducing PC-Lint to a new project I did the following:
1. As suggested by Johan Bezem, run a -w1 level check over the whole thing. This doesn't actually find any new problems, but it checks that your program is syntactically valid and flushes out any configuration issues. Nothing major, assuming your project already compiles.
2. Run the check again at the -w2 level. This found 53,000 issues, which was a bit much to tackle in one go.
3. Pick a typical bad file, then suppress any errors that seem irrelevant or non-urgent (e.g. error 525, "Warning -- Negative indentation from line xxx") by adding -e525 to the command line or config file, until you find one that seems serious. In my case this was error 442, "Warning -- for clause irregularity: testing direction inconsistent with increment direction", i.e. a 'for' loop that looked like it should be counting up was actually counting down (see the sketch below).
4. Reset the level back to -w1, but add the critical problem back in by number: -w1 +e442 in this case. Re-run it over the whole project, then fix every instance of that problem.
5. Go back to stage 2 and try again.
This combination of fixing actual problems and suppressing likely false alarms soon gets your numbers under control.
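As an illustration of why error 442 was worth promoting, the pattern looks something like this (a reconstructed sketch, not the code from the actual project):

    // Error 442 pattern: the loop test says "count up" but the increment
    // counts down, so i runs away from count and indexes off the front
    // of the array.
    void zeroAll(int* items, int count)
    {
        for (int i = 0; i < count; --i)  // bug: should be ++i
            items[i] = 0;
    }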
So that everything gets better over time, we also implemented a script that runs a thorough (full -w2 or -w3) check on any files that are created or modified.
I also found the tool LintProject very helpful, as it can do an entire Visual Studio solution in one go, complete with tables showing error counts and the worst offenders!
(I've looked far and wide, but I can't even find anyone having the same problem, let alone a fix. The closest is this thread, which just announces the feature...)
The way it currently works for me, the VS2019 CodeLens integration of P4VS (for C++ at least) is almost completely pointless. Each function has an indicator added, but the information in each is identical, namely the change history of the entire file.
According to this Microsoft article, I would expect either function-level change information that pertains only to that function, or a single change summary for the file at the bottom of the editor. Instead I get the worst combination of both.
I'm mainly surprised that I can't find anyone else talking about this, so I assume something is misconfigured on my part. I can't find anything in the configuration options, though...
Is this just a bad implementation by Perforce or is something wrong on my end?
I have just found out that it can be turned off in the Visual Studio options.
How to turn off CodeLens references:
Text Editor > All Languages > CodeLens
Every time I run a project in debug mode, I get 4 consecutive breaks at the start that I have to hit Continue for each time. They are in the same file at the same point.
It would be great if I could simply tell it to ignore these, since they don't impact functionality.
The obviously better route would be to resolve it so it doesn't cause a break/continue prompt, but I think that's more effort than it's worth at the moment.
It would be great if the solution applied only to my specific environment, so I wouldn't have to undo/redo it each time I pull/push a build.
Try using a breakpoint condition or a breakpoint filter.
I am trying to insert a custom preprocessor into the VC++ 2010 build pipeline, after the regular preprocessor has finished. So far I have figured out that the way to do this is via MSBuild.
Up to this point I haven't been able to find out much more, so my questions are:
Is this possible at all?
If so, what do I need to look at to get going?
If you are talking about the C/C++ preprocessor, then you are probably out of luck. AFAIK, the preprocessor is built into the compiler itself. You can get the compiler to output a preprocessed file, and then you MIGHT be able to send that through the compiler a second time to get the final output.
This may not work anyway, because the preprocessed output, at least from previous versions of cl.exe, doesn't seem to be 100% correct (whitespace gets mangled slightly, which can cause errors).
If you want to take this path, what you'd need to do is have an MSBuild 'Target' that runs before the 'ClCompile' target.
This new target would have to run 'cl.exe' with all the settings that 'ClCompile' usually passes it, plus the '/P' option, which will "preprocess to a file".
Then you need to run your tool over the preprocessed files and finally feed the results into 'ClCompile'.
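A rough sketch of what that target might look like in the .vcxproj (untested; 'MyTool.exe' is a placeholder, and the real compiler settings such as include paths and defines would still have to be passed to cl.exe):

    <!-- Runs before compilation: preprocess each source file with /P
         (which writes <name>.i), then run the custom tool over the
         result. -->
    <Target Name="CustomPreprocess" BeforeTargets="ClCompile">
      <Exec Command="cl.exe /P &quot;%(ClCompile.Identity)&quot;" />
      <Exec Command="MyTool.exe &quot;%(ClCompile.Filename).i&quot;" />
      <!-- Finally, the ClCompile items would need to be re-pointed at
           the processed files; the details depend on your layout. -->
    </Target>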
If you need any more info, just reply in the comments and I'll try to add some when I get the time (this question is rather old, so I'm not sure if it's worthwhile investing more time into this answer).
I'm looking for a good, efficient method for scanning a directory structure for changed files on Windows XP and later. Something like how git does it is exactly what I'm looking for: running git status displays all modified files, all new (untracked) files and all deleted files very quickly, which is exactly what I would like to do.
I have a basic model up and running which performs an initial scan and stores all filenames, sizes, dates and attributes.
On a subsequent scan it checks whether the size, attributes or date have changed and, if so, marks the file as changed.
My issue now comes in detecting moved and deleted files. Is there a tried and tested method for this sort of thing? I'm struggling to come up with a good method.
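For illustration, here is a trimmed-down sketch of the kind of snapshot comparison I have in mind (the FileInfo fields and the size/timestamp move-matching heuristic are just placeholders):

    // Sketch: diff two snapshots of the tree. A path that vanished but
    // whose size and timestamp match a newly appeared path is treated
    // as a move candidate rather than a delete + create.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct FileInfo {
        uint64_t size;
        uint64_t lastWriteTime; // e.g. FILETIME packed into 64 bits
        uint32_t attributes;
    };

    using Snapshot = std::map<std::wstring, FileInfo>; // path -> info

    void DiffSnapshots(const Snapshot& oldSnap, const Snapshot& newSnap)
    {
        // Paths present before but gone now: deleted, or moved away.
        std::vector<std::pair<std::wstring, FileInfo>> vanished;
        for (const auto& entry : oldSnap)
            if (newSnap.find(entry.first) == newSnap.end())
                vanished.push_back(entry);

        for (const auto& entry : newSnap) {
            auto old = oldSnap.find(entry.first);
            if (old != oldSnap.end()) {
                if (old->second.size != entry.second.size ||
                    old->second.lastWriteTime != entry.second.lastWriteTime ||
                    old->second.attributes != entry.second.attributes)
                    std::wcout << L"modified: " << entry.first << L'\n';
                continue;
            }
            // New path: does a vanished file match by size and timestamp?
            auto match = std::find_if(vanished.begin(), vanished.end(),
                [&](const std::pair<std::wstring, FileInfo>& v) {
                    return v.second.size == entry.second.size &&
                           v.second.lastWriteTime == entry.second.lastWriteTime;
                });
            if (match != vanished.end()) {
                std::wcout << L"moved: " << match->first
                           << L" -> " << entry.first << L'\n';
                vanished.erase(match);
            } else {
                std::wcout << L"created: " << entry.first << L'\n';
            }
        }
        for (const auto& entry : vanished) // whatever is left was deleted
            std::wcout << L"deleted: " << entry.first << L'\n';
    }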
I should mention that it will eventually use ReadDirectoryChangesW to monitor files and alert the user when something changes so a full scan is really a last resort after the initial scan.
Thanks,
J
EDIT: I think I may have described the problem badly. The issue I'm facing is not so much detecting the changes - I have ReadDirectoryChangesW() using IOCP on multiple threads to detect when a change happens - the issue is more what to do with the information. For example, a moved file is reported as a delete followed by a create, and a rename comes in 2 parts: old name, followed by new name. So what I'm asking is how to differentiate between a delete that is part of a move and an actual delete. I'm guessing buffering the changes and processing them in batches would be an option, but it feels messy.
In native code, FileSystemWatcher is replaced by ReadDirectoryChangesW. Using this properly is not simple; there is a good baseline to build off here.
I have used this code in a previous job and it worked pretty well. The Win32 API itself (and FileSystemWatcher) are prone to problems that are described in the docs and also discussed in various places online, but the impact of those will depend on your use case.
EDIT: the exact change is indicated in the FILE_NOTIFY_INFORMATION structure that you get back - adds, removals, and rename data including the old and new names.
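A minimal sketch of walking that structure and pairing the two halves of a rename (assuming 'buffer' was filled by a completed ReadDirectoryChangesW call; error handling omitted):

    #include <windows.h>
    #include <iostream>
    #include <string>

    void ProcessNotifications(const BYTE* buffer)
    {
        const FILE_NOTIFY_INFORMATION* info =
            reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(buffer);
        std::wstring renamedFrom; // holds OLD_NAME until NEW_NAME arrives

        for (;;) {
            // FileName is not null-terminated; FileNameLength is in bytes.
            std::wstring name(info->FileName,
                              info->FileNameLength / sizeof(WCHAR));
            switch (info->Action) {
            case FILE_ACTION_ADDED:
                std::wcout << L"created: " << name << L'\n'; break;
            case FILE_ACTION_REMOVED:
                std::wcout << L"deleted: " << name << L'\n'; break;
            case FILE_ACTION_MODIFIED:
                std::wcout << L"modified: " << name << L'\n'; break;
            case FILE_ACTION_RENAMED_OLD_NAME:
                renamedFrom = name; break; // wait for matching NEW_NAME
            case FILE_ACTION_RENAMED_NEW_NAME:
                std::wcout << L"renamed: " << renamedFrom
                           << L" -> " << name << L'\n';
                break;
            }
            if (info->NextEntryOffset == 0)
                break;
            info = reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(
                reinterpret_cast<const BYTE*>(info) + info->NextEntryOffset);
        }
    }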
I voted Liviu M. up. However, another option, if you don't want to use the .NET framework for some reason, would be the basic Win32 API call FindFirstChangeNotification.
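A minimal usage sketch (the path is a placeholder; note this API only tells you that something changed, not what, so you still need to rescan):

    #include <windows.h>
    #include <iostream>

    int main()
    {
        HANDLE h = FindFirstChangeNotification(
            L"C:\\watched",                    // placeholder path
            TRUE,                              // watch the whole subtree
            FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        while (WaitForSingleObject(h, INFINITE) == WAIT_OBJECT_0) {
            std::wcout << L"something changed -- rescan the tree\n";
            if (!FindNextChangeNotification(h)) // re-arm for next change
                break;
        }
        FindCloseChangeNotification(h);
        return 0;
    }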
You can use USN journaling if you are up to it; that is pretty low-level (NTFS-level) stuff.
Here you can find detailed information and source code. It is written in C#, but most of it is P/Invoke calls into C/C++ functions.
I frequently encounter the following debugging scenario:
A tester provides repro steps for a bug. To find out where the problem is, I play with these steps to get the minimum necessary repro. Sometimes I'm lucky and find that a minor change to the steps makes the problem go away.
The job then becomes finding the difference in code workflow between the two sets of repro steps. This is tedious and painful, especially on a large code base, where a repro goes through a lot of code and involves lots of state changes you are not familiar with.
So I was wondering whether there are any tools available to compare "code workflow". Having learned the "wt" command in WinDbg, I thought it might be possible: for example, I could run "wt" on some outermost function with the two different repro steps and then compare the outputs. It should then be easy to find where the code flow starts to diverge.
But the problem with WinDbg's "wt" is that it is quite slow (maybe I should use a log file instead of outputting to the screen) and not very user-friendly (compared with the Visual Studio debugger)... So I want to ask: are there any existing tools for this, or is it possible (and how difficult would it be) to develop a plug-in for the Visual Studio debugger to support this functionality?
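For reference, what I have in mind with "wt" would be something like this (the file name and depth limit are just examples):

    $$ Log a wt trace to a file, then repeat for the second repro
    $$ and diff the two logs:
    .logopen C:\traces\repro1.txt
    wt -l 5
    .logclose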
Thanks
I'd run it under a profiler in "coverage" mode, then use diff on the results to see which parts of the code were executed in one run but not the other.
Sorry, I don't know of a tool that can do what you want, but even if it existed, it doesn't sound like the quickest approach to finding out where the lower-layer code is failing.
I would recommend instrumenting your layer's code with high-level logs so you can tell which module fails, stalls, etc. In debug builds, your logger can write to a file, to the debug output window, etc.
In general, failing fast and using exceptions are good ways to find out easily where things go bad.
Doing something after the fact is not going to cut it, since your problem is reproducing it.
The issue with bugs is seldom some internal wackiness, but usually what the user is actually doing. If you log all the commands the user enters, they can simply send you the log. You can substitute button clicks, mouse selections, etc. This has some cost, but certainly much less than something that keeps track of every method visited.
I am assuming that if you have a large application, you have good logging or tracing in place.
I work on a large server product with over 40 processes and over one million lines of code. Most of the time, the error in the trace file is enough to identify the location of the problem. However, sometimes the error I see in the trace file was caused by some earlier code, and the reason can be hard to spot. Then I use a comparative debugging technique:
1. Reproduce the first scenario and copy the trace to a new file (if the application is multi-threaded, make sure you keep only the trace for the thread that does the work).
2. Reproduce the second scenario and copy the trace to a new file.
3. Remove the timestamps from the log files (I use awk or sed for this; see the example below).
4. Compare the log files with WinMerge or similar to see where and how they diverge.
This technique can be a little time-consuming, but it is much quicker than stepping through thousands of lines in the debugger.
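For example, stripping a leading timestamp can be a one-liner per file (the exact pattern depends on your log format), after which the two cleaned files go into WinMerge:

    sed 's/^[0-9][0-9:,. -]*//' scenario1.log > scenario1.txt
    sed 's/^[0-9][0-9:,. -]*//' scenario2.log > scenario2.txt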
Another useful technique is producing UML sequence diagrams from trace files. For this you need the function entry and exit positions logged consistently. Then write a small script to parse your trace files and use sequence.jar to produce the UML diagrams as PNG files. This is a great way to understand the logic of code you haven't touched in a while. I wrapped a small awk script in a batch file: I just provide the trace file and the line number to start from, and it untangles the threads, generates the input text for sequence.jar, and then runs it to create the UML diagram.