Listing all failed targets in a build

I'm attempting to build a large legacy code base that has trouble building under a new toolchain. To speed up fixing problems, I run
make -k
to build everything that can be built, so that I can later focus on the unbuildable parts. But even then, a single make run takes a minute to figure out the next problem to work on (this code base uses a tangled mess of Makefiles which take ages to parse).
Is there any way to list all targets that failed during a single make -k run?

I'd redirect the make -k output to a file and then search it for error patterns. I use vim, and I typically look for these:
make:\ \*\*\*
\*\*\*\ \[
A (custom) log parser can be written as well, if needed.
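For instance, a minimal parser along those lines (a hypothetical sketch, written for GNU make; the log file argument is whatever you redirected the output to):

#!/usr/bin/env python
# Sketch: list the targets that failed during a `make -k` run whose
# output was saved, e.g. via `make -k > build.log 2>&1`.
import re
import sys

# GNU make failure lines look like either of:
#   make[2]: *** [subdir/foo.o] Error 1                 (older make)
#   make[2]: *** [Makefile:12: subdir/foo.o] Error 1    (newer make)
FAIL = re.compile(r'\*\*\* \[(?:[^:\]]+:\d+: )?(?P<target>[^\]]+)\] Error')

failed = set()
with open(sys.argv[1]) as log:
    for line in log:
        match = FAIL.search(line)
        if match:
            failed.add(match.group('target'))

for target in sorted(failed):
    print(target)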

When debugging, it is also worth watching out for synchronization irregularities, where part of a stderr message may be missing!

What could be the simplest way to incorporate Windows WPP Software Tracing into SCons builds?

I ask my question in such a specific way because I am afraid that a more generic form could lead to excessively theoretical discussions of how things should best be done (like a question about pre- and post-process actions in SCons).
Incorporating WPP actually requires executing an additional command (or commands) before a file is compiled, and only when the build process has decided the file needs to be compiled anyway, without any regard to WPP.
I would remark that this is easily achieved with a few lines of definitions in a shared Visual Studio property page file, making it work for multiple files in multiple projects, folders, etc., in a way that is absolutely transparent to developers.
Thus I am wondering whether this can be done in a similarly simple way with SCons. I do not have any deep knowledge of either the SCons or MSBuild frameworks; I work with them for simple practical purposes, so I would truly appreciate practical and useful advice.
Here's what I'd suggest.
SCons builds command lines from Environment() variables.
For example, the compile command line for building a C++ shared object is stored in SHCXXCOM (what is displayed to the user when the command is run defaults to the command itself, but can be changed by setting SHCXXCOMSTR).
Back to the problem at hand.
Assuming you have a limited number of build steps you want to wrap, you can do something like:
env['SHCXXCOM'] = [ 'WPP PRE COMMAND LINE', env['SHCXXCOM'], 'WPP POST COMMAND LINE']
You'll have to figure out which variables you need to do this with, but take a look at the manpage to figure that out.
https://scons.org/doc/production/HTML/scons-man.html
p.s. I've not tried this, but in theory it should work. Let us know if not.
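Spelled out, that (untested) idea might look like this in an SConstruct; tracewpp.exe, its flags, and WPP_CFGDIR are placeholders for whatever your WPP setup actually requires:

env = Environment()
# Run the WPP pre-step, then the original compile command. SCons only
# invokes this when it has already decided the file must be rebuilt,
# which matches the "only if the file is compiled anyway" requirement.
env['SHCXXCOM'] = [
    'tracewpp.exe -cfgdir:$WPP_CFGDIR $SOURCE',  # placeholder pre-command
    env['SHCXXCOM'],                             # original compile line
]
# If the list form does not substitute cleanly, chaining the commands
# in a single string is a possible fallback:
#   env['SHCXXCOM'] = 'tracewpp.exe $SOURCE && ' + env['SHCXXCOM']
env.SharedLibrary('example', ['example.cpp'])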

Python Sub-Process Coverage

Situation:
I'm attempting to get coverage reports for a project that uses both C++ and Python. I'm using LCOV/GCOV for the C++, and attempting to use Coverage.py for the Python side. The only issue is, most of the Python code being used is simply utility functions called one function at a time: no initialization, no real life-cycle or exit. So there is no real way to use the API to start/stop/save, or to use the coverage command line to measure.
With this, I thought the easiest way to accomplish it would be the sitecustomize.py method as outlined here. I have gotten that to work, and it measures all configured Python code as expected. Now I'm looking at how to accomplish this with compiled Python code (.pyc).
I can get it to work if I keep the source (.py) and the (.pyc) in the same directory when running and reporting. However, I'm looking for a way to RUN the files and generate the measurement data, then at a later time point it at the actual source files and run the actual reports. Ideally I wouldn't need the source (.py) files on the target at all, but I haven't found a way to accomplish this.
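For reference, the sitecustomize.py hook from that method is tiny (it only takes effect when the COVERAGE_PROCESS_START environment variable points at a coverage config file):

# sitecustomize.py -- placed on the target Python's sys.path.
# Starts measurement in every Python process that is launched; does
# nothing unless COVERAGE_PROCESS_START names a config file.
import coverage
coverage.process_startup()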
Objective:
In the end I want to be able to compile the Python files (.pyc), install them on the target, and run coverage as stated above. That will generate coverage data files; I then pull those files to my host machine, which houses the source (.py) files, and do the actual coverage reporting.
Is this possible currently?
[Edit] Thanks to Ned's advice, I looked into the [paths] usage, and it worked exactly how I needed it to.
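For anyone with the same problem: the fix boils down to a [paths] section in .coveragerc along these lines, where the directory names are placeholders for your own layout. The first entry is the canonical (host-side) source location; paths recorded on the target are mapped back onto it at reporting time:

[run]
data_file = .coverage

[paths]
source =
    src/mypkg
    /opt/app/lib/mypkg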

LNK2005 error because I have two C++ tasks in the same project

I am working on a uni project with multiple tasks, and I am trying to debug a simple program; however, I am getting the error "LNK2005 main already defined in task 1".
I realise this is because I have used "int main()" for both tasks (I have code for task 1 and code for task 2). I do not want to have to create a new project folder for every task. Is there a way around this?
While it is generally advisable to have a project for each executable you build, you can get away with a single project for multiple executables if you manage to get rid of the undesired duplicate mains. You have quite a few options available:
Have only one main. Have it test its own executable name, and take specific action depending on the name it finds. In the post-build rules, set up rules for creating each (specifically named) executable from your base executable. This allows you to build all your executables at the same time in a fairly efficient manner.
Have multiple mains, but hide them using #ifdefs. Add a #define to the project settings or just somewhere above main(), and compile as needed. This is ok if you don't want to build all your executables all the time.
Just bite the bullet and set up multiple projects.
Whatever you do, consider that being able to build everything you have in a single step is considered a highly desirable trait of build systems, and is usually high on the list of features a properly engineered development process should have.

Exclude all messages in PC-Lint

I am using PC-Lint for my C++ project.
Is there a way to switch off all error and warning messages by default, so I can then reintroduce the required messages explicitly?
I have read the chapter of the PC-Lint manual entitled "Error Inhibition Options", and the best I could find was setting the warning level with -w0 (no messages except for fatal errors).
Yes, it is possible: you can simply use -e* or -w0. However, the manual itself warns (Chapter 16, Living with Lint):
DO NOT simply suppress all warnings with something like: -e* or -w0 as this can disguise hard errors and make subsequent diagnosis very difficult.
So yes, you can use it if your code is basically clean already and you want to review recent changes for a certain set of messages. But if you want to start cleaning your code and are swamped with messages because of the default warning level -w3, I suggest starting with -w1 and resolving all issues there; most of the warnings/errors given at level one indicate problems with finding all header files, having all implicit macros set properly, and/or mimicking your normal compiler in a sufficiently precise way.
As always, I hesitate to advertise my own work, but if you want, take a look at my "How to wield PC Lint" PDF, where I have documented detailed instructions for the initial deployment of PC Lint and for tackling the many warnings/errors/infos/notes you may be buried under.
When I started introducing PC-Lint to a new project I did the following:
As suggested by Johan Bezem, ran a -w1 level check over the whole thing. This doesn't actually find any new problems, but checks that your program is syntactically valid and finds any configuration issues. Nothing major, assuming your project compiles already.
Run the test again with -w2 level. This found 53,000 issues, which was a bit much to tackle in one go.
Pick a typical bad file, then suppress any errors that seem irrelevant or non-urgent (e.g. error 525: Warning -- Negative indentation from line xxx) by adding -e525 to the command line or config file, until you find one that seems serious. In my case this was error 442: Warning -- for clause irregularity: testing direction inconsistent with increment direction, i.e. a 'for' loop that looked like it should be counting up was actually counting down.
Reset the test level back to -w1 but add the critical problem back in by number (-w1 +e442 in this case). Re-run it over the whole project, then fix all instances of that problem.
Back to stage 2 and try again.
This combination of fixing actual problems and suppressing likely false alarms soon gets your numbers under control.
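Accumulated in a project options file, the result of a few such rounds might look like this (the message numbers are just the ones from the example above):

// project.lnt -- sketch of the options this process accumulates
-w1        // quiet baseline while the cleanup is in progress
+e442      // always report for-clause direction inconsistencies
-e525      // negative indentation: suppressed as non-urgent for now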
So that everything gets better over time, we also implemented a script that does a thorough (full -w2 or -w3) check on any files that are created or modified.
I also found the tool LintProject very helpful, as it can do an entire Visual Studio solution in one go, producing tables with error counts and worst offenders!

C++ vim IDE. Things you'd need from it

I am going to create an extensible C++ IDE plugin for Vim. It is not a problem to make one that satisfies my own needs.
The plugin is going to work with workspaces, projects and their dependencies.
This is for Unix-like systems with GCC as the C++ compiler.
So my question is: what are the most important things you'd need from an IDE? Please take into account that this is Vim, where almost everything (almost!) is possible.
Several questions:
How often do you manage different workspaces with projects inside them, and the relationships between them? What are the most annoying things in this process?
Is it necessary to recreate the "project" from the Makefile?
Thanks.
Reason to create this plugin:
With a bunch of existing plugins, plus self-written ones, we can simulate most of these things. That is fine when we work on one big, never-ending project.
It is good when we already have a makefile or jam file; bad when we have to create our own, mostly by copying and pasting existing ones.
All ctags- and cscope-related things have to know the list of the real project files, and we create such lists. Something like <project#get_list_of_files()> and many similar calls could make a good project API for cooperating with existing and future plugins.
Cooperating with existing makefiles can help to find out the list of real project files and the executable name.
With a plugin system inside the plugin, there can be different project templates.
Above are some of the reasons why I will start the job. I'd like to hear yours.
There are multiple problems. Most of them are already solved by independent and generic plugins.
Regarding the definition of what a project is.
Given a set of files in the same directory, each file can be the only file of its own project -- I always have a tests/ directory where I host pet projects, or where I test the behaviour of the compiler. Conversely, the files from a whole set of directories can be part of one and the same very big project.
In the end, what really defines a project is a (leaf) "makefile" -- and why restrict ourselves to makefiles: what about scons, autotools, ant, (b)jam, aap? And BTW, Sun Makefiles or GNU Makefiles?
Moreover, I don't see any point in having vim know the exact files in the current project. And even so, the well-known project.vim plugin already does that job. Personally, I use a local_vimrc plugin (I'm maintaining one, and I've seen two others on SourceForge). With this plugin, I just have to drop a _vimrc_local.vim file in a directory, and what is defined in it (:mappings, :functions, variables, :commands, :settings, ...) will apply to each file under that directory -- I work on a big project having a dozen subcomponents; each component lives in its own directory and has its own makefile (not even named Makefile, nor named after the directory).
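As an illustration, a _vimrc_local.vim dropped at a component's root can be as small as this (the settings themselves are arbitrary examples):

" _vimrc_local.vim -- applied to every file opened under this directory
setlocal shiftwidth=4 expandtab
" build this component, whatever its makefile happens to be called
nnoremap <buffer> <F7> :make -f build.mk<CR>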
Regarding C++ code understanding
Every time we want to do something complex (refactorings like rename-function, rename-variable, generate-switch-from-current-variable-which-is-an-enum, ...), we need vim to have an understanding of C++. Most of the existing plugins rely on ctags. Unfortunately, ctags comprehension of C++ is quite limited -- I have already written a few advanced things, but I'm often stopped by the poor information provided by ctags. cscope is no better. Eventually, I think we will have to integrate an advanced tool like elsa/pork/ionk/deshydrata/....
NB: That's where, now, I concentrate most of my efforts.
Regarding Doxygen
I don't know how difficult it is to jump to the doxygen documentation associated with the current token. The first difficulty is understanding what the cursor is on (I guess omnicppcomplete has already done a lot of work in this direction). The second difficulty will be understanding how doxygen generates the page name for each symbol in the code.
Opening vim at the right line of code from a doxygen page should be simple with a greasemonkey plugin.
Regarding the debugger
There is the pyclewn project for those who run vim under Linux with gdb as the debugger. Unfortunately, it does not support other debuggers like dbx.
Responses to other requirements:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
My BuildToolsWrapper plugin has a g:BTW_run_parameters option (easily overridden with the project/local_vimrc solutions). Adding a mapping that asks for the arguments to use is really simple (see :h inputdialog()).
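A bare-bones version of such a mapping might look like this; the program name and the variable holding the last arguments are made up for the example (remembering the last 20 sets rather than just one is left out for brevity):

" Sketch: prompt for arguments, pre-filled with the last ones used.
let g:last_run_args = get(g:, 'last_run_args', '')
function! s:RunWithArgs() abort
  let g:last_run_args = inputdialog('Arguments: ', g:last_run_args)
  execute '!./myprog ' . g:last_run_args
endfunction
nnoremap <F5> :call <SID>RunWithArgs()<CR>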
work with source control system
There already exist several plugins addressing this issue. This has nothing to do with C++, and it must not be addressed by a C++ suite.
debugger
source code navigation tools (now I am using http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
compile a lib/project/single source file from the IDE
navigation by files in project
work with source control system
easy access to file change history
rename file/variable/method functions
easy access to c++ help
easy change project settings (Makefiles, jam, etc)
fast autocompletion for paths/variables/methods/parameters
smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
highlighting indentation that breaks the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
reformatting a selected block according to the convention
Things I'd like in an IDE that the ones I use don't provide:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
A "Tools" menu that is configurable on a per-project basis
Ability to rejig the keyboard mappings for every possible command.
Ability to produce lists of project configurations in text form
Intelligent floating (not docked) windows for debugger etc. that pop up only when I need them, stay on top and then disappear when no longer needed.
Built-in code metrics analysis so I get a list of the most complex functions in the project and can click on them to jump to the code
Built-in support for Doxygen or similar, so I can click in a Doxygen document and go directly to code. Should also navigate in reverse, from code to Doxygen.
No doubt someone will now say Eclipse can do this or that, but it's too slow and bloated for me.
Adding to Neil's answer:
Integration with gdb as in Emacs. I know of clewn, but I don't like that I have to restart vim to restart the debugger. With clewn, vim is integrated into the debugger, but not the other way around.
Not sure if you are developing on Windows, but if you are, I suggest you check out Viemu. It is a pretty good Vim extension for Visual Studio. I really like Visual Studio as an IDE (although I still think VC6 is hard to beat), so a Vim extension for VS was perfect for me. Features that I would prefer worked better in a Vim IDE are:
The macro recording is a bit error-prone, especially with indentation. I find I can easily and often record macros in Vim while I am editing code (e.g. taking an enum definition from a header and cranking out a corresponding switch statement), but found that Viemu is a bit flaky in that department.
The Vim code completion picks up words in the current buffer, whereas Viemu hooks into the VS code completion machinery. This means that if I have just created a method name and I want to Ctrl-] to auto-complete, Vim will pick it up, but Viemu won't.
For me, it's just down to the necessities:
nice integration with ctags, so you can do jump to definition
intelligent completion, that also give you the function prototype
easy way to switch between code and headers
interactive debugging with breakpoints
maybe folding
extra bonus points for refactoring tools like rename or extract method
I'd say stay away from defining projects - just treat the entire directory branch as part of the "project" and let users have a settings file to override that default.
99% of the difference in speed I see between IDE and vim users is code lookup and navigation. You need to be able to grep your source tree for a phrase (or intelligently look for the right symbol using ctags), show all the hits, and switch to that file in two or three keystrokes.
All the other crap like repository navigation or interactive debugging is nice, but there are other ways to solve those problems. I'd say even drop the interactive debugging. Just focus on what makes IDEs good editors - having a "big picture" view of your project instead of a single file.
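Stock vim already gets close to this with the quickfix list; a sketch (the mapping choice is arbitrary, and 'grepprg' is assumed to be the Unix default):

" grep the tree for the word under the cursor and list all the hits;
" <Enter> on a hit jumps straight to that file and line.
nnoremap <leader>g :grep! -R <cword> .<CR>:copen<CR>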
In fact, are there any plugins for vim that already achieve this?