Can I speed up IWYU with CCache?

I'm developing a project which uses CMake for build system generation, CCache for speeding up the build (set via CMAKE_CXX_COMPILER_LAUNCHER), and Include What You Use for tidying up the required headers (set via CMAKE_CXX_INCLUDE_WHAT_YOU_USE).
IWYU takes quite some time to analyze all the project files, which makes the build unnecessarily long.
Is there a way to cache its results using CCache?
IWYU returns exit code 1 if there were any suggestions and 2 if there were none, so I tried creating a wrapper that returns 0 instead of 2, since CCache requires a zero exit code before it will cache a result. That didn't help much: now CCache complains that no output files were created. Is this the right way to go?
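For reference, the wrapper is essentially just an exit-code remap; a minimal sketch (the script name is made up, and the iwyu binary name is whatever your installation provides):
#!/bin/sh
# iwyu-ccache-wrapper.sh (hypothetical): run include-what-you-use and map its
# "no suggestions" exit code (2) to 0, since CCache only caches runs that exit 0.
include-what-you-use "$@"
status=$?
if [ "$status" -eq 2 ]; then
    exit 0
fi
exit "$status"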

Related

Buildroot does not re-build modified files

I am using an external-tree build for Linux kernel driver development. I created a Config.in and a kernel_driver.mk, and everything seemed to work fine while I still had compilation errors.
I iterated over and over with "make driver-dirclean / make driver".
However, I have the impression (and by now I have verified it) that once I could build successfully once, BR would NOT rebuild the files again, even if I purposely introduce errors into those files!
The output also shows:
CC [M] /home/adva/work/sfpnid-drv/buildroot/output/build/hal-1.0.0/kernel/helper.o
LD [M] /home/adva/work/sfpnid-drv/buildroot/output/build/hal-1.0.0/kernel/spidev.o
Building modules, stage 2.
MODPOST 1 modules
CC [M] /home/adva/work/sfpnid-drv/buildroot/output/build/hal-1.0.0/kernel/spidev.mod.o
LD [M] /home/adva/work/sfpnid-drv/buildroot/output/build/hal-1.0.0/kernel/spidev.ko
The compilation never fails where it should!
Is there a cache issue somehow? An rsync effect?
Thanks,
Jacques
I finally tracked down the source of the problem.
As I am using an off-tree build, I had a "make clean" issue. I realised that the dependencies are resolved in the BR/output/build/.. environment, while my files are edited out of tree.
So what happens is: once it has built successfully one time, the .o files are present in BR. The dependencies are satisfied, so a "make driver-rebuild" just re-links the old .o files.
I need to do a "make driver-dirclean" so that the new files (with the errors) are rsynced.
That is at least the way I understood it.
So for me this is kind of solved. The fact is, BR is really an integration tool and less of a development environment, so this behavior is understandable.
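To spell the workflow out (a sketch using the package name from above; the comments reflect my understanding rather than Buildroot documentation):
# Edit the driver sources off-tree, then force Buildroot to re-sync and rebuild:
make driver-dirclean   # wipe the package's directory under output/build
make driver            # re-extract/rsync the sources and build from scratch
# A plain "make driver-rebuild" is not enough: it reuses the .o files already
# sitting in output/build and just re-links them, so new errors never show up.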
Jacques

GNU make - accelerate non-parallel makefile without modification

I have a project consisting of a set of makefiles that CANNOT be run with make --jobs=N, because the dependencies are not specified tightly enough for make to execute the recipes in the correct order (i.e. I get race conditions).
I am currently using Huddle, by Electric-Cloud.com, and it does exactly what I need: it parses the makefile and then executes the jobs in parallel and accounts for the unspecified dependencies.
Question: is there a free or free-er thing that does this?
Yes I know I could re-write the makefiles but project management says "no way".
UPDATE #1
I understand that I'll have to do some work to get functionality similar to Electric-Cloud's functionality.
I know that Electric-Cloud parses the makefile(s) to find the dependencies so wouldn't the same thing be accomplished using makedepend?
I'm thinking along these lines (sketched below):
Run makedepend on existing makefiles
Feed in the output using include <makedepend.output>
make all --jobs=64
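Concretely, those steps would look something like this (only a sketch; the output file name is made up and $CFLAGS stands for whatever include flags the project needs):
touch makedepend.output                                # makedepend appends to an existing file
makedepend -f makedepend.output -- $CFLAGS -- src/*.c  # scan the #include lines
# then add "include makedepend.output" to the existing makefiles and run:
make all --jobs=64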
UPDATE 2
Turns out makedepend is specific to C/C++: it merely runs the pre-processor on source files and parses any #include statements; not what I need.
I need what this guy is asking for:
Build a makefile dependency / inheritance tree
UPDATE 3
The makefile "dependency graph generator" actually already exists
http://plindenbaum.blogspot.com/2012/11/visualizing-dependencies-of-makefile.html?m=1
but that's not going to help me.
Many of my recipes create directories which are used by other targets' recipes, effectively making them implicit prerequisites.
The dependency graph tool at the above URL works by parsing the build log's statements, but those statements don't indicate the implicit dependencies.
Even if I try to run my makefile with --dry-run, the build fails, because some of the recipes that aren't executed (since it's a dry run) create directories that other invocations of make need simply to 'pretend execute' a recipe.
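To illustrate what such an implicit prerequisite looks like, here is a made-up fragment (not from my makefiles; recipe lines are tab-indented):
prepare:
	mkdir -p obj                    # creates a directory another recipe relies on
obj/foo.o: foo.c                        # nothing says obj/ must exist first, so with
	$(CC) -c foo.c -o obj/foo.o     # --jobs this can run before 'prepare' finishes
# The explicit form would be an order-only prerequisite (obj/foo.o: foo.c | obj),
# but that is exactly the kind of rewrite that is off the table here.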
UPDATE 4
Electric-Cloud has made Huddle (4 local cores, non-clustered) free for anyone, forever.
Furthermore, it outputs an .xml file that lists each job's dependencies, so I can use it to fix my makefiles so they're compatible with the --jobs option.
I actually don't know about these tools, but can't you provide them with a super-makefile under your control that clarifies the inner dependencies of the various targets?
You probably just have to add a level of indirection for these (imported?) projects' directory structure and another Makefile.
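Something along those lines, purely as a sketch (target names, directories, and the dependency edges are invented; recipe lines tab-indented):
# super.mk (hypothetical): records the ordering the original makefiles leave
# implicit and delegates to them. Each sub-make is pinned to -j1 because the
# inner makefiles are not parallel-safe; only independent sub-projects run in
# parallel when this wrapper is invoked with --jobs.
.PHONY: all liba libb app
all: app
liba:
	$(MAKE) -j1 -C src/liba
libb:
	$(MAKE) -j1 -C src/libb
app: liba libb
	$(MAKE) -j1 -C src/app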

libcurl preferred build method

I'm building libcurl to use with a project I'm working on, and after reading a little on how to build it properly I've found two ways to do it.
Method 1:
(edited Makefile in root directory to change "VC=vc6" to "VC=vc10")
C:\dev\curl-7.25.0>set ZLIB_PATH=C:\dev\zlib-1.2.6
C:\dev\curl-7.25.0>nmake vc-zlib
Method 2:
(Put necessary files for zlib in ../deps & edit Makefile to make USE_IDN=no actually work)
C:\dev\curl-7.25.0\winbuild>nmake /f Makefile.vc mode=static VC=10 WITH_ZLIB=static DEBUG=no USE_IDN=no WITH_DEVEL=../deps
Both work with no errors.
The scary part is that the resulting libcurl.lib files are different sizes.
So, are there any libcurl gurus out there who can tell me the difference between these two build methods, and which one is recommended?
What I've figured out so far is that method 1 requires you to link your application with Ws2_32.lib and Wldap32.lib, whereas method 2 doesn't (probably the reason for the additional size).
Also method 1 has a slightly smaller output executable.
I'm really curious if there are any other differences though.
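For reference, with method 1 the consuming application's build line ends up looking roughly like this (paths and the source file name are made up):
C:\dev\myapp>cl /MD main.c /I C:\dev\curl-7.25.0\include /link /LIBPATH:C:\dev\curl-7.25.0\lib libcurl.lib Ws2_32.lib Wldap32.lib
If libcurl was built as a static library, the application may also need to be compiled with /DCURL_STATICLIB so the headers don't declare the curl functions as dllimport.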

Waf throwing errors on C++ builds

Our project contains a lot of C++ sources; up until now we were using make to build everything, but this takes ages. So I stumbled upon waf, which works quite well and speeds up the build a lot. However, every time I do a full build I end up with a couple of build errors that make no sense. If I then do an incremental build, most of the sources that could not be built the first time around are built now, while some others still fail. On another incremental build I finally get a successful build.
I have tried building the separate libraries in separate steps, just in case any dependent libraries were being built in parallel, but the errors still appear.
EDIT: The errors I keep getting do not seem to have anything to do with my code, e.g.
Build failed
-> task failed (exit status -1):
{task 10777520: c constr_SET.c -> constr_SET.c.1.o}
After another "waf build" I do not get this error anymore.
EDIT2: The build step for my libraries looks like this:
def build(bld):
    bld.shlib(source="foo.cpp bar.cpp foobar.cpp constr_SET.c",
              target="foobar",
              includes="../ifinc",
              name="foobar",
              use="MAIN RW HEADERS",
              install_path="lib/")
MAIN, RW, HEADERS are just some flags and external libraries we use.
Has anyone seen similar behaviour on their system? Or even a solution?
I'm suspecting multiple targets are building the same required object in parallel. Try
export JOBS=1
or
waf --jobs 1

Determine list of source files (*.[ch]) for a complex build with scons

Suppose you have a complex source tree for a C project, lots of directories with lots of files. The scons build supports multiple targets (i386, sparc, powerpc) and multiple variants (debug, release). There's an sconstruct at the root (referencing various sconscripts) that does the right thing for all of these, when called with arguments specifying target and variant, e.g. scons target=i386 variant=release.
Is there an easy way to determine which source files (*.c and *.h) each of these builds will use (they are all slightly different)? My theory is that scons needs to compute this file set anyway to know which files to compile and when to recompile. Can it provide this information?
What I do not want to do:
Log a verbose build and postprocess it (that probably wouldn't reveal the *.h files anyway)
find . -name '*.[ch]' also prints unwanted files for unit testing and other cruft, and is not target specific
Ideally I would like to do scons target=i386 variant=release printfileset and see the proper list of *.[ch] files. This list could then serve as input for further source file munging tools like doxygen.
There are a few questions all squashed together here:
You can prevent SCons from running the compiler using the --dry-run flag
You can get a dependency tree from SCons by using --debug=tree, or --tree=all flags, depending on which version you are running
Given a list of files, one per line, you can use grep to filter out only the things that are interesting for you.
When you put all of that together you end up with something like:
scons target=i386 variant=release printfileset -n --tree=all | egrep -i '^ .*\.(c|h|cpp|cxx|hpp|inl)$'