How to deliberately slow down compilation? - C++

There are a lot of questions asking how to speed up compilation of C++ code. I need to do the opposite.
I'm working with software that monitors compiler invocations in order to do static code analysis. But if the compiler process exits too quickly, the monitoring software can miss it, so I need to slow compilation down. I understand this is a terrible solution and I hope it will be temporary.
I came up with two solutions:
1. Disable parallel builds and enable preprocessor and compiler listing generation. This works but requires a lot of mouse clicking.
2. Use a compiler option to force inclusion of a special header file that somehow slows compilation down.
Unfortunately, I couldn't come up with something that is simple to write yet hard to compile. Using a lot of #warning directives seems to work, but it obviously clutters the output significantly.
I'm using Keil with the armcc compiler, so I can use most of C++11, but the maximum template recursion depth is just 63.
Preferably, this should not add any overhead to binary size or running time.
UPD: I'll try to clarify this a bit. I know this is a horrible idea and that the problem should be solved differently. I will try to solve it differently, but I also want to explore this possibility.

Maybe this solution will be slow enough =) - it is along the lines of what #NathanOliver proposed.
It's a compile-time sine table that I use. It requires extra space, but you can tune it a little (the table size and the sine accuracy are template parameters of the "staticSinus" function - hopefully you'll find a combination that works for you).
https://godbolt.org/z/DYZDF5
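Below is a minimal sketch of the same idea, not the exact code behind the Godbolt link: the names, the table size of 48 and the 10 Taylor terms are illustrative, and you would crank them up until compilation is slow enough.

    #include <array>
    #include <cstddef>

    constexpr double kPi = 3.141592653589793;

    // One Taylor term of sin(x), written recursively to satisfy C++11's
    // single-return-statement constexpr rules.
    constexpr double sine_term(double x, int n) {
        return n == 0 ? x
                      : sine_term(x, n - 1) * (-x * x) / ((2.0 * n) * (2.0 * n + 1.0));
    }

    // Sum of the first `terms` Taylor terms of sin(x).
    constexpr double sine(double x, int terms, int n = 0) {
        return n == terms ? 0.0 : sine_term(x, n) + sine(x, terms, n + 1);
    }

    // Hand-rolled index sequence (std::index_sequence is C++14).
    template <std::size_t... Is> struct Indices {};
    template <std::size_t N, std::size_t... Is>
    struct MakeIndices : MakeIndices<N - 1, N - 1, Is...> {};
    template <std::size_t... Is>
    struct MakeIndices<0, Is...> { typedef Indices<Is...> type; };

    // Build the whole table in one constexpr pack expansion.
    template <std::size_t N, int Terms, std::size_t... Is>
    constexpr std::array<double, N> make_table(Indices<Is...>) {
        return {{ sine(2.0 * kPi * static_cast<double>(Is) / N, Terms)... }};
    }

    // MakeIndices instantiates templates N levels deep, so keep N below the
    // recursion limit of 63 from the question (or switch to a log-depth
    // index generator for larger tables).
    constexpr std::array<double, 48> kSineTable =
        make_table<48, 10>(MakeIndices<48>::type());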

You don't want to do anything of the sort. Here are some solutions, of varying degrees of kludginess:
Ideal solution: invoke the code analysis from the Makefile.
Replace the compiler with, e.g., a Python script that forwards the command line to the real compiler and then triggers the analysis tool (see the wrapper sketch below).
Monitor make instead of the compiler - it tends to live longer.
Have a tiny wrapper script maintain a reference count in shared memory, and when the reference count is initially incremented, the wrapper should go to sleep for "long enough" after the compiler has finished. Monitor that script.
In a nutshell: the monitoring tool shouldn't be monitoring anything. The code analysis should be invoked from the build tool, i.e. specified in the Makefile. If generating the Makefile by hand is too cumbersome, use cmake with ninja, or xmake with no dependencies. You can also generate whatever "project" file the IDE needs to make working on the project easier. But make something other than the Keil-specific project files the source of truth for the project: it'll make everything easier from then on.
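Here is a minimal sketch of the wrapper idea, written in C++ purely for illustration (a Python or shell script works just as well). The tool paths are placeholders, not real products, and the argument quoting is deliberately naive:

    #include <cstdlib>
    #include <string>

    // Hypothetical locations - point these at the real compiler and the
    // analysis tool in your environment.
    static const char* kRealCompiler = "/opt/toolchain/bin/armcc";
    static const char* kAnalysisTool = "/opt/analysis/bin/analyze";

    // Rebuild the command line for a given program (naive quoting, fine for a sketch).
    static std::string join_args(const char* program, int argc, char** argv) {
        std::string cmd = program;
        for (int i = 1; i < argc; ++i) {
            cmd += " \"";
            cmd += argv[i];
            cmd += "\"";
        }
        return cmd;
    }

    int main(int argc, char** argv) {
        // Run the real compiler with the original arguments.
        int rc = std::system(join_args(kRealCompiler, argc, argv).c_str());

        // Then hand the same command line to the analysis tool; its exit
        // status is ignored so a failed analysis doesn't break the build.
        std::system(join_args(kAnalysisTool, argc, argv).c_str());

        // Report the compiler's status back to the build system (std::system
        // returns an implementation-defined value; on POSIX you would unpack
        // it with WEXITSTATUS).
        return rc;
    }

Point the build at this wrapper instead of the real compiler and the monitoring problem disappears: the analysis runs every time the compiler does.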

Related

Is there a continuous compilation plugin for Visual Studio Code like Godbolt's Compiler Explorer?

I wanted something that would compile automatically after a given number of (milli)seconds and display the results in the "Problems" tab.
The current behavior is to statically analyze every file on its own, which typically misses a lot of context (macros primarily) and produces a near-endless stream of false positives that makes it very hard to use, at least in my particular use cases.
I found Matt Ever's Compiler Explorer plugin, but it suffers from the original problem: it analyzes each source file without context.
I do not really need to see the generated assembly side-by-side, although having that would be fantastic as I could stop using objdump for that purpose on a terminal.
The main requirement here is really not having to take my hands off the keyboard to issue a build command (and the mouse moves that go with it). I need context-aware squiggles, tightly integrated with the build system I am currently using.
I am aiming primarily at C++ with CMake, so a good solution would take advantage of the output this build system generates. I would not mind if the solution were specific to a single CMake generator such as Make or Ninja.
Suggestions and even DIY solutions are very very welcome. Thank you!

gcc compiler optimizations affected code

Unfortunately I am not working with open code right now, so please consider this a question of pure theoretical nature.
The C++ project I am working with seems to be crippled by the following options. At least GCC 4.3 - 4.8 cause the same problems; I didn't notice any trouble with the 3.x series (these options might not have existed there, or might have worked differently). The affected platforms are Linux x86 and Linux ARM. The options themselves are automatically enabled at the -O1 or -O2 level, so I first had to find out which options were causing it:
tree-dominator-opts
tree-dse
tree-fre
tree-pre
gcse
cse-follow-jumps
It's not my own code, but I have to maintain it, so how could I possibly find the sources of the trouble these options are causing? Once I disable the optimizations above with their "-fno" counterparts, the code works.
On a side note, the project does work flawlessly with Visual Studio 2008, 2010 and 2013 without any noticeable problems or specific compiler options. Granted, the code is not 100% cross-platform, so some parts are Windows/Linux specific, but even then I'd like to know what's happening here.
It's not a vital question, since I can make the code run flawlessly, but I am still interested in how to track down such problems.
So, to make it short: how do I identify and find the affected code?
I doubt it's a giant GCC bug, and maybe there is not even a real fix for the code I am working with, but it's of real interest to me.
I take it that most of these options are eliminations of some kind, and I have read the explanations for them, but I still have no idea where I would start.
First of all: try using a debugger. If the program crashes, check the backtrace for places to look for the faulty function. If the program misbehaves (wrong outputs), you should be able to tell where it occurs by carefully placing breakpoints.
If that doesn't help and the project is small, you could try compiling a subset of your project with the "-fno" options that stop your program from misbehaving. You could brute-force your way to finding the smallest subset of faulty .cpp files and work your way from there. Note: a search strategy with good complexity (for example, bisecting the set of files) could save you a lot of time.
If, by any chance, there is a single faulty .cpp file, then you could further factor its contents into several .cpp files to see which functions are the cause of misbehavior.
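Once you are down to a single file, a hedged further step - not part of the original answer - is GCC's per-function optimization controls, which let you switch the suspect passes off for individual functions instead of whole files (the function names here are made up):

    // Strings without a leading '-' passed to the optimize attribute are
    // treated as -f options, so this compiles suspect_function() with
    // -fno-tree-pre and -fno-gcse while the rest of the file keeps -O2.
    __attribute__((optimize("no-tree-pre", "no-gcse")))
    void suspect_function()
    {
        /* ... code under test ... */
    }

    // The same thing for a whole region of a file:
    #pragma GCC push_options
    #pragma GCC optimize("no-tree-dominator-opts")
    void another_suspect()
    {
        /* ... */
    }
    #pragma GCC pop_options

GCC documents these attributes and pragmas as debugging aids rather than production switches, which matches this use case.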

Cache header files for make/g++

I am working on a large C++ code base and I want to speed up its compilation times. One thing I know is that all of my library includes are on a network drive, which slows things down a lot. If I can get make or something else to automatically cache them, either in /tmp or on a ramdisk, I expect that to improve the compile times quite a bit. Is there a way to do that? Of course I can copy them manually and set up a sync job, but then every developer would have to do that on every box they ever compile on, so I am looking for an automated solution.
Thanks!
Of course. There are lots and lots of ways. You can just have a make rule that copies things, then have people run "make copylocal" or whatever. This is fast, but people have to remember to do it, and if the libraries change a lot this could be annoying. You can make a rule that copies things and put it as a prerequisite of some other target so the copy is done first on every build. This will take longer: will the copy step plus using local copies end up taking more total time than just using the remote copies? Who knows?
You could also use a tool like ccache to locally cache the results of the compilation, rather than copying the library headers. This will give you a lot more savings for a typical small incremental build and it's easily integrated into the build system, but it will slightly slow down "rebuild the world" steps to cache the objects.
Etc. Your question is too open-ended to be easily answerable.
Avoid using network file systems for code. Use a version control system like git.
You might also consider using ccache.

Trace the compiler to see how much time it spent on certain files

Compiling my project takes ages, and I would like to improve its compile time. The first thing I am trying to do is to break the compile time down by individual files.
So that the compiler tells me for example:
boost/variant.hpp: took 100ms in total
myproject/foo.hpp: took 25ms in total
myproject/bar.cpp: took 125ms in total
I could then specifically try to improve the compile time of the files taking up the most time, by introducing forward declarations and/or reordering things so I can omit include files.
Is there something for this task? I am using GCC and ICC (Intel C++).
I use SCons as my build system.
The important metric is not how long it takes to process (whatever that means) a header file, but how often the header file changes and forces the build system to reinvoke the compiler on all dependent units.
The time the compiler spends parsing useless code is really small compared to all the other steps of the compilation process. Even if you include entire unneeded files, they're likely hot in disk cache. And precompiled headers make this even better.
The goal is to avoid recompiling units due to unrelated changes in header files. That's where techniques such as pimpl and other compile firewalls come in.
And link-time-code-generation aka whole-program-optimization makes matters worse, by undoing compile-time firewalls and reprocessing the entire program anyway.
Anyway, information on how unstable a header file is should be attainable from build logs, commit logs, even last modified date in the filesystem.
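As an aside, a minimal sketch of the pimpl / compile-firewall technique mentioned above - the class and file names are invented for illustration:

    // widget.h - clients see only an opaque pointer, so edits to the
    // implementation details never force them to recompile.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();               // defined in the .cpp, where Impl is complete
        void doSomething();
    private:
        struct Impl;             // forward declaration only
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp - the only translation unit that needs the heavy headers.
    #include <vector>            // heavy dependencies stay out of widget.h

    struct Widget::Impl {
        std::vector<int> data;   // implementation details live here
    };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;
    void Widget::doSomething() { impl_->data.push_back(42); }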
You have an unusual, quirky definition of the time spent processing header files that doesn't match what anyone else uses. So you can probably make this happen, but you'll have to do it yourself. Probably the best way is to run gcc under strace -tt. You can then see when it opens, reads, and closes each file, allowing you to tell how long it processes them.
Have you tried instrumenting the build as a whole yet? Like any performance problem, it's likely that what you think is the problem is not actually the problem. Electric Make is a GNU-make-compatible implementation of make that can produce an XML-annotated build log, which can in turn be used for analysis of build performance issues with ElectricInsight. For example, the "Job Time by Type" report in ElectricInsight can tell you broadly what activities consume the most time in your build, and specifically which jobs in the build are longest. That will help you to direct your efforts to the places where they will be most fruitful.
Disclaimer: I am the chief architect of Electric Make and ElectricInsight.

Can massive nested loops cause the linker to run endlessly when compiling in Release-Mode?

I'm compiling a very small Win32 command-line application in VS2010 Release-Mode, with all speed optimizations turned on (not memory optimizations).
This application is designed to serve a single purpose - to perform a single pre-defined complex mathematical operation to find a complex solution to a specific problem. The algorithm is completely functional (confirmed) and it compiles and runs fine in Debug-Mode. However, when I compile in Release-Mode (the algorithm is large enough to make use of the optimizations), Link.exe appears to run endlessly, and the code never finishes linking. It sits at 100% CPU usage, with no changes in memory usage (43,232 K).
My application contains only two classes, both of which are pretty short code files. However, the algorithm comprises 20 or so nested loops with inline function calls from within each layer. Is the linker attempting to run through every possible path through these loops? And if so, why is the Debug-Mode linker not having any problems?
This is a tiny command-line application (2KB exe file), and compiling shouldn't take more than a couple minutes. I've waited 30 minutes so far, with no change. I'm thinking about letting it link overnight, but if it really is trying to run through all possible code paths in the algorithm it could end up linking for decades without a SuperComputer handy.
What do I need to do to get the linker out of this endless cycle? Is it possible for such code to create an infinite link-loop without getting a compiler-error prior to the link cycle?
EDIT:
Jerry Coffin pointed out that I should kill the linker and attempt again. I forgot to mention this in the original post, but I have aborted the build, closed and re-opened VS, and attempted to build multiple times. The issue is consistent, but I haven't changed any linker options as of yet.
EDIT2:
I also neglected to mention the fact that I deleted the "Debug" and "Release" folders and re-built from scratch. Same results.
EDIT3:
I just confirmed that turning off function inlining causes the linker to function normally. The problem is I need function inlining, as this is a very performance-sensitive operation with a minimal memory footprint. This leads me to ask, why would inlining cause such a problem to occur?
EDIT4:
The output that displays during the unending link cycle:
Link:
Generating code
EDIT5:
I confirmed that placing all the code into a single CPP file did not resolve the issue.
Nested loops only affect the linker in terms of Link Time Code Generation. There are tons of options that determine how this works in detail.
For a start, I suggest disabling LTCG altogether to see if there's some other unusual problem.
If it links fine in Release with LTCG disabled you can experiment with inlining limits, intrinsics and optimization level.
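For reference, a small sketch of the kind of inlining knobs meant here, using MSVC pragmas and keywords (the function names and values are illustrative, not taken from the question):

    // Cap how deep the compiler may inline through nested calls, and stop it
    // from inlining recursive calls at all.
    #pragma inline_depth(8)
    #pragma inline_recursion(off)

    // Keep the innermost hot helper inlined...
    __forceinline int hot_kernel(int x) { return x * x + 1; }

    // ...but exempt a big outer routine, so LTCG has far less cross-module
    // expansion to chew through.
    __declspec(noinline) int outer_stage(int x) { return hot_kernel(x) + 3; }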
Have you done a rebuild? If one of the object files got corrupted, or a .ilk file (though a release build probably isn't using one - I can't be sure about your project settings), then the cleaning pass that a rebuild performs should cure it.
Closing and re-opening Visual Studio is only helpful in cases where the debugger (or one of the graphical designers) is keeping an open handle to the build product.
Which leads to another suggestion -- check Task Manager and make sure that you don't have a copy of your application still running.
If the linker seems to be the bottleneck and both classes are pretty short, why not put all the code in a single file? Also, there is a multi-processor compilation option in Visual Studio under the C/C++ General tab of the project settings dialog box. Both of these might help.
Some selected answers:
Debug builds don't inline. Not at all. This makes it possible to put breakpoints in all functions.
It doesn't really matter whether the linker truly runs an infinite loop, or whether it is evaluating 2^20 different combinations of inline/don't inline. The latter still could take 2^20 seconds. (We know the linker is inlining from the "Generating code" message).
Why are you using LTCG anyway? For such a small project, it's entirely feasible to put everything in one .cpp file. Currently (with LTCG), the compiler is just parsing C++ and not generating any code. That also explains why the compiler doesn't have any problem: there's only a single loop in the compiler, over all lines of source code.
Here in my company we discovered something similar.
As far as I know it is not an endless loop, just a very long operation.
This means you have two options:
turn off the optimizations
wait until the linker has finished
You have to decide if you really need these optimizations. Here the program is just a little helper, where it does not matter whether it runs for 23.4 seconds or 26.8. The gain in program speed compared to the increase in build time makes the optimizations nearly useless, at least in our particular scenario.
I encountered exactly this problem trying to build OSMesa in x64 Release mode. My contribution to this discussion is that after letting the linker run overnight, it did complete properly.