What are the pros + cons of Link-Time Code Generation? (VS 2005) - c++

I've heard that enabling Link-Time Code Generation (the /LTCG switch) can be a major optimization for large projects with lots of libraries to link together. My team is using it in the Release configuration of our solution, but the long compile-time is a real drag. One change to one file that no other file depends on triggers another 45 seconds of "Generating code...". Release is certainly much faster than Debug, but we might achieve the same speed-up by disabling LTCG and just leaving /O2 on.
Is it worth it to leave /LTCG enabled?

It is hard to say, because that depends mostly on your project - and of course the quality of the LTCG provided by VS2005 (which I don't have enough experience with to judge). In the end, you'll have to measure.
However, I wonder why the extra duration of the release build is such a problem for you. You should only hand out reproducible, stable, versioned binaries that have reproducible or archived sources. I've rarely seen a reason for frequent, incremental release builds.
The recommended setup for a team is this:
Developers typically create only incremental debug builds on their machines. Building a release should be a complete build from source control to redistributable (binaries or even setup), with a new version number and labeling/archiving the sources. Only these should be given to in-house testers / clients.
Ideally, you would move the complete build to a separate machine, or maybe a virtual machine on a good PC. This gives you a stable environment for your builds (includes, 3rd party libraries, environment variables, etc.).
Ideally, these builds should be automated ("one click from source control to setup"), and should run daily.

It allows the linker to do the actual compilation of the code, and therefore it can do more optimization such as inlining.
If you don't use LTCG, the compiler is the only component in the build process that can inline a function, that is, replace a "call" to a function with the contents of the function itself, which is usually a lot faster. The compiler only does this for functions where it yields an improvement.
It can therefore only inline functions whose bodies it can see. This means that if a function in a cpp file calls another function which is not implemented in the same cpp file (or in a header file that it includes), the compiler doesn't have the actual body of that function and therefore cannot inline it.
But if you use LTCG, it's the linker that does the inlining, and it has all the functions in all of the cpp files of the entire project, minus referenced lib files that were not built with LTCG. This gives the linker (which effectively becomes the compiler) a lot more to work with.
But it also makes your build take longer, especially when doing incremental changes. For that reason, you might want to enable LTCG only in your release build configuration.
Note that LTCG is not the same as profile-guided optimization.

I know the guys at Bungie used it for Halo3, the only con they mentioned was that it sometimes messed up their deterministic replay data.
Have you profiled your code and determined the need for this? We actually run our servers almost entirely in debug mode, but special-case a few files that profiled as performance critical. That's worked great, and has kept things debuggable when there are problems.
Not sure what kind of app you're making, but breaking up data structures to correspond to the way they were processed in code (for better cache coherency) was a much bigger win for us.

I've found the downsides are longer compile times and that the .obj files produced in that mode (LTCG turned on) can be really massive. For example, .obj files that might otherwise be 200-500 KB were about 2-3 MB. It just happened to me that compiling a bunch of projects in my chain led to a 2 GB folder, the bulk of which was .obj files.

I also don't see problems with extra compilation time using link-time code generation with the release build. I only build my release version once per day (overnight), and use the unit-test and debug builds during the day.

Related

What steps can be taken to reduce link time? [duplicate]

We have fairly large C++ application which is composed of about 60 projects in Visual Studio 2005. It currently takes 7 minutes to link in Release mode and I would like to try to reduce the time. Are there any tips for improving the link time?
Most of the projects compile to static libraries; this makes testing easier since each one also has a set of associated unit tests. It seems the use of static libraries prevents VS2005 from using incremental linking, so even with incremental linking turned on it does a full link every time.
Would using DLLs for the sub projects make any difference? I don't really want to go through all the headers and add macros to export the symbols (even using a script) but if it would do something to reduce the 7 minute link time I will certainly consider it.
For some reason using nmake from the command line is slightly faster and linking the same application on Linux (with GCC) is much faster.
Visual Studio IDE: 7 minutes
Visual C++ using nmake from the command line: 5 minutes
GCC on Linux: 34 seconds
If you're using the /GL flag to enable Whole Program Optimization (WPO) or the /LTCG flag to enable Link Time Code Generation, turning them off will improve link times significantly, at the expense of some optimizations.
Also, if you're using the /Z7 flag to put debug symbols in the .obj files, your static libraries are probably huge. Using /Zi to create separate .pdb files might help if it prevents the linker from reading all of the debug symbols from disk. I'm not sure if it actually does help because I have not benchmarked it.
See my suggestion made at Microsoft : https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=511300
You should vote for it! Here is my last comment on it:
Yes, we are using incremental linking to build most of our projects. For the biggest projects, it's useless. In fact, it takes more time to link those projects with incremental linking (2min50 compared to 2min44). We observed that it doesn't work when the ILK files get big (our biggest project generates an ilk of 262144 KB on Win32).
Below, I list other things we tried to reduce link time:
Explicit template instantiation to reduce code bloat. Small gain.
IncrediLink (IncrediBuild gives an interesting gain for compilation but almost no gain for linking).
Remove debug information for libraries that are rarely debugged (good gain).
Delete the PDB file in a « Pre-Build Event » (strangely, this gives an interesting gain, e.g. 2min44 instead of 3min34).
Convert many static libraries to DLLs. Important gain.
Work on a computer equipped with lots of RAM in order to maximize the disk cache. The biggest gain.
Big obj versus small obj. No difference.
Change project options (/Ob1, /INCREMENTAL, Enable COMDAT folding, Embedding manifest, etc.). Some give an interesting gain, others not. We try to continuously tune our settings.
Maximize internal linkage vs external linkage. It's good programming practice.
Separate software components as much as we can afford. You can then work with unit tests that link fast. But we still have to integrate things together, we have legacy code, and we work with third-party components.
Use secret linker switch /expectedoutputsize:120000000. Small gain.
Note that for all our experimentation, we meticulously measured link times. Slow link times seriously cost productivity. When you implement a complex algorithm or track down a difficult bug, you want to iterate rapidly through this sequence: modify some code, link, trace, debug, modify some code, link, etc...
Another reason to optimize link time is the impact it has on our continuous integration cycle. We have many applications that share common code, and we run continuous integration on them. The link time of all our applications takes half the cycle time (15 minutes)...
In the thread https://blogs.msdn.microsoft.com/vcblog/2009/09/10/linker-throughput/, some interesting suggestions were made to improve link times. On a 64-bit computer, why not offer an option to work with files completely in RAM?
Again, any suggestion that may help us reduce link time is welcome.
Generally, using DLLs instead of static libraries will improve linking times quite a bit.
Take a look at Incredibuild by Xoreax. Its distributed compilation dramatically reduced our full build/link times from around 40 minutes to 8 minutes.
Additionally, this product has a feature they call Incredilink which should help you get incremental links working even with statically linked libraries.
I don't think converting to DLLs would be useful.
You could try looking for options to do with optimisation, and turning them off. The linker might be spending a long time looking over the libs for redundant code it can eliminate. Your app may end up bigger or slower, but that may not be a problem to you.
Several people have reported (and I myself have noticed) that modifying a file in a statically linked library will disable incremental linking for the entire solution; this appears to be what you are seeing. See comments here and here for some information about that.
One workaround is to use the Fast Solution Build Add-In. This might involve making a few changes to your workspace, but the payoff is definitely worth it. For a commercial solution, use Xoreax's Incredibuild, which basically incorporates this same technology but adds other features as well. I apologize if I sound like a salesman for Incredibuild - I'm just a very satisfied customer.
I've had similar troubles linking large apps with Visual C++ before. In my case, I simply didn't have enough free RAM, and excessive paging to disk was slowing the linking process to a halt. Doubling my RAM from 1GB to 2GB made a dramatic improvement. How much RAM is your dev box running?
I just found out that we had by accident defined a large table of strings in a header file which got included in pretty much every (static) lib. (I am talking about a huge C++ project.) When the linker created the EXE, it looks like the unification of the table (there is only a single one ending up in the EXE) or the parsing of the libs took forever. Putting the table in a separate C++ file took a couple of minutes off the link on a relatively slow machine.
Unfortunately, I don't know how to find stuff like that other than by chance.
For debug builds, one can use incremental linking, which can improve link times a lot.
Sadly enough there are certain pitfalls, and VS2005 will not warn you.
If using static libraries, incremental linking will not work if you modify a file that is part of the static library. The solution is to set the linker option "Use Library Dependency Inputs" to "Yes" (this is the same as Fast Solution Build in VS2003).
If using #pragma comment(lib, ...) to include the lib of a DLL, and you specify a relative path instead of the lib name alone, incremental linking will stop working. The solution is to specify the lib name alone, and use the linker option /LIBPATH to add additional lib paths.
Sometimes the .ilk file will become corrupted (grow beyond 200 MB), and then suddenly the incremental linker will take more than 10 times the normal time. Sometimes it will complain about the .ilk file being corrupt, but usually only after several minutes. The solution for me was to set up the following command as the "Build Event" -> "Pre-Link Event" (note: the size comparison must be unquoted so cmd compares numerically):
for %%f in ($(IntDir)*.ilk) do ( if %%~zf GTR 200000000 del "%%f" )
60 libs to link does sound like a fair few. This may be a bit of an extreme measure, but it might radically speed things up. Create a new solution, with a few projects, and add all the source from your existing projects to these. Then build and link them instead, and just keep the small ones for testing.
Get a quicker computer with multiple processors and enable parallel builds (this might be on by default). To allow the greatest amount of parallelism, make sure your project dependencies are correct and you haven't got unnecessary dependencies.
If you are truly talking about link times, then things like fast solution build and Xoreax won't really help much (except for Incredilink, which might). Assuming that you are truly measuring link start to link end, then I would suggest that the number of libs that you have is the issue.
The link phase is, at least initially, IO bound in loading up all of the object and lib files. You might be in a situation where you have 60 libraries along with the main project of some large number of .obj files. I suspect that you simply might be seeing, at least in part, typical windows slowness in loading up all of those libs and .obj files.
You can easily test this. Take all of those lib files and build one single lib file just as a test. Instead of linking with 60 of them, link with one and see where your time goes. That would be interesting.
NTFS is notoriously slow. It shouldn't be 7 minutes vs. 34 seconds on Linux slow, but it might be part of the issue. Using DLLs will help, but you will suffer in application startup time, although that will not be nearly as bad. I would be confident that you won't have 7-minute application startup times.
you can try looking at this: http://msdn.microsoft.com/en-us/library/9h3z1a69.aspx
Basically, you can run project builds in parallel if you have several cores.
I solved my link problem and will share it with all of you.
My project's link time was 7 min with /INCREMENTAL:NO (link time 7 min).
It was 15 min with /INCREMENTAL (link time 7 min, embedding the manifest another 7 min), so I had turned incremental linking off.
Then I found that a.lib appeared both in Additional Dependencies AND in Ignore Specific Libraries!
So I removed it from Ignore Specific Libraries and turned /INCREMENTAL back on. The first link still took 5 min, but the embedded-manifest step took no time at all.
I don't know why, but incremental linking now works.
I rolled back all my project code, so I could confirm the problem was caused by that lib.
If you have done all of the above, you can try my method. Good luck!
Step 1 in C++ build time reduction is more memory. After switching from 4GB to 12GB, I saw my link-all-projects time fall off a cliff: from 5:50 to 1:15.

Safe Unity Builds

I work on a large C++ project which makes use of Unity builds. For those unfamiliar with the practice, Unity builds #include multiple related C++ implementation files into one large translation unit which is then compiled as one. This saves recompiling headers, reduces link times, improves executable size/performance by bringing more functions into internal linkage, etc. Generally good stuff.
However, I just caught a latent bug in one of our builds. An implementation file used a library without including the associated header file, yet it compiled and ran. After scratching my head for a bit, I realized that it was being included by an implementation file included before this one in our unity build. No harm done here, but it could have been a perplexing surprise if someone had tried to reuse that file later independently.
Is there any way to catch these silent dependencies and still keep the benefits of Unity builds besides periodically building the non-Unity version?
I've used UB approaches before for frozen projects in our source repository that we never planned to maintain again. Unfortunately, I'm pretty sure the answer to your question is no. You have to periodically build all the cpp files separately if you want to test for those kinds of errors.
Probably the closest thing you can get to an automagic solution is a buildbot which automatically gathers all the cpp files in a project (with the exception of your UB files) and builds the regular way periodically from the source repository, pointing out any build errors in the meantime. This way your local development is still fast (using UBs), but you can still catch any errors you miss from using a unity build from these periodic buildbot builds which build all cpps separately.
I suggest not using Unity Builds for your local development environment. A Unity Build won't improve compile times when you edit and recompile anyway. Use Unity Builds only for your non-incremental continuous build system, whose output you are not expected to ship.
As long as everyone commits changes only after they have compiled locally, the situation you described should not appear.
Also, a Unity Build can cause unexpected overload resolution between locally defined functions that happen to share a name. That's dangerous, and you may not be aware of it, i.e. no compile error or warning may be generated in such a case. Unless you have a way to prevent that, please do not rely on binaries produced by a Unity Build.

VS 2008 C++ build output?

Why when I watch the build output from a VC++ project in VS do I see:
1>Compiling...
1>a.cpp
1>b.cpp
1>c.cpp
1>d.cpp
1>e.cpp
[etc...]
1>Generating code...
1>x.cpp
1>y.cpp
[etc...]
The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).
EDIT:
The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.
EDIT:
I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools -> Options -> Projects and Solutions -> Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed if I cancel the build before the "Generating code..." step, none of the ".obj" files will exist for the most recent set of "compiled" files. This implies that the compiler truly is handling multiple translation units together. Why is this?
Compiler architecture
The compiler is not generating code from the source directly, it first compiles it into an intermediate form (see compiler front-end) and then generates the code from the intermediate form, including any optimizations (see compiler back-end).
Visual Studio compiler process spawning
In a Visual Studio build, a compiler process (cl.exe) is executed to compile multiple source files sharing the same command-line options in one command. The compiler first performs "Compiling..." sequentially for each file (this is most likely the front-end), but "Generating code..." (probably the back-end) is done together for all files once compilation is finished with them.
You can confirm this by watching cl.exe with Process Explorer.
Why code generation for multiple files at once
My guess is that code generation is done for multiple files at once to make the build process faster, as it includes some work which can be done only once for multiple sources, like instantiating templates - there is no point instantiating them multiple times, as all instances but one would be discarded anyway.
Whole program optimization
In theory it would be possible to perform some cross-compilation-unit optimization as well at this point, but it is not done - no such optimizations are ever done unless enabled with /LTCG, and with LTCG the whole Code generation is done for the whole program at once (hence the Whole Program Optimization name).
Note: it seems as if WPO is done by the linker, as it produces the exe from obj files, but this is a kind of illusion - the obj files are not real object files, they contain the intermediate representation, and the "linker" is not a real linker, as it is not only linking the existing code, it is generating and optimizing code as well.
It is neither parallelization nor code optimization.
The long "Generating Code..." phase for multiple source files goes back to VC6. It occurs independently of optimization settings or available CPUs, even in debug builds with optimizations disabled.
I haven't analyzed it in detail, but my observations are: it occurs when switching between units with different compile options, or when a certain amount of code has passed the "file-by-file" part. It's also the stage where most compiler crashes occurred in VC6.
Speculation: I've always assumed that it's the "hard part" that is improved by processing multiple items at once, maybe just the code and data loaded in cache. Another possibility is that the single step phase eats memory like crazy and "Generating code" releases that.
To improve build performance:
Buy the best machine you can afford
It is the fastest, cheapest improvement you can make. (unless you already have one).
Move to Windows 7 x64, buy loads of RAM, and an i7 860 or similar. (Moving from a Core2 dual core gave me a factor of 6..8, building on all CPUs.)
(Don't go cheap on the disks, either.)
Split into separate projects for parallel builds
This is where 8 CPUs (even if 4 physical + HT) with loads of RAM come into play. You can enable per-project parallelization with the /MP option, but this is incompatible with many other features.
At one time, compilation meant parsing the source and generating code. Now, though, compilation means parsing the source and building up a symbolic database representing the code. The database can then be transformed to resolve references between symbols. Later on, the database is used as the source to generate code.
You haven't got optimizations switched on. That will stop the build process from optimizing the generated code (or at least hint that optimizations shouldn't be done... I wouldn't like to guarantee no optimizations are performed). However, the build process is still optimized. So, multiple .cpp files are being batched together to do this.
I'm not sure how the decision is made as to how many .cpp files get batched together. Maybe the compiler starts processing files until it decides that the memory size of the database has grown large enough that, if it grew any more, the system would have to start paging data in and out to disk excessively, and the performance gains of batching any more .cpp files would be negated.
Anyway, I don't work for the VC compiler team, so can't answer conclusively, but I always assumed it was doing it for this reason.
There's a new write-up on the Visual C++ Blog that details some undocumented switches that can be used to time/profile various stages of the build process (I'm not sure how much, if any, of the write-up applies to versions of MSVC prior to VS2010). Interesting stuff which should provide at least a little insight into what's going on behind the scenes:
http://blogs.msdn.com/vcblog/archive/2010/04/01/vc-tip-get-detailed-build-throughput-diagnostics-using-msbuild-compiler-and-linker.aspx
If nothing else, it lets you know what processes, dlls, and at least some of the phases of translation/processing correspond to which messages you see in normal build output.
It parallelizes the build (or at least the compile) if you have a multicore CPU.
edit: I am pretty sure it parallelizes in the same way as "make -j"; it compiles multiple cpp files at the same time (since cpp files are generally independent) - but obviously links them once.
On my core-2 machine it is showing 2 devenv jobs while compiling a single project.

Partial builds versus full builds in Visual C++

For most of my development work with Visual C++, I am using partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this action, but wonder whether it is necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software function any differently from a testing perspective?
For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers.
The partial build system works by checking file dates of source files against the build results. So it can break if you e.g. restore an earlier file from source control. The earlier file would have a modified date earlier than the build product, so the product wouldn't be rebuilt. To protect against these errors, you should do a complete build if it is a final build. While you are developing though, incremental builds are of course much more efficient.
Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system.
The basic problem is that compilation is dependent on the environment (command-line flags, libraries available, and probably some Black Magic), and so two compilations will only have the same result if they are performed in the same conditions. For testing and deployment, you want to make sure that the environments are as controlled as possible and you aren't getting wacky behaviours due to odd code. A good example is if you update a system library, then recompile half the files - half are still trying to use the old code, half are not. In a perfect world, this would either error out right away or not cause any problems, but sadly, sometimes neither of those happen. As a result, doing a complete recompilation avoids a lot of problems associated with a staggered build process.
Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.
This by itself seems to me to be good enough reason to do a full rebuild before a release.
Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think.
I would definitely recommend it. I have seen, on a number of occasions with a large Visual C++ solution, the dependency checker fail to pick up some dependency on changed code. When this change is to a header file that affects the size of an object, very strange things can start to happen.
I am sure the dependency checker has got better in VS 2008, but I still wouldn't trust it for a release build.
The biggest reason not to ship an incrementally linked binary is that some optimizations are disabled. The linker will leave padding between functions (to make it easier to replace them on the next incremental link). This adds some bloat to the binary. There may be extra jumps as well, which changes the memory access pattern and can cause extra paging and/or cache misses. Older versions of functions may continue to reside in the executable even though they are never called. This also leads to binary bloat and slower performance. And you certainly can't use link-time code generation with incremental linking, so you miss out on more optimizations.
If you're giving a debug build to a tester, then it probably isn't a big deal. But your release candidates should be built from scratch in release mode, preferably on a dedicated build machine with a controlled environment.
Visual Studio has some problems with partial (incremental) builds (I mostly encountered linking errors), so from time to time it is very useful to do a full rebuild.
In case of long compilation times, there are two solutions:
Use a parallel compilation tool and take advantage of your (assumed) multi core hardware.
Use a build machine. What I use most is a separate build machine, with a CruiseControl set up, that performs full rebuilds from time to time. The "official" release that I provide to the testing team, and, eventually, to the customer, is always taken from the build machine, not from the developer's environment.

How to improve link performance for a large C++ application in VS2005

We have fairly large C++ application which is composed of about 60 projects in Visual Studio 2005. It currently takes 7 minutes to link in Release mode and I would like to try to reduce the time. Are there any tips for improving the link time?
Most of the projects compile to static libraries, this makes testing easier since each one also has a set of associated unit tests. It seems the use of static libraries prevents VS2005 from using incremental linking, so even with incremental linking turned on it does a full link every time.
Would using DLLs for the sub projects make any difference? I don't really want to go through all the headers and add macros to export the symbols (even using a script) but if it would do something to reduce the 7 minute link time I will certainly consider it.
For some reason using nmake from the command line is slightly faster and linking the same application on Linux (with GCC) is much faster.
Visual Studio IDE 7 minutes
Visual C++ using nmake from the command line - 5 minutes
GCC on Linux 34 seconds
If you're using the /GL flag to enable Whole Program Optimization (WPO) or the /LTCG flag to enable Link Time Code Generation, turning them off will improve link times significantly, at the expense of some optimizations.
Also, if you're using the /Z7 flag to put debug symbols in the .obj files, your static libraries are probably huge. Using /Zi to create separate .pdb files might help if it prevents the linker from reading all of the debug symbols from disk. I'm not sure if it actually does help because I have not benchmarked it.
See my suggestion made at Microsoft : https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=511300
You should vote for it ! Here is my last comment on it :
Yes we are using incremental linking to build most of our projects. For the biggest projects, it's useless. In fact, it takes more time to link those projects with incremental linking (2min50 compared to 2min44). We observed that it doesn't work when size of ILK files is big (our biggest project generate an ilk of 262144 KB in win 32).
Bellow, I list others things we tried to reduce link time:
Explicit template instantiation to reduce code bloat. Small gain.
IncrediLink (IncrediBuild give interesting gain for compilation but almost no gain for link).
Remove debug information for libraries who are rarely debugged (good gain).
Delete PDB file in « Pre-Build Event » (strangely it give interesting gain ex: 2min44 instead of 3min34).
Convert many statics library to DLL. Important gain.
Working with computer equiped with lot of RAM in order to maximize disk cache. The biggest gain.
Big obj versus small obj. No difference.
Change project options (/Ob1, /INCREMENTAL, Enable COMDAT folding, Embedding manifest, etc.). Some give interesting gain other not. We try to continuously maximize our settings.
Maximize Internal linkage vs External linkage. It's a good programming practice.
Separate software component as much as we can afford. You can than work in unit test that link fast. But we still have to interate things together, we have legacy code and we worked with third party component.
Use secret linker switch /expectedoutputsize:120000000. Small gain.
Note that for all our experimentation, we meticulously measured link time. Slow link times seriously cost productivity. When you implement a complex algorithm or track a difficult bug, you want to iterate this sequence rapidly: modify some code, link, trace in the debugger, modify some code, link, etc.
Another reason to optimize link time is its impact on our continuous-integration cycle. We have many applications that share common code, and we run continuous integration on them. Linking all our applications took half the cycle time (15 minutes)...
In the thread https://blogs.msdn.microsoft.com/vcblog/2009/09/10/linker-throughput/, some interesting suggestions were made to improve link time. On a 64-bit computer, why not offer an option to keep the files entirely in RAM?
Again, any suggestion that may help us reduce link time is welcome.
Generally, using DLLs instead of static libraries will improve linking times quite a bit.
Take a look at Incredibuild by Xoreax. Its distributed compilation dramatically reduced our full build/link times from around 40 minutes to 8 minutes.
Additionally, this product has a feature they call Incredilink which should help you get incremental links working even with statically linked libraries.
I don't think converting to DLLs would be useful.
You could try looking for options to do with optimisation, and turning them off. The linker might be spending a long time looking over the libs for redundant code it can eliminate. Your app may end up bigger or slower, but that may not be a problem to you.
Several people have reported (and I myself have noticed) that modifying a file in a statically linked library will disable incremental linking for the entire solution; this appears to be what you are seeing. See comments here and here for some information about that.
One workaround is to use the Fast Solution Build Add-In. This might involve making a few changes to your workspace, but the payoff is definitely worth it. For a commercial solution, use Xoreax's Incredibuild, which basically incorporates this same technology but adds other features as well. I apologize if I sound like a salesman for Incredibuild - I'm just a very satisfied customer.
I've had similar troubles linking large apps with Visual C++ before. In my case, I simply didn't have enough free RAM and excessive paging to disk was slowing the linking process to a halt. Doubling my RAM from 1GB to 2GB made a dramatic improvement. How much is your dev box running?
I just found out that we had accidentally defined a large table of strings in a header file which got included in pretty much every (static) lib. (I am talking about a huge C++ project.) When the linker created the EXE, it looked like unifying the table (only a single copy ends up in the EXE) or parsing the libs took forever. Moving the table into a separate C++ file shaved a couple of minutes off the link on a relatively slow machine.
Unfortunately, I don't know how to find stuff like that other than by chance.
For debug builds, one can use incremental linking, which can improve link times a lot.
Sadly, there are certain pitfalls, and VS2005 will not warn you about them.
If you use static libraries, incremental linking will stop working when you modify a file that is part of a static library. The solution is to set the linker option "Use Library Dependency Inputs" to "Yes" (this is the same as Fast Solution Build in VS2003).
If you use #pragma comment(lib, ...) to pull in a DLL's import library and specify a relative path instead of the lib name alone, incremental linking will stop working. The solution is to specify the lib alone and use the linker option /LIBPATH to add the additional library path.
Sometimes the .ilk file becomes corrupt (grows beyond 200 MB), and then suddenly the incremental linker will take more than 10 times the normal time. Sometimes it will complain about the .ilk file being corrupt, but usually only after several minutes. The solution for me was to set up the following command as the "Build Events" -> "Pre-Link Event":
for %%f in ($(IntDir)*.ilk) do ( if %%~zf GTR 200000000 (del %%f))
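(Note that the size operands are unquoted on purpose: cmd's `if` only compares numerically when both operands are bare numbers; quoting them forces a lexical string comparison, which gets sizes like 99999999 vs. 200000000 wrong.)

The #pragma comment(lib) pitfall mentioned above can be sketched like this ("Foo" is a made-up library name):

```cpp
// Incremental-linking-friendly: name the import library alone...
#pragma comment(lib, "Foo.lib")
// ...and put its directory on the linker's search path instead, e.g.
// via Project Properties -> Linker -> Additional Library Directories,
// or the command line: /LIBPATH:"C:\libs\Foo"

// What reportedly breaks incremental linking: embedding a relative
// path in the pragma itself.
// #pragma comment(lib, "..\\..\\libs\\Foo\\Foo.lib")
```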
60 libs to link does sound like a fair few. This may be a bit of an extreme measure, but it might radically speed things up. Create a new solution, with a few projects, and add all the source from your existing projects to these. Then build and link them instead, and just keep the small ones for testing.
Get a quicker computer with multiple processors and enable parallel builds (this might be on by default). To allow the greatest amount of parallelism, make sure your project dependencies are correct and you haven't got unnecessary dependencies.
If you are truly talking about link times, then things like fast solution build and Xoreax won't really help much (except for Incredilink, which might). Assuming that you are truly measuring link start to link end, then I would suggest that the number of libs that you have is the issue.
The link phase is, at least initially, IO-bound, loading up all of the object and lib files. You might be in a situation where you have 60 libraries along with a main project containing some large number of .obj files. I suspect that you might simply be seeing, at least in part, typical Windows slowness in loading all of those libs and .obj files.
You can easily test this. Take all of those lib files and build one single lib file just as a test. Instead of linking with 60 of them, link with one and see where your time goes. That would be interesting.
NTFS is notoriously slow. It shouldn't be "7 min vs. 32 seconds on Linux" slow, but it might be part of the issue. Using DLLs will help, but you will pay in application startup time, although that will not be nearly as bad. I would be confident that you won't see 7-minute application startup times.
You can try looking at this: http://msdn.microsoft.com/en-us/library/9h3z1a69.aspx
Basically, you can run project builds in parallel if you have several cores.
I solved my link problem and will share it with all of you.
My project's link time was 7 min with /INCREMENTAL:NO (link time 7 min).
It was 15 min with /INCREMENTAL (7 min linking, 7 min embedding the manifest), so I had turned incremental linking off.
Then I found that a.lib was listed under Additional Dependencies AND under Ignore Specific Libraries!
So I removed it from Ignore Specific Libraries and turned /INCREMENTAL back on. The first link still needed 5 min, but embedding the manifest took no extra time.
I don't know why, but incremental linking now works.
I rolled back all the project code so I could confirm that the problem was caused by that lib.
If you've done all of the above, you can try my method. Good luck!
Step 1 in C++ build time reduction is more memory. After switching from 4GB to 12GB, I saw my link-all-projects time fall off a cliff: from 5:50 to 1:15.