What steps can be taken to reduce link time? [duplicate] - c++

We have a fairly large C++ application which is composed of about 60 projects in Visual Studio 2005. It currently takes 7 minutes to link in Release mode and I would like to try to reduce that time. Are there any tips for improving the link time?
Most of the projects compile to static libraries; this makes testing easier since each one also has a set of associated unit tests. The use of static libraries seems to prevent VS2005 from using incremental linking, so even with incremental linking turned on it does a full link every time.
Would using DLLs for the sub-projects make any difference? I don't really want to go through all the headers and add macros to export the symbols (even using a script), but if it would do something to reduce the 7-minute link time I will certainly consider it.
For some reason using nmake from the command line is slightly faster, and linking the same application on Linux (with GCC) is much faster:
Visual Studio IDE: 7 minutes
Visual C++ using nmake from the command line: 5 minutes
GCC on Linux: 34 seconds

If you're using the /GL flag to enable Whole Program Optimization (WPO) or the /LTCG flag to enable Link Time Code Generation, turning them off will improve link times significantly, at the expense of some optimizations.
Also, if you're using the /Z7 flag to put debug symbols in the .obj files, your static libraries are probably huge. Using /Zi to create separate .pdb files might help if it prevents the linker from reading all of the debug symbols from disk. I'm not sure if it actually does help because I have not benchmarked it.

See my suggestion made to Microsoft: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=511300
You should vote for it! Here is my last comment on it:
Yes, we are using incremental linking to build most of our projects. For the biggest projects it's useless; in fact, it takes more time to link them with incremental linking (2 min 50 s compared to 2 min 44 s). We observed that it stops working when the ILK file gets large (our biggest project generates an .ilk of 262,144 KB on Win32).
Below, I list other things we tried to reduce link time:
Explicit template instantiation to reduce code bloat (see the sketch after this list). Small gain.
IncrediLink (IncrediBuild gives a noticeable gain for compilation but almost no gain for linking).
Removing debug information from libraries that are rarely debugged (good gain).
Deleting the PDB file in a Pre-Build Event (strangely, this gives a noticeable gain, e.g. 2 min 44 s instead of 3 min 34 s).
Converting many static libraries to DLLs. Significant gain.
Working on machines equipped with lots of RAM in order to maximize the disk cache. The biggest gain.
Big .obj files versus small .obj files. No difference.
Changing project options (/Ob1, /INCREMENTAL, enabling COMDAT folding, embedding the manifest, etc.). Some give a noticeable gain, others don't. We keep trying to tune our settings.
Maximizing internal linkage over external linkage. It's good programming practice anyway.
Separating software components as much as we can afford. You can then work inside unit tests that link fast, but we still have to integrate everything together, and we have legacy code and third-party components to deal with.
Using the undocumented linker switch /expectedoutputsize:120000000. Small gain.
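As a concrete illustration of the explicit template instantiation item above, here is a minimal sketch with made-up names: the template is instantiated in exactly one .cpp file, and extern template (standardized in C++11, but accepted earlier by several compilers as an extension) tells every other translation unit not to instantiate it again, so the linker has less duplicate code to fold.

    // sum_all.h -- hypothetical example
    #include <cstddef>
    #include <vector>

    template <typename T>
    T SumAll(const std::vector<T>& values)
    {
        T total = T();
        for (std::size_t i = 0; i < values.size(); ++i)
            total += values[i];
        return total;
    }

    // Tell other translation units NOT to instantiate SumAll<int>;
    // they will link against the single instantiation in sum_all.cpp.
    extern template int SumAll<int>(const std::vector<int>&);

    // sum_all.cpp -- the one place where SumAll<int> is actually instantiated
    #include "sum_all.h"
    template int SumAll<int>(const std::vector<int>&);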
Note that for all our experiments we meticulously measured link time. Slow link times seriously cost productivity: when you implement a complex algorithm or track down a difficult bug, you want to iterate rapidly through the sequence modify some code, link, trace/debug, modify some code, link, and so on.
Another reason to optimize link time is its impact on our continuous integration cycle. We have many applications that share common code and we run continuous integration on them; linking all of our applications took half the cycle time (15 minutes)...
In the thread https://blogs.msdn.microsoft.com/vcblog/2009/09/10/linker-throughput/, some interesting suggestions were made for improving link time. On a 64-bit machine, why not offer an option to work with the files entirely in RAM?
Again, any suggestions that may help us reduce link time are welcome.

Generally, using DLLs instead of static libraries will improve linking times quite a bit.
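Converting to DLLs does mean adding the export/import boilerplate the question mentions; a minimal sketch of that boilerplate (MYLIB_API and MYLIB_EXPORTS are placeholder names, one such pair per DLL project, not anything from the original code) looks like this:

    // mylib_api.h -- hypothetical header, one per DLL project.
    // The DLL project defines MYLIB_EXPORTS in its preprocessor settings,
    // so it exports the symbols; every consumer just imports them.
    #ifndef MYLIB_API_H
    #define MYLIB_API_H

    #if defined(_WIN32)
    #  if defined(MYLIB_EXPORTS)
    #    define MYLIB_API __declspec(dllexport)
    #  else
    #    define MYLIB_API __declspec(dllimport)
    #  endif
    #else
    #  define MYLIB_API   // no-op for the GCC/Linux build
    #endif

    MYLIB_API int ComputeSomething(int input);  // example exported function

    #endif // MYLIB_API_H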

Take a look at Incredibuild by Xoreax. Its distributed compilation dramatically reduced our full build/link times from around 40 minutes to 8 minutes.
Additionally, this product has a feature they call Incredilink which should help you get incremental links working even with statically linked libraries.

I don't think converting to DLLs would be useful.
You could try looking for options to do with optimisation, and turning them off. The linker might be spending a long time looking over the libs for redundant code it can eliminate. Your app may end up bigger or slower, but that may not be a problem to you.

Several people have reported (and I myself have noticed) that modifying a file in a statically linked library will disable incremental linking for the entire solution; this appears to be what you are seeing. See comments here and here for some information about that.
One workaround is to use the Fast Solution Build Add-In. This might involve making a few changes to your workspace, but the payoff is definitely worth it. For a commercial solution, use Xoreax's Incredibuild, which basically incorporates this same technology but adds other features as well. I apologize if I sound like a salesman for Incredibuild - I'm just a very satisfied customer.

I've had similar troubles linking large apps with Visual C++ before. In my case, I simply didn't have enough free RAM, and excessive paging to disk was slowing the linking process to a halt. Doubling my RAM from 1 GB to 2 GB made a dramatic improvement. How much RAM does your dev box have?

I just found out that we had accidentally defined a large table of strings in a header file which got included in pretty much every (static) lib. (I am talking about a huge C++ project.) When the linker created the EXE, it looks like the unification of the table (only a single copy ends up in the EXE) or the parsing of the libs took forever. Moving the table into a separate C++ file shaved a couple of minutes off the link on a relatively slow machine.
Unfortunately, I don't know how to find stuff like that other than by chance.
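The kind of fix being described looks roughly like this (a sketch with invented names): the header only declares the table, and a single .cpp file defines it, so each .obj no longer drags its own copy of the data into the link.

    // string_table.h -- before the fix, the whole array lived here and every
    // translation unit that included it carried its own copy in its .obj file.
    #ifndef STRING_TABLE_H
    #define STRING_TABLE_H

    extern const char* const kMessages[];   // declaration only
    extern const unsigned    kMessageCount;

    #endif // STRING_TABLE_H

    // string_table.cpp -- the single definition the linker now has to deal with.
    #include "string_table.h"

    const char* const kMessages[] = {
        "out of memory",
        "file not found",
        "access denied",
        // ...hundreds more entries in the real project
    };
    const unsigned kMessageCount = sizeof(kMessages) / sizeof(kMessages[0]);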

For debug builds, one can use incremental linking, which can improve link times a lot.
Sadly, there are certain pitfalls, and VS2005 will not warn you about them.
If using static libraries, incremental linking will not work when you modify a file that is part of a static library. The solution is to set the linker option "Use Library Dependency Inputs" to "Yes" (this is the same as Fast Solution Build in VS2003).
If using #pragma comment(lib, ...) to pull in the import library of a DLL, specifying a relative path instead of the library name alone will make incremental linking stop working. The solution is to specify the library name alone and use the linker option /LIBPATH to add the additional library path.
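For example (foo.lib is a hypothetical library name), the idea is:

    // Problematic: a relative path inside the pragma defeats incremental linking.
    // #pragma comment(lib, "..\\third_party\\foo\\foo.lib")

    // Better: name the library alone...
    #pragma comment(lib, "foo.lib")
    // ...and supply the directory through the linker settings instead,
    // i.e. Additional Library Directories / the /LIBPATH:..\third_party\foo switch.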
Sometimes the .ilk file becomes corrupted (grows beyond 200 MB), and then suddenly the incremental linker takes more than 10 times the normal time. Sometimes it will complain about the .ilk file being corrupt, but usually only after several minutes. The solution for me was to set up the following command as the "Build Event" -> "Pre-Link Event":
for %%f in ($(IntDir)*.ilk) do (if %%~zf GTR 200000000 del "%%f")

60 libs to link does sound like a fair few. This may be a bit of an extreme measure, but it might radically speed things up. Create a new solution, with a few projects, and add all the source from your existing projects to these. Then build and link them instead, and just keep the small ones for testing.

Get a quicker computer with multiple processors and enable parallel builds (this might be on by default). To allow the greatest amount of parallelism, make sure your project dependencies are correct and you haven't got unnecessary dependencies.

If you are truly talking about link times, then things like fast solution build and Xoreax won't really help much (except for Incredilink, which might). Assuming that you are truly measuring link start to link end, then I would suggest that the number of libs that you have is the issue.
The link phase is, at least initially, I/O bound in loading up all of the object and lib files. You might be in a situation where you have 60 libraries along with a main project containing some large number of .obj files. I suspect that you might be seeing, at least in part, typical Windows slowness in loading up all of those libs and .obj files.
You can easily test this. Take all of those lib files and build one single lib file just as a test. Instead of linking with 60 of them, link with one and see where your time goes. That would be interesting.
NTFS is notoriously slow. It shouldn't be 7 minutes vs. 34 seconds on Linux slow, but it might be part of the issue. Using DLLs will help, but you will pay for it in application startup time, although that will not be nearly as bad. I am confident that you won't have 7-minute application startup times.

You can try looking at this: http://msdn.microsoft.com/en-us/library/9h3z1a69.aspx
Basically, you can run project builds in parallel if you have several cores.

I solved my link problem and want to share it with all of you.
My project's link time was 7 minutes with /INCREMENTAL:NO.
It was 15 minutes with /INCREMENTAL (7 minutes of link time plus 7 minutes of manifest embedding), so I had turned incremental linking off.
Then I found that Additional Dependencies listed a.lib AND that Ignore Specific Libraries listed it too!
So I removed it from Ignore Specific Libraries and turned /INCREMENTAL back on. The first link took 5 minutes, but the manifest embedding time went away.
I don't know why, but incremental linking now works.
I rolled back all my project code so I could confirm the problem came from that library.
If you have tried everything above, you can try my method too. Good luck!

Step 1 in C++ build time reduction is more memory. After switching from 4GB to 12GB, I saw my link-all-projects time fall off a cliff: from 5:50 to 1:15.

Related

How to profile building?

I am working on a large (~1 MLOC) C++ application which takes too long to build from source (on Windows using Visual Studio, on the Mac using a Makefile or Xcode). I would like to know where to start optimizing (e.g. precompiled headers, forward declarations, ...).
As with performance of the application itself, I would like to profile the build process before I start optimizing.
What tools are available to support this?
Firstly, please state exactly which version of Visual Studio you're using. If possible, upgrade to VS2010, as it has much better support for parallel building. Here are several things to consider:
Put the source tree on a different disk from the system disk. If you can stretch to 2 SSDs (1 for system, 1 for source) then this makes a huge difference.
Enable parallel builds. In VS2010 this halved our build time for a project about the same size as yours. Enable the 'Multiprocessor compilation' switch (/MP). You may find that one or two of your projects need this turned off if they have strange dependencies, but as long as it's on for most projects you'll get a massive boost.
VS2010 has verbose build-timing logging options which can help you isolate the time spent in different projects. VS2005/2008 have a build-timing option as well.
If you have VS2005 or VS2008 then try out the MPCL plugin (it's not free but very cheap), which will do better parallel building than VS itself. If you have the budget, there are tools like Incredibuild.
If you're using Makefiles then use the -j flag to parallelise. If you're using Xcode then you can use distributed builds if you have other Macs available (I've never had any luck with this myself though).
You could look into using ccache with gcc.
Enable precompiled headers for all or most projects (a minimal sketch follows this list). It may take a bit of experimenting to work out how much benefit you get -- you hit diminishing returns quite quickly the more you put in them (and the more you have in them, the more rebuilds you'll need to do).
Read John Lakos's book Large-Scale C++ Software Design, which is a fantastic source of advice for how to split up large projects to isolate dependencies.
Consider a two-stage build process. If you have lots of third-party libraries that need to be built, or other libraries that don't change all that often, then set up a separate project for them. Try building that in parallel with your main project, or save the binaries. Consider checking the binaries into your source control system (yes, I know checking binaries into SCM is generally considered evil, but I believe you have to be pragmatic).
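Following up on the precompiled-header item above, here is a minimal MSVC-style sketch (the stdafx names are just the conventional defaults; adjust to your project): stdafx.cpp is compiled with /Yc"stdafx.h" to create the .pch, and every other .cpp is compiled with /Yu"stdafx.h" and must include it first.

    // stdafx.h -- the precompiled header: only stable, heavy includes belong here.
    // Everything that includes it is rebuilt whenever this file changes, so keep
    // it to headers that rarely change (standard library, OS, third-party).
    #pragma once
    #include <windows.h>
    #include <string>
    #include <vector>
    #include <map>

    // stdafx.cpp -- compiled with /Yc"stdafx.h" to generate the .pch file.
    #include "stdafx.h"

    // some_other_file.cpp -- compiled with /Yu"stdafx.h"; the include must come first.
    #include "stdafx.h"
    #include "some_other_file.h"  // hypothetical project header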
There are many ways of improving build times. One of them is of course more hardware, i.e. faster disks and more RAM. Another is compiler features like precompiled headers. There are also external tools that can help, like distcc or ccache. For GNU make, there is also the -j option to run several jobs in parallel.

Why does C++ linking use virtually no CPU?

On a native C++ project, linking right now can take a minute or two. Yet, during this time CPU drops from 100% during compilation to virtually zero. Does this mean linking is primarily a disk activity?
If so, is this the main area an SSD would make big changes? But, why aren't all my OBJ files (or as many as possible) kept in RAM after compilation to avoid this? With 4 GB of RAM I should be able to save a lot of disk access and make it CPU-bound again, no?
Update: so the obvious follow-up is, can the VC++ compiler and linker talk together better to streamline things and keep OBJ files in memory, similar to how Delphi does it?
Linking is indeed primarily a disk-based activity. Borland Pascal (back in the day) would keep the entire program in memory, which is why it would link so fast.
Your OBJ files aren't kept in RAM because the compiler and linker are separate programs. If your development environment had an integrated compiler and linker (instead of running them as a separate processes), it could indeed keep everything in RAM.
But you would lose the ability to separate the development environment from the compilers and/or linkers - you would have to use the same compiler/linker, and you wouldn't be able to run the compiler outside the environment.
You can try installing one of those RAM disk utilities and keeping your obj directory, or even the whole project directory, on the RAM disk. That should speed things up considerably.
Don't forget to make it permanent afterwards :-D
The Visual Studio linker is largely I/O bound, but how much so depends on a few variables.
Incremental linking (common in Debug builds) generally requires a lot less I/O.
Writing a PDB file (for symbols) can consume a lot of the time. It's a specific bottleneck that Microsoft targeted in VS 2010. The PDB writing is now done asynchronously. I haven't tried it, but I've heard it can help link times quite a bit.
If you're using link-time code generation (LTCG) (common in Release builds), you have all the usual I/O initially. Then, the linker re-invokes the compiler to re-generate code for sections that can be further optimized. This portion is generally much more CPU-intensive. Offhand, I don't know if the linker actually spins up the compiler in a separate process and waits (in which case you'll still see low CPU usage for the linker process), or if the compilation is done in the linker process (in which case you'll see the linker go through phases of heavy I/O then heavy CPU).
Using an SSD can help with the I/O bound portions. Simply having a second drive can help, too. For example, if your source and objects are all on one drive, and you write your PDB to a separate drive, the linker should spend less time waiting for the PDB writer. Having a second spinning drive has helped my current team's link times dramatically.
In debug builds in Visual Studio you can use incremental linking, which usually lets you avoid a lot of the time spent on linking. Basically it means that instead of linking the whole EXE (or DLL) file from scratch, it builds upon the one you last linked, replacing only the things that changed.
This is however not recommended for release builds, since it adds some runtime overhead and can result in an EXE file that is several times larger than usual.
It's hard to say what exactly is taking the linker so long without knowing how it is interacting with the OS. Thankfully, Microsoft provides Process Monitor so you can do just that.
It's helped me diagnose bugs with the Visual Studio IDE and debugger without access to source.

Partial builds versus full builds in Visual C++

For most of my development work with Visual C++, I am using partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this practice, but wonder: is this necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software behave any differently from a testing perspective?
For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers.
The partial build system works by checking file dates of source files against the build results. So it can break if you e.g. restore an earlier file from source control. The earlier file would have a modified date earlier than the build product, so the product wouldn't be rebuilt. To protect against these errors, you should do a complete build if it is a final build. While you are developing though, incremental builds are of course much more efficient.
Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system.
The basic problem is that compilation is dependent on the environment (command-line flags, libraries available, and probably some Black Magic), and so two compilations will only have the same result if they are performed in the same conditions. For testing and deployment, you want to make sure that the environments are as controlled as possible and you aren't getting wacky behaviours due to odd code. A good example is if you update a system library, then recompile half the files - half are still trying to use the old code, half are not. In a perfect world, this would either error out right away or not cause any problems, but sadly, sometimes neither of those happen. As a result, doing a complete recompilation avoids a lot of problems associated with a staggered build process.
Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.
This by itself seems to me to be good enough reason to do a full rebuild before a release.
Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think.
I would definitely recommend it. I have seen, on a number of occasions with a large Visual C++ solution, the dependency checker fail to pick up some dependency on changed code. When that change is to a header file that affects the size of an object, very strange things can start to happen.
I am sure the dependency checker has got better in VS 2008, but I still wouldn't trust it for a release build.
The biggest reason not to ship an incrementally linked binary is that some optimizations are disabled. The linker will leave padding between functions (to make it easier to replace them on the next incremental link). This adds some bloat to the binary. There may be extra jumps as well, which changes the memory access pattern and can cause extra paging and/or cache misses. Older versions of functions may continue to reside in the executable even though they are never called. This also leads to binary bloat and slower performance. And you certainly can't use link-time code generation with incremental linking, so you miss out on more optimizations.
If you're giving a debug build to a tester, then it probably isn't a big deal. But your release candidates should be built from scratch in release mode, preferably on a dedicated build machine with a controlled environment.
Visual Studio has some problems with partial (incremental) builds (I mostly encountered linking errors), so from time to time it is very useful to do a full rebuild.
In case of long compilation times, there are two solutions:
Use a parallel compilation tool and take advantage of your (assumed) multi core hardware.
Use a build machine. What I use most is a separate build machine with a CruiseControl setup that performs full rebuilds from time to time. The "official" release that I provide to the testing team, and, eventually, to the customer, is always taken from the build machine, not from the developer's environment.

What are the pros + cons of Link-Time Code Generation? (VS 2005)

I've heard that enabling Link-Time Code Generation (the /LTCG switch) can be a major optimization for large projects with lots of libraries to link together. My team is using it in the Release configuration of our solution, but the long compile-time is a real drag. One change to one file that no other file depends on triggers another 45 seconds of "Generating code...". Release is certainly much faster than Debug, but we might achieve the same speed-up by disabling LTCG and just leaving /O2 on.
Is it worth it to leave /LTCG enabled?
It is hard to say, because that depends mostly on your project - and of course the quality of the LTCG provided by VS2005 (which I don't have enough experience with to judge). In the end, you'll have to measure.
However, I wonder why you have so much trouble with the extra duration of the release build. You should only hand out reproducible, stable, versioned binaries that have reproducible or archived sources. I've rarely seen a reason for frequent, incremental release builds.
The recommended setup for a team is this:
Developers typically create only incremental debug builds on their machines. Building a release should be a complete build from source control to redistributable (binaries or even setup), with a new version number and labeling/archiving the sources. Only these should be given to in-house testers / clients.
Ideally, you would move the complete build to a separate machine, or maybe a virtual machine on a good PC. This gives you a stable environment for your builds (includes, 3rd party libraries, environment variables, etc.).
Ideally, these builds should be automated ("one click from source control to setup"), and should run daily.
It allows the linker to do the actual compilation of the code, and therefore it can do more optimization such as inlining.
If you don't use LTCG, the compiler is the only component in the build process that can inline a function, that is, replace a "call" to a function with the body of the function, which is usually a lot faster. The compiler will only do so for functions where this yields an improvement.
It can therefore only inline functions whose bodies it can see. This means that if a function in a .cpp file calls another function which is not implemented in the same .cpp file (or in an included header), the compiler doesn't have the actual body of the function and therefore cannot inline it.
But if you use LTCG, it's the linker that does the inlining, and it has all the functions in all of the .cpp files of the entire project, minus referenced .lib files that were not built with LTCG. This gives the linker (which effectively becomes the compiler) a lot more to work with.
But it also makes your build take longer, especially when making incremental changes. You might want to turn on LTCG only in your release build configuration.
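A small illustration of the cross-file case (hypothetical files): without LTCG the compiler cannot inline Square into the loop in main.cpp because its body lives in a different .cpp; with /GL on the compiler and /LTCG on the linker, code generation is deferred to link time, where both bodies are visible.

    // math_utils.h
    int Square(int x);              // declaration only -- the body is not visible to callers

    // math_utils.cpp
    #include "math_utils.h"
    int Square(int x) { return x * x; }

    // main.cpp
    #include "math_utils.h"
    int main()
    {
        long long sum = 0;
        for (int i = 0; i < 1000000; ++i)
            sum += Square(i);       // can only be inlined at link time (LTCG)
        return static_cast<int>(sum & 0x7fffffff);
    }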
Note that LTCG is not the same as profile-guided optimization.
I know the guys at Bungie used it for Halo 3; the only con they mentioned was that it sometimes messed up their deterministic replay data.
Have you profiled your code and determined the need for this? We actually run our servers almost entirely in debug mode, but special-case a few files that profiled as performance critical. That's worked great, and has kept things debuggable when there are problems.
Not sure what kind of app you're making, but breaking up data structures to correspond to the way they were processed in code (for better cache coherency) was a much bigger win for us.
I've found the downsides are longer compile times and that the .obj files produced in that mode (with LTCG turned on) can be really massive. For example, .obj files that might normally be 200-500 KB were about 2-3 MB. It just happened to me that compiling a bunch of projects in my chain led to a 2 GB folder, the bulk of which was .obj files.
I also don't see problems with extra compilation time using link-time code generation with the release build. I only build my release version once per day (overnight), and use the unit-test and debug builds during the day.
