Low performance of incremental linking in Visual Studio C++

I have a large binary which is built from many static libs and standalone cpp files. It is configured to use incremental linking, and all optimizations are disabled with /Od - it is a debug build.
I noticed that if I change any standalone cpp file, incremental linking runs fast - about 1 minute. But if I change any cpp file in any static lib, it runs long - about 10 minutes, the same time as an ordinary full link. In this case I gain no benefit from incremental linking. Is it possible to speed it up? I use VS2005.

Set "Use Library Dependency Inputs" in the Linker General property page for your project. That will link the individual .obj files from the dependency .lib instead of the .lib, which may have some different side effects.

I'm going to give you a different type of answer: hardware.
What is your development environment? Is there any way to get more RAM or to put your project onto a solid-state drive? I found that using an SSD sped up my link times by an order of magnitude on my work projects. It helped a little with compile times, but the win for linking was huge. Getting a faster system of course also helped.

If I understand correctly (after using Visual Studio for some years), the incremental linking feature does not work for object files that are part of static libraries.
One way to solve this is to restructure your solution so that your application project contains all the source files.

Related

What are the advantages / disadvantages of compiling a github library before including it in your project?

What are the benefits of compiling a library prior to linking it to your project?
My understanding is that a non-compiled version (if available) will result in a smaller overall size. Why would this not be the preferred way to include libraries, then?
What exactly do you mean by "compiling a library" prior to linking? You can compile it as a static library and link that; in this case (assuming LTO is used) the result will be the same as compiling it as part of your project. Or you can compile it as a shared library and link that. The first case results in a smaller overall size when your project has only a single build artifact, because you benefit from having only the necessary parts of the library code present in that artifact. The second case results in a smaller overall size when your project has several build artifacts, because you avoid duplicating the library code in every artifact.

Do .lib files have to be linked every time a project is compiled in Visual Studio 2015?

Right now in a project I'm working on, compile times are taking very long.
We think it's due to the fact that it is linking all the library files every time it has to recompile the project.
Can we speed this up somehow? Do .libs have to be linked every single time, even when making very small changes?
Yes, object libraries have to be re-linked every time the program is built.
However, you can make this less painful by turning those other projects into DLL projects, which defers their linking to load time rather than build time. That can make the program take a little longer to start up (depending on circumstances) and it will make managing the project output a little more cumbersome, but it will speed up project builds by a significant factor.
If you're working with third-party libraries, see if they have DLL versions of the object code (many do), or recompile them as DLLs (if you have the source code), and use those instead. Depending on the library, you may need to make adjustments to your project configuration.
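As a rough sketch of what the DLL route can look like with MSVC (hypothetical file and macro names - the export/import macro below is just the common idiom, not anything from the project in the question):

    // mathlib.h - hypothetical header shared by the DLL and its consumers.
    // MATHLIB_EXPORTS is defined only in the DLL project's preprocessor settings,
    // so the DLL build exports the symbol and client builds import it.
    #pragma once
    #ifdef MATHLIB_EXPORTS
    #define MATHLIB_API __declspec(dllexport)
    #else
    #define MATHLIB_API __declspec(dllimport)
    #endif
    MATHLIB_API int add(int a, int b);

    // mathlib.cpp - compiled into mathlib.dll plus a thin import library mathlib.lib
    #include "mathlib.h"
    int add(int a, int b) { return a + b; }

    // app.cpp - the executable links only the small import library at build time;
    // the actual library code is resolved from mathlib.dll when the program loads.
    #include "mathlib.h"
    int main() { return add(2, 3); }

The saving comes from the fact that the client project re-links only against the thin import .lib on each build, instead of re-linking all of the library's object code.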

Compiler performance: multiple libraries vs one library in CMake

I've long wondered when writing my C++ CMake files which is faster for the compiler:
Putting all my project's cpp files in a single shared .so library using add_library()
Using multiple libraries, one for each class / logical component.
Internet searching has not turned up any relevant articles about this curiosity, so I thought I'd run a very simple experiment. I made two versions of my project - one with a single library and one with two libraries. I found that, over several runs, the single-library version was 25% faster when compiling from scratch, and 1% faster when compiling after modifying a single line in one of the files. Granted, my experiments were not very thorough.
This is surprising to me - I've always thought having multiple libraries would speed up compilation of small changes in just one of the libraries, because the build wouldn't have to touch the other libraries that don't depend on it.
Does anyone know a general performance rule about CMake libraries?
Can anyone explain why I see this difference?
Thanks!
Update:
I'm compiling on Ubuntu 14.04 with gcc 4.8.4

Incredibuild linking

I use IncrediBuild for parallel compiling...
I also need parallel linking, but I couldn't manage it for linking.
Do you know if it is possible to make it parallel?
If there is a way, could you tell me?
If not, do you know any other tools for this purpose?
I have too many projects and I need to link them on separate machines.
Linking is not really susceptible to parallel processing, because it is the natural serialisation point in the production of an executable:
multiple developers can write code in parallel, because code is in many different source files
compilers can compile code in parallel, because multiple instances of the compiler can take the many source files and produce many object files
the linker cannot (easily) work in parallel, because it takes many object files and produces a single executable
So I think you will be out of luck, certainly for the commonly used linkers such as MS and gcc.
IncrediBuild supports the ability to execute linking in parallel.
Go to Agent Settings->Visual Studio Builds->General->Allow linking steps to run in parallel
You can link two projects in parallel.
You cannot link a single project in parallel. This is because Incredibuild is not a compiler or linker itself - it is just a coordinator on top of the existing VS tools. It spins up multiple instances of the compiler for different source files but the VS linker can only be invoked once to link an entire project.
I used IncrediBuild for a while, but it has some bugs with edge cases (e.g. ActiveX interop wrappers) that caused too much trouble. Add to this that Visual Studio can do multi-threaded compiles anyway, and it is not worth the money. (Aside: it is undocumented, but you can do multi-threaded compiles in VS2005 by adding /MP to the C++ project properties.)
There are some general setting suggestions on Improving link time with Incredibuild
You can also skip linking of static libs that you won't distribute by using Incredilink.
We found that adding a signing post-build step would stop IncrediBuild from working on subsequent projects; adding a comment to the post-build step was supposed to help:
rem IncrediBuild_AllowOverlap
See IncrediBuild_AllowOverlap doc

Why are my Visual Studio .obj files massive in size compared to the output .exe?

As background, I am a developer of an open-source project, a c++ library called openframeworks, that is a wrapper for different libraries, like opengl, quicktime, freeImage, etc. In the next release, we've added a c++ library called POCO, which is similar to boost in some ways in that it's an alternative for Java-foundation-library-type functionality.
I've just noticed that in this latest release, where I've added the POCO library as a statically linked library, the .obj files that are produced during compilation are really massive - for example, several .obj files for really small .cpp files are 2 MB each. The overall compiled .obj files are about 12 MB or so. On the flip side, the exes that are produced are small - 300 KB to 1 MB.
In comparison, the same library compiled in code::blocks produces .obj files that are roughly the same size as the exe - they are all fairly small.
Is there something happening with linking and the .obj process in Visual Studio that I don't understand? For example, is it doing some kind of smart prelinking, or other thing, that's adding to the .obj size? I've experimented a bit with settings, such as incremental linking, etc., and not seen any changes.
thanks in advance for any ideas to try or insights !
-zach
note: thanks much! I just tried dumpbin, which says "anonymous object" and doesn't return info about the object. this might be the reason why....
note 2: after checking out the above link, removing LTCG (link-time code generation, /GL) makes the .obj files much smaller and dumpbin understands them. thanks again!!
I am not a Visual Studio expert by any stretch of the imagination, having hardly used it, but I believe Visual Studio employs link-time optimizations, which can make the resulting code run faster but can cost a lot of space in the libraries. Also, it may be (I don't know the internals) that debugging information isn't stripped until the actual linking phase.
I'm sure someone's going to come with a better/more detailed answer anyway.
Possibly the difference is debug information.
The compiler outputs the debug information into the .obj, but the linker does not put that data into the .exe or .dll. It is either discarded or put into a .pdb.
In any case use the Visual Studio DUMPBIN utility on the .obj files to see what's in them.
Object files need to contain sufficient information for linking. In C++, this is name-based. Two object files refer to the same object (data/function/class) if they use the same name. This implies that all object files must contain names for all objects that might be referenced by other object files. The executable, however, will only need the names visible from outside the library. In the case of a DLL, this means only the exported names. The saving is twofold: there are fewer names, and those names are present only once in the DLL.
Modern C++ libraries will use namespaces. These namespaces mean that object names become longer, as they include the names of the enclosing namespaces too.
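To make that concrete, here is a tiny compilable illustration (hypothetical names; the GCC/Clang strings follow the standard Itanium-ABI mangling, and the MSVC decoration shown is approximate):

    // symbol_names.cpp - how namespaces lengthen the names every .obj must carry
    namespace net { namespace http {
    void parse_headers() {}   // GCC/Clang (Itanium ABI): _ZN3net4http13parse_headersEv
    }}                        // MSVC decorates it as roughly ?parse_headers@http@net@@YAXXZ

    void parse_headers() {}   // at global scope the mangled name is just _Z13parse_headersv

    int main() {
        net::http::parse_headers();
        ::parse_headers();
    }

Every object file that defines or calls these functions has to store those full strings, which is part of why deeply namespaced library code produces larger .obj files.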
The compiled library .obj files will be huge because they must contain all of the functions, classes and templates that your end users might eventually use.
Executables which link to your library will be smaller because they will include only the compiled code that they require to run. This will usually be a tiny subset of the library.
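A minimal sketch of that subsetting, with hypothetical file names (with MSVC the granularity is the .obj member pulled from the .lib; it can be made finer-grained with /Gy plus /OPT:REF):

    // widgets.cpp - one member of a hypothetical static library mylib.lib
    int make_widget() { return 1; }

    // gadgets.cpp - another member of the same library
    int make_gadget() { return 2; }

    // app.cpp - the executable only references make_widget(), so the linker
    // pulls widgets.obj out of mylib.lib and leaves gadgets.obj behind;
    // the .lib itself still has to carry both, which is why it is the bigger file.
    int make_widget();
    int main() { return make_widget(); }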