I use IncrediBuild for parallel compiling...
I also need parallel linking, but I couldn't manage to get it working for linking.
Do you know whether it is possible to make linking parallel?
If there is a way, could you tell me?
If not, do you know of any other tools for this purpose?
I have a lot of projects and I need to link them on separate machines.
Linking is not really susceptible to parallel processing because it is the natural serialisation point in the production of an executable:
multiple developers can write code in parallel, because code is in many different source files
compilers can compile code in parallel, because multiple instances of the compiler can take the many source files and produce many object files
the linker cannot (easily) work in parallel, because it takes many object files and produces a single executable
So I think you will be out of luck, certainly with the commonly used linkers such as Microsoft's link and GNU ld.
IncrediBuild supports the ability to execute linking in parallel.
Go to Agent Settings->Visual Studio Builds->General->Allow linking steps to run in parallel
You can link two /projects/ in parallel.
You cannot link a single project in parallel. This is because Incredibuild is not a compiler or linker itself - it is just a coordinator on top of the existing VS tools. It spins up multiple instances of the compiler for different source files but the VS linker can only be invoked once to link an entire project.
I used IncrediBuild for a while, but it has some bugs with edge cases (e.g. ActiveX interop wrappers) that caused too much trouble. Add to this that Visual Studio can do multi-threaded compiles anyway, and it is not worth the money. (Aside: it is undocumented, but you can do multi-threaded compiles in VS2005 by adding /MP to the C++ project properties.)
There are some general settings suggestions in Improving link time with Incredibuild.
You can also use Incredilink to skip linking of static libs that you won't distribute.
We found that adding a signing post-build step would stop IncrediBuild from working on subsequent projects; adding a comment to the post-build step was supposed to help:
rem IncrediBuild_AllowOverlap
See IncrediBuild_AllowOverlap doc
I have been learning Chapel with small programs and they are working great. But as a program becomes longer, the compilation time also becomes longer. So I looked for a way to compile multiple files one by one, but without success so far. Searching the internet, I found this page and this one, and the latter says:
All of these incremental compilation features are enabled with the new --incremental flag in the Chapel compiler, which will be made available in Chapel 1.14.0 release.
Although the Chapel compiler on my computer accepts this option, it does not seem to generate any *.o (or *.a?) files when compiling a file containing only a procedure (i.e. no main()). Is this because the above project is experimental...? In that case, can we expect this feature to be included in some future version of Chapel?
(Or does "incremental compilation" above not mean what I would expect from usual compilers like GCC?)
My environment: Chapel 1.14.0 installed via Homebrew on Mac OS X 10.11.6.
The Chapel implementation only fully compiles code that is used through the execution of the main() routine. As a starting foray, the incremental compilation project tried to minimize the executable difference between code compiled through normal compilation and code compiled with the --incremental flag. This was to ensure that the user would not encounter a different set of errors when developing in one mode than they would the other. As a consequence, a file containing only a procedure would not be compiled until a compilation attempt when that file/procedure was used.
The project you reference was an excellent first start, but it exposed many issues that the team had not previously considered (including the one you have raised). We're still discussing the future direction of this feature, so it isn't entirely clear what it will entail. One possible expansion is "separate compilation", where code could be compiled into a .o or .a that could be linked into other programs. Again, though, this is still under discussion.
If you have thoughts on how this feature should develop, we would love to hear them via an issue on our GitHub page, or via our developers or users mailing lists.
I've long wondered, when writing CMake files for my C++ projects, which is faster for the compiler:
Putting all my project's cpp files in a single shared .so library using add_library()
Using multiple libraries, one for each class / logical component.
Internet searching has not turned up any relevant articles about this curiosity, so I thought I'd run a very simple experiment. I made two versions of my project: one with a single library and one with two libraries. I found that, across runs, the single-library version was 25% faster when compiling from scratch, and 1% faster when compiling after modifying a single line in one of the files. Granted, my experiments were not very thorough.
This is surprising to me; I've always thought having multiple libraries would speed up compilation of small changes confined to one of the libraries, because the compiler wouldn't have to touch the other libraries that don't depend on it.
Does anyone know a general performance rule about CMake libraries?
Can anyone explain why I see this difference?
Thanks!
Update:
I'm compiling on Ubuntu 14.04 with gcc 4.8.4
I have a rather complex SCons script that compiles a big C++ project.
This gcc manual page says:
The compiler performs optimization based on the knowledge it has of the program. Compiling multiple files at once to a single output file mode allows the compiler to use information gained from all of the files when compiling each of them.
So it's better to give all my files to a single g++ invocation and let it drive the compilation however it pleases.
But SCons does not do this: it calls g++ separately for every single C++ file in the project and then links them using ld.
Is there a way to make SCons do this?
The main reason to have a build system with the ability to express dependencies is to support some kind of conditional/incremental build. Otherwise you might as well just use a script with the one command you need.
That being said, the effect of having gcc/g++ optimize as the manual describes is substantial, in particular if you have C++ templates you use often. Good for run-time performance, bad for recompile performance.
I suggest you try to make your own builder that does what you need. Here is another question with an inspirational answer: SCons custom builder - build with multiple files and output one file.
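A minimal sketch of the idea, using env.Command rather than a full custom builder for brevity; it assumes a GCC toolchain, and the target name and source glob are purely illustrative:

    # SConstruct sketch: hand every .cpp file to a single g++ invocation
    # so the compiler can optimize across translation units.
    env = Environment()

    sources = Glob('src/*.cpp')  # illustrative source layout

    # One command compiles and links everything in one go; SCons still
    # tracks the sources, so the command reruns when any of them change.
    whole_program = env.Command(
        target='myapp',
        source=sources,
        action='g++ -O2 -fwhole-program -o $TARGET $SOURCES',
    )

    Default(whole_program)

The trade-off discussed above applies: a change to any one source file reruns the whole command, so you give up incremental rebuilds in exchange for whole-program optimization.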
Currently the answer is no.
Logic similar to this was developed for MSVC only.
You can see this in the man page (http://scons.org/doc/production/HTML/scons-man.html) as follows:
MSVC_BATCH
When set to any true value, specifies that SCons should batch compilation of object files when calling the Microsoft Visual C/C++ compiler. All compilations of source files from the same source directory that generate target files in a same output directory and were configured in SCons using the same construction environment will be built in a single call to the compiler. Only source files that have changed since their object files were built will be passed to each compiler invocation (via the $CHANGED_SOURCES construction variable). Any compilations where the object (target) file base name (minus the .obj) does not match the source file base name will be compiled separately.
As always patches are welcome to add this in a more general fashion.
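For completeness, a minimal SConstruct sketch of using that variable; it assumes the Microsoft toolchain is available, and the program name and source glob are illustrative:

    # SConstruct sketch: let SCons batch MSVC compilations.
    env = Environment(tools=['msvc', 'mslink'])
    env['MSVC_BATCH'] = True  # any true value enables batching

    # Source files from the same directory that produce object files in the
    # same output directory are handed to cl.exe in a single invocation.
    env.Program('myapp', Glob('src/*.cpp'))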
In general this should be left up to the program developer. Trying to compile everything together as an amalgamation may introduce unintended behaviour into the program, if it even compiles in the first place. Your best bet, if you want this kind of optimisation without editing the source yourself, is to use a compiler with inter-procedural optimisation, like icc -ipo.
An example where an amalgamation of two .c files would not compile is when both files define a static symbol with the same name but different functionality.
I have a large binary which is built from many static libs and standalone cpp files. It is configured to use incremental linking, and all optimizations are disabled by /Od (it is a debug build).
I noticed that if I change any standalone cpp file then incremental linking runs fast: 1 min. But if I change any cpp in any static lib then it runs long: 10 min, the same time as an ordinary link. In this case I gain no benefit from incremental linking. Is it possible to speed it up? I use VS2005.
Set "Use Library Dependency Inputs" in the Linker General property page for your project. That will link the individual .obj files from the dependency .lib instead of the .lib, which may have some different side effects.
I'm going to give you a different type of answer: hardware.
What is your development environment? Is there any way to get more RAM or to put your project onto a solid-state drive? I found that using an SSD sped up my link times by an order of magnitude on my work projects. It helped a little with compile times, but the effect on linking was huge. Getting a faster system, of course, also helped.
If I understand correctly (after using Visual Studio for some years), the incremental linking feature does not work for object files that are part of static libraries.
One way to solve this is to restructure your solution so that your application project contains all source files.
Nowadays almost every user has 2 or 4 cores on the desktop (and on a good number of notebooks). Power users have 6-12 cores with AMD or Core i7 CPUs.
Which x86/x86_64 C/C++ compilers can use several threads to do the compilation?
There are already 'make -j N'-like solutions, but sometimes (with -fwhole-program or -ipo) there is a final big and slow step that is started sequentially.
Can any of these do it: GCC, Intel C++ Compiler, Borland C++ compiler, Open64, LLVM/GCC, LLVM/Clang, Sun compiler, MSVC, OpenWatcom, Pathscale, PGI, TenDRA, Digital Mars?
Is there some upper limit on the number of threads for the compilers that are multithreaded?
Thanks!
GCC has -flto=n or -flto=jobserver to make the linking step (which with LTO does optimization and code generation) parallel. According to the documentation, these have been available since version 4.6, although I am not sure how well they worked in those early versions.
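As an illustration only (the job count and project layout are made up, and any build system that lets you pass these flags would do), an SCons-style setup for GCC 4.6 or later might look like:

    # SConstruct sketch: LTO with a parallel link-time stage on GCC >= 4.6.
    env = Environment(CC='gcc', CXX='g++')
    env.Append(CCFLAGS=['-flto'])       # emit LTO bytecode at compile time
    env.Append(LINKFLAGS=['-flto=4'])   # run link-time optimization with 4 jobs
    env.Program('app', Glob('src/*.cpp'))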
Some build systems can compile independent modules in parallel, but the compilers themselves are still single-threaded. I'm not sure there is anything to gain by making the compiler multi-threaded. The most time-consuming compilation phase is processing all the #include dependencies and they have to be processed sequentially because of potential dependencies between the various headers. The other compilation phases are heavily dependent on the output of previous phases so there's little benefit to be gained from parallelism there.
Newer Visual Studio versions can compile distinct translation units in parallel. It helps if your project uses many implementation files (such as .c, .cc, .cpp).
MSDN Page
It is not really possible to multi-process the link stage. There may be some degree of multi-threading possible, but it is unlikely to give much of a performance boost. As such, many build systems will simply fire off a separate process for separate files. Once they are all compiled, it will, as you note, perform a long single-threaded link. Alas, as I say, there is precious little you can do about this :(
Multithreaded compilation is not really useful as build systems (Make, Ninja) will start multiple compilation units at once.
And as Ferrucio stated, concurrent compilation is really difficult to implement.
Multithreaded linking, though, can be useful (concurrent .o/.a reading and symbol resolution), as this will most likely be the last build step.
The GNU gold linker can be multithreaded, with the LLVM ThinLTO implementation:
https://clang.llvm.org/docs/ThinLTO.html
The Go 1.9 compiler claims to have:
Parallel Compilation
The Go compiler now supports compiling a package's functions in parallel, taking advantage of multiple cores. This is in addition to the go command's existing support for parallel compilation of separate packages.
but of course, it compiles Go, not C++
I can't name any C++ compiler doing likewise, even in October 2017. But I guess the multi-threaded Go compiler shows that multi-threaded C or C++ compilers are possible in principle. There are few of them, though, and making a new one would be a huge amount of work; you would practically need to start such an effort from scratch.
For Visual C++, I am not aware of it doing any parallel compilation (I don't think it does). For versions later than Visual Studio 2005 (i.e. Visual C++ 8), the projects within a solution are built in parallel as far as the solution dependency graph allows.