How to change the number of build threads in Xcode?

I'm building a couple of C++ files in Xcode that take a lot of memory to compile (over 1 GB per file). Because I do this on my dual-core laptop, Xcode uses two threads for building. The two threads eventually end up compiling the memory-hungry files simultaneously, so the system suffers memory starvation and the compilation grinds to a near halt.
A sufficient solution for me would be to force Xcode to use only one build thread. Does anybody know a way to change how many build threads Xcode uses?
For those who are interested, the C++ files contain a sizable boost::spirit::qi parser.

The number of threads Xcode uses to perform build tasks is controlled by the PBXNumberOfParallelBuildSubtasks option. You can change it with defaults write com.apple.Xcode <key> <value>. For example:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 8
See Xcode User Defaults for more details.
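To force a single build thread, as asked above, that would be:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1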
There are also many other ways to speed up compilation, from precompiled headers to distributed builds. Read Reducing Build Times for more information.
Good luck!

With Xcode 5, you can use -parallelizeTargets and -jobs NUMBER with xcodebuild. According to xcodebuild --help:
-parallelizeTargets build independent targets in parallel
-jobs NUMBER specify the maximum number of concurrent build operations
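For example, to limit xcodebuild to a single concurrent compile operation (the project name here is hypothetical):
xcodebuild -project MyApp.xcodeproj -jobs 1 build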

For Xcode 4 you must set the IDEBuildOperationMaxNumberOfConcurrentCompileTasks user default, for example:
defaults write com.apple.dt.Xcode IDEBuildOperationMaxNumberOfConcurrentCompileTasks 4
Note the "dt". This won't affect xcodebuild on the command line. To do that, use something like
xcodebuild -IDEBuildOperationMaxNumberOfConcurrentCompileTasks=4 ...
(See http://lists.apple.com/archives/xcode-users/2011/Apr/msg00403.html and http://lists.apple.com/archives/xcode-users/2011/Jul//msg00377.html )

A single build task should never do the same work twice, and certainly not simultaneously! Factor out the massive chunk of common code into a static library so it can be recompiled only when it changes. Set a target dependency in your application on the static library and link in the static library product. Changes to the rest of your application will then no longer require rebuilding the static library, which should speed up build times tremendously.
Try to exhaust all project-level solutions before manipulating Xcode as a whole. It is too easy to cripple Xcode to using only a single thread and forget to change it back when you move on to a new project. The Xcode User Default Reference documents many options that are not exposed via the Preferences interface, including:
PBXNumberOfParallelBuildSubtasks (positive integer)
This allows you to limit Xcode to using only n build threads on every project it compiles.
BuildSystemCacheSizeInMegabytes (positive integer, default 1024)
BuildSystemCacheMinimumRemovalAgeInHours (positive integer, default 24)
Upping the PCH cache size and retention time could help speed up your builds.
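For example, both cache settings can be raised the same way as the thread count above (the values here are illustrative):
defaults write com.apple.Xcode BuildSystemCacheSizeInMegabytes 4096
defaults write com.apple.Xcode BuildSystemCacheMinimumRemovalAgeInHours 48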

Related

Run own code elevated at will from non-elevated plugin DLL

I am making a suite of 64-bit plugin DLLs for a Windows host application using Visual Studio/C++, and from the current version onward, the setup.exe that they come in creates a single shared user-writable folder under ProgramData in which I cache all sorts of (non-user specific) data files. Older versions didn't have that folder yet.
However, the distribution of my plugin binaries is often out of my hands too. They are repackaged by a third-party bundle which can only do dumb file copies of the DLLs (so none of the real setup.exe functionality I need, like creating the folder and setting permissions). And since my binary DLLs are all 100% self-contained, users have historically made a habit of just copying the DLLs around to other machines as they see fit, but that of course also lacks the new folder setup phase.
I am looking into a workaround to have my DLLs create the folder at runtime if it is missing. I know I can't elevate the host process in-place whenever I want, but I thought of the following ways:
Have an extra "FixSetup" entry point in my DLL, and when the need arises, start an elevated RunDLL32.exe and let it use this entry point in my DLL. However, I see all sorts of people all over the place talking about RunDLL being as good as deprecated and advising against using it, but then again that has been said since Windows XP and it's still with us. I also hear of RunDLL having its own runtime context which can change with every Windows release (like switching to high-DPI awareness when that became available), and that it thus is a 'hostile' environment to run in (read it on Raymond Chen's blog IIRC). Should I really be afraid of using it, or is my use case so simple it can barely break? (No GUI, just a wrapped CreateDirectory call.)
Create a small "FixSetup.exe" which just does the folder creation, package it into my DLL's resources, and extract-to-temp + run-elevated it at runtime. While this would bloat my DLLs (depending on how small I can get the .exe), I feel like it's also a more fragile and convoluted solution than 1. above (with file extraction and all; probably best to sign the utility exe too, to keep HIPS / antivirus from acting funny, etc.?).
Alter my DLLs so that they're actually .exes in disguise which happen to export the host-expected DLL entry points, so that I can call them directly (elevated). I know there are some major caveats here (like conflicts between the C runtime being included in DLL or non-DLL mode, Visual Studio probably not approving of these shenanigans, etc.), and honestly I already feel I need a shower just after talking about this one. So while theoretically maybe feasible, it is my last resort.
Does anyone have any advice on my uncertainties above? Or maybe an even better suggestion?
EDIT
I've already managed to get option 1 working, and while it works seamlessly, there's one drawback I spotted: the UAC prompt (understandably) asks whether the user wants to run RunDLL32.exe, signed by Microsoft. This might confuse/scare people no end (that is, if they even read these prompts...). I'd rather have the UAC prompt ask about MyPluginSetup.exe signed by MyCompany, so now I'm more inclined to go with option 2 instead.
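For reference, a minimal sketch of what such a rundll32-invoked entry point could look like; the export name, folder path, and DLL name below are illustrative, not the author's actual code:

// Hypothetical rundll32-compatible entry point that performs the folder setup.
#include <windows.h>
#include <shlobj.h>  // SHCreateDirectoryExW
#pragma comment(lib, "shell32.lib")

extern "C" __declspec(dllexport)
void CALLBACK FixSetup(HWND, HINSTANCE, LPSTR, int)
{
    // Creates intermediate directories as needed; the path is illustrative.
    SHCreateDirectoryExW(nullptr, L"C:\\ProgramData\\MyPlugin", nullptr);
}

// The non-elevated plugin would then trigger the UAC prompt with something like:
//   SHELLEXECUTEINFOW sei = { sizeof(sei) };
//   sei.lpVerb = L"runas";  // request elevation
//   sei.lpFile = L"rundll32.exe";
//   sei.lpParameters = L"C:\\Path\\To\\MyPlugin.dll,FixSetup";
//   ShellExecuteExW(&sei);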

Reduce size of tlog files produced by compiler

Since our build on the build server is slowing down more and more, I tried to find out the cause. It seems to be mostly hanging in the disk I/O operations on the .tlog files, since there is no CPU load and the build still hangs. Even with a project containing only 10 .cpp files, it generates ~5500 rows in the CL.read.1.tlog file.
The suspicious thing is that the file contains the same headers over and over, especially boost headers which take up like 90% of the file.
Is this really the expected behavior, that those files are so big and have redundant content, or is it maybe a problem triggered by our source code? Could cyclic includes or too many header includes cause this problem?
Update 1
After all the comments I'll try to clarify some more details here.
We are only using boost by including the headers and linking the already-compiled libs; we are not compiling boost itself
Yes, SSD is always a nice improvement, but the build server is hosted by our IT and we do not have SSDs available there. (see points below)
I checked some performance counters, especially via perfmon, during compilation. Whereas the CPU and memory load are negligible most of the time, the disk I/O counters and queue sizes are quite high all the time. Disk Activity - Highest Active Time is constantly at 100%, and if I sort by Total (B/sec), the list is full of tlog files which read/write a lot of data to the disk.
Even if 5500 lines of tlog seem okay in some cases, I wonder why the exact same boost headers are contained over and over. Here is a logfile where I censored our own headers.
There is no antivirus interfering. I stopped it for my investigation, since we know it influences our compilation even more.
On my local developer machine with an SSD it takes ~16 min to build our whole solution, whereas on our build server with a "slower" disk it takes ~2 hrs. CPU and memory are comparable. The 5500-line file was just an example from a single project within a solution of 20-30 projects. We have a project with ~30 MB tlog files containing ~60,000 lines; this project alone takes half of the compilation time.
Of course there is some basic CPU load on the machine during compilation. But it is not comparable to other developer machines with SSDs.
Our .NET solution with 45 projects finishes in 12 min (including a setup project with WiX)
As developer machines with SSDs see a reduction from ~2 hrs to ~16 min with a comparable CPU/memory configuration, my assumption for the bottleneck was always the hard disk. Checking for disk-related operations led me to the tlog files, since they caused the highest disk activity according to perfmon.

Minimizing the size of debugging information for testing at a remote location

I am trying to create a way to transfer the debug information of a C++ project to a remote location for testing. In the current development cycle, small changes to the code require the entire binary (hundreds of MB in size, mostly debug info) to be transferred.
Currently my approach to addressing this is by splitting the debugging information from the object files (the size of which without the debugging info is manageable on my connection) using -gsplit-dwarf and then diffing the debug files against a copy of the build currently on the remote box.
The aim is to have a set of patches for the debug files of a project so that new code can be debugged at the remote location. The connection between the remote location and the local machine is slow, so minimizing the size of the patches is paramount, but it should also be balanced against the run time of the tool. I have looked into bsdiff and xdelta as potential solutions and have run into a conundrum: xdelta is fast but its patches are too large, while bsdiff is perfect in terms of size but its run time and memory requirements are higher than I would like.
Is there a tool or approach I am missing, or am I just going about this the wrong way? Some alternative to bsdiff and xdelta, perhaps? I know that a tool like gdbserver won't work in this situation because of some of the requirements we have for the actual debugging. Could some alteration of bsdiff help the performance? And if the approach I'm using is sound, what would be a good way to keep a copy of the build on the remote machine to diff against?
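For reference, the diff/patch cycle being weighed here would look something like this (file names illustrative):
bsdiff old.debug new.debug debug.patch
bspatch old.debug new.debug debug.patch
The first command generates the patch locally; the second reconstructs new.debug from old.debug plus the patch on the remote box.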
The simplest way is to use "strip" to copy the debug info into a separate ".debug" file, and then use "strip" again to remove the debug info from the executable that you will deploy. The "strip" manual explains how to do this; look for the "--only-keep-debug" option.
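Sketched out with GNU binutils, that workflow would look something like this (file names illustrative; the objcopy step is one common way to record the link so gdb can find the separate file):
strip --only-keep-debug -o myapp.debug myapp
strip --strip-debug myapp
objcopy --add-gnu-debuglink=myapp.debug myapp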
After you do this, you can tell gdb about the separate debug info in various ways. The very best way is to use the "build-id" feature. This is what modern Linux distros do. However there are other ways as well. There's a whole section in the gdb manual about separate debug files.
The key point here is that you can start gdb on the stripped executable and it will find the separate debug info automatically. This data can all be local, so you won't need to deploy the debug info.
If you still care about shrinking debug info even when this is done, you can look at the "dwz" tool. This is a DWARF compressor. However this usually only matters if you plan to ship the debug info somewhere -- distros use it to make it easier to download debug info, but ordinary users won't really see the need.

What's the recommended Eclipse CDT configuration for a big C++ project (indexer takes forever)

I'm working on some legacy C++ code written using "vi" and "emacs", and I am trying to build an Eclipse CDT setup to maintain it (on Linux). The two main problems I've been facing are that the indexing takes very long (over 4 h) and that even once it's finished, Eclipse is barely responsive.
The code base is structured in a "3-4 level deep" manner:
/system/${category}/${library}/
/server/${serverName}/${component}/
Example:
/system/CORE/CommandLine/*.cpp
/system/CORE/Connection/*.cpp
...
/server/Authentication/DB/Objects/*.cpp
/server/Authentication/Main/*.cpp
There are about 200 "modules" under /system/* and around 50 under /server/Authentication/*.
There is also an amazingly convoluted make system with 20 years' worth of make code written by people who wanted to show off their make abilities :-)
I've tried two approaches so far
1) Two Eclipse CDT projects, namely /system and /Authentication
2) One Eclipse CDT project per "module", ending up with over 200 projects. I even calculated the dependencies between modules.
In both approaches, indexing takes very long. With approach 1) I get quite a few problems with unresolved dependencies. With approach 2), Eclipse is barely responsive: when I Ctrl+click a function, I can go for a coffee and come back before it responds...
Has anyone out there worked with big projects like these? What do you suggest?
The general recommendation here is to give Eclipse more RAM. First, you will need to tweak your eclipse.ini configuration file, as the default one is not suitable for big projects. Here is my eclipse.ini file:
-startup
plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.100.v20110502
-product
org.eclipse.epp.package.cpp.product
--launcher.defaultAction
openFile
--launcher.XXMaxPermSize
256M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Xms512M
-Xmx4096M
-XX:PermSize=256M
-XX:MaxPermSize=512M
Here I used -Xmx4096M to provide 4 GB of RAM.
To improve responsiveness, you will also need to configure the Indexer cache limits. I recommend increasing all parameters by 2-3 times, depending on project size.
Using the Project resource filters helped me a lot.
I removed from the project tree the folders which I didn't want to modify or submit to indexing.
To create a new filter, right-click on the project, open the Properties panel, and go to Resource -> Resource Filters:
http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.platform.doc.user/concepts/resourcefilters.htm
Sometimes, if your project sources are too big (e.g. about 5 GB), you need to use a filter, otherwise the indexing process never ends correctly.
-Xss8g in eclipse.ini was also needed on Neon to prevent a stack overflow.
Also consider ulimit -Sv unlimited.
Tested on Ubuntu 14.04.

Versioning executable and modifying it in runtime

What I'm trying to do is to sign my compiled executable's first 32 bytes with a version signature, say "1.2.0", and I need to modify this signature at runtime, keeping in mind that:
this will be done by the executable itself
the executable resides on the client side, meaning no recompilation is possible
using an external file to track the version instead of encoding it in the binary itself is also not an option
the solution has to be platform-independent; I'm aware that Windows/VC allows you to version an executable using a .rc resource, but I'm unaware of an equivalent for Mac (maybe Info.plist?) and Linux
The solution in my head was to write the version signature in the first or last 32 bytes of the binary (which I haven't figured out how to do yet) and then modify those bytes when I need to. Sadly, it's not that simple, as I'm trying to modify the same binary that I'm executing.
If you know of how I can do this, or of a cleaner/mainstream solution for this problem, I'd be very grateful. FWIW, the application is a patcher/launcher for a game; I chose to encode the version in the patcher itself instead of the game executable as I'd like it to be self-contained and target-independent.
Update: from your helpful answers and comments, I see that messing with the header/footer of the binary is not the way to go. But regarding write permissions for the running users: the game has to be patched one way or another and the game files need to be modified; there's no way to circumvent that. To update the game, you'll need admin privileges.
I would opt for using an external file to hold the signature and modifying that with every update, but I can't see how I could guard against the user tampering with that file: if they mess up the version numbers, how can I detect which version I'm running?
Update2: Thanks for all your answers and comments; in truth there are two ways to do this: either use an external resource to track the version or embed it in the main application's binary itself. I could only choose one answer on SO, so I accepted the one I'm going with, although it's not the only one. :-)
Modern Windows versions will not allow you to update an installed program file unless you're running with administrator privileges. I believe all versions of Windows block modifications to a running file altogether; this is why you're forced to reboot after an update. I think you're asking for the impossible.
This is going to be a bit of a challenge, for a number of reasons. First, writing to the first N bytes of the binary is likely to step on the binary file's header information, which is used by the program loader to determine where the code & data segments, etc. are located within the file. This will be different on different platforms (see the ELF format and executable format comparison)--there are a lot of different binary format standards.
Assuming you can overcome that one, you're likely to run afoul of security/antivirus systems if you start modifying a program's code at runtime. I don't believe most current operating systems will allow you to overwrite a currently-running executable. At the very least, they might allow you to do so with elevated permissions--not likely to be present while gaming.
If your application is meant to patch a game, why not embed the version in there while you're at it? You can use a string like #Juliano shows and modify that from the patcher while the game is not running - which should be the case if you're currently patching anyways. :P
Edit: If you're working with Visual Studio, it's really easy to embed such a string in the executable with a #pragma comment, according to this MSDN page:
#pragma comment(user, "Version: 1.4.1")
Since the second argument is a simple string literal, it can be concatenated, and I'd have the version in a simple #define:
// somewhere
#define MY_EXE_VERSION "1.4.1"
// somewhere else
#pragma comment(user, "Version: " MY_EXE_VERSION)
I'll give just some ideas on how to do this.
I think it's not possible to change some arbitrary bytes in the executable without side effects. To overcome this, I would create some string in your source code, like:
char *Version = "Version: AA.BB.CC";
I don't know if this is a rule, but you can look for this string in your binary code (open it in a text editor and you will see it). So, you search for these bytes in the binary file and change them to your version number. Their position will probably vary each time you compile the application, so this is possible only if that is not a problem for you.
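For illustration, a minimal sketch of that search-and-patch step, using the "Version: " marker from the string above (the function name is hypothetical, and the new version must have the same length as the old one, e.g. "AA.BB.CC"):

// Finds "Version: " in the binary and overwrites the bytes that follow it.
#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

bool patchVersion(const std::string& path, const std::string& newVersion)
{
    const std::string marker = "Version: ";
    std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
    if (!f) return false;
    // Read the whole file and locate the marker.
    std::vector<char> buf((std::istreambuf_iterator<char>(f)),
                          std::istreambuf_iterator<char>());
    auto it = std::search(buf.begin(), buf.end(), marker.begin(), marker.end());
    if (it == buf.end()) return false;              // marker not found
    f.clear();                                      // clear any EOF state before seeking
    f.seekp((it - buf.begin()) + marker.size());
    f.write(newVersion.data(), newVersion.size());  // overwrite in place, same length
    return f.good();
}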
Because the file is being used (it's running), you have to launch an external program that would do this. After modifying the file, this external program could relaunch the original application.
The version will be stored somewhere in your binary code. Is that useful? How will you retrieve the version number?