Difference in compilation time between Release and Debug mode - C++

I have a .sln file for compiling C source code. When I compile it in VS2008 in Release mode, it takes around 4 minutes to compile the code, but in Debug mode it takes only 1 minute.
I don't understand the difference between Release mode and Debug mode.
Can anyone help me with this?

The optimizer is turned on by default in the Release configuration. Yes, it needs time to do its job. The linker also no longer does incremental links, which can make a big difference.
You rarely need to care about this; release builds are something you do when you're done, or leave up to a build server.

When building in debug mode, the only extra work the compiler does is adding debug information (to simplify: basically a table of all symbols), which is simple and fast. When building in release mode, the compiler performs many optimizations, and those can be quite time-consuming if the code is non-trivial.
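For reference, these are roughly the switches a wizard-generated VS2008 project uses in each configuration (a sketch from memory; check your project's C/C++ and Linker "Command Line" property pages for the exact ones):

    Debug:    cl /Od /MDd /Zi /RTC1 /D "_DEBUG" ...   link /DEBUG /INCREMENTAL ...
    Release:  cl /O2 /MD /GL /D "NDEBUG" ...          link /LTCG /INCREMENTAL:NO /OPT:REF /OPT:ICF ...

/Od disables optimization entirely, which is a large part of why the Debug build compiles so quickly; /GL plus /LTCG is the whole-program "Generating code..." step that tends to dominate the Release link.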

In release mode, the compiler spends much more effort working out optimisations - this can be quite time-consuming, because it does something similar to a sudoku solver or chess engine: it tries a lot of different options to find the best one for this particular piece of code.

Related

Visual Studio (C++) debugging in debug mode vs release mode

I'm new to Windows development, and my programming skills are not that strong (I'm from an EE background, majoring in semiconductors), but at least I understand the fundamentals of C/C++.
Regarding Windows C++ projects, I found that I can debug both debug and release builds (by adding breakpoints and reading the values of variables) in Visual Studio. I did some research and found that as long as there is a PDB file, I can do the debugging. However, does a "debuggable" release build impact performance?
I also read about disabling debugging in Visual C++ projects. If I disable debugging, will the performance of a release build be better than that of a debug-enabled release build?
Sorry for my broken English.
No, it makes no difference. The linker's /DEBUG option is simply turned off by default for the Release build. The PDB it generates isn't all that useful for debugging anyway; the optimizer that's turned on for the Release build makes a big ole mess of your debugging session. You'll have trouble setting breakpoints on some statements, see single-stepping act weird (the code highlight moves around unpredictably), and find the debugger unable to show you variable values.
Still, sometimes you really need the PDB file; it's invaluable when you get a minidump back, recorded by a customer when your program crashed and burned a thousand miles away. You need to plan for that: generate the PDBs and store them so you'll have them available when you analyze the minidump.
Enabling PDB generation doesn't affect code generation, so the performance of your Release code won't change if you enable PDBs.
(Do note that debugging of optimised code is not as reliable as debugging non-optimised code... you'll find that the current line seems to jump around, and that you can't always rely on the reported values of variables.)
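If you do want a usable PDB for a Release build, a common recipe (sketched here with made-up file names) is to compile with /Zi and link with /DEBUG, while explicitly re-enabling the optimizations that /DEBUG would otherwise turn off, so the generated code stays the same as a plain Release link:

    cl /O2 /MD /Zi myprog.cpp /link /DEBUG /PDB:myprog.pdb /OPT:REF /OPT:ICF

(/DEBUG flips the linker's defaults to /OPT:NOREF and /OPT:NOICF, which is why the two /OPT switches are spelled out explicitly.)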
A binary can be debugged on Windows with or without a PDB file. A PDB is a database of sorts which provides information to the debugger, such as the names of locals, the types of locals, offset-to-source mapping, and so on. None of this is strictly necessary to debug; it just makes it a whole lot nicer. If you were so inclined, you could debug the assembly directly with no PDB.
Hence there is really no concept of "disabling debugging". Really it comes down to whether you build a Debug or Release build. A Debug build is generally much more debuggable than a Release build because the compiler will take care to keep interesting locals around and insert no-ops to make stepping nicer. Release builds are all about the performance of the final output and will sacrifice easy debugging to achieve it.

Why disable incremental linking in debug?

In MS Visual C++ 2008, is there any reason to disable incremental linking in debug builds?
From my limited reading, enabling incremental linking gives me faster linking and Edit & Continue.
I'm at a loss to find any reason why you'd disable this great feature. What are the disadvantages? Is it flaky?
EDIT:
I'm working with a solution with multiple projects (a handful of DLLs linking to a couple of EXEs), and most (but not all) have incremental linking disabled in debug.
Where does the question come from? You just saw the option and decided to ask?
Generally it should work pretty well, and unless it doesn't, there's no reason to disable it. But sometimes the dependencies aren't handled properly and you need to rebuild everything manually. If this happens often in your project, then you should disable it.
In complex solutions with many dependencies it can sometimes get flaky. For example, changing a file in a library might not trigger relinking of the executable for whatever reason, or something like that. Obviously that's not the normal course of events, but disabling the feature makes it easier to avoid the problem if it occurs.
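For completeness, turning it off is just a linker setting: Project Properties -> Linker -> General -> Enable Incremental Linking -> No, which corresponds to the /INCREMENTAL:NO switch, e.g. (output and object file names below are made up):

    link /INCREMENTAL:NO /DEBUG /OUT:MyApp.exe main.obj util.obj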

Debugging c++ core files for released software

I'm trying to find a way to debug core files sent to me from released versions of my software (C++ code compiled with gcc). Ideally, I'd like to be able to deploy release builds and keep debug builds on hand to use for debugging, so I have symbol tables, etc.
My problem is that (as I understand it) the debug and release builds are not guaranteed to be the same - so a core file from the field may just look like garbage when I fire up gdb and point it at my debug executable.
Is there a way around this (and here's the catch) without impacting size or performance of my released software? It's a large application, and performance of the debug build is probably not acceptable to customers. I've looked at suggestions to build once (debug), then strip symbol tables and ship that as the release build, but I'm going to see a performance hit with that approach, won't I?
Does anyone have suggestions for things they've tried or currently use that address this problem?
Thanks!
You can compile and link with optimization turned on and still generate debug symbols (-O3 -g), and then extract the debug symbols into a separate file. This way you have the debug symbols around but can ship without them, and you won't pay a performance penalty. See "How to generate gcc debug symbol outside the build target?" for how to do that.
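A minimal sketch of that workflow, assuming a single-file program (the file names are made up):

    # build optimized, but with debug info
    g++ -O2 -g -o myapp main.cpp

    # move the debug info into a separate file and strip the binary you actually ship
    objcopy --only-keep-debug myapp myapp.debug
    strip --strip-debug --strip-unneeded myapp
    objcopy --add-gnu-debuglink=myapp.debug myapp

    # later, when a core file comes back from the field
    gdb myapp core      # gdb finds myapp.debug via the debuglink
    # or load the symbols explicitly from within gdb:  (gdb) symbol-file myapp.debug

The stripped binary is what you ship, so there is no size or performance impact; the .debug file stays on your build server, matched to the shipped release by version.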

Partial builds versus full builds in Visual C++

For most of my development work with Visual C++, I am using partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this action, but I wonder whether it is necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software function any differently from a testing perspective?
For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers.
The partial build system works by checking file dates of source files against the build results. So it can break if you e.g. restore an earlier file from source control. The earlier file would have a modified date earlier than the build product, so the product wouldn't be rebuilt. To protect against these errors, you should do a complete build if it is a final build. While you are developing though, incremental builds are of course much more efficient.
Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system.
The basic problem is that compilation is dependent on the environment (command-line flags, libraries available, and probably some Black Magic), and so two compilations will only have the same result if they are performed under the same conditions. For testing and deployment, you want to make sure that the environments are as controlled as possible and you aren't getting wacky behaviours due to odd code. A good example is if you update a system library and then recompile half the files: half are still trying to use the old code, half are not. In a perfect world, this would either error out right away or not cause any problems, but sadly, sometimes neither of those happens. As a result, doing a complete recompilation avoids a lot of problems associated with a staggered build process.
Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.
This by itself seems to me to be good enough reason to do a full rebuild before a release.
Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think.
I would definitely recommend it. I have seen, on a number of occasions with a large Visual C++ solution, the dependency checker fail to pick up some dependency on changed code. When the change is to a header file that affects the size of an object, very strange things can start to happen.
I am sure the dependency checker has got better in VS 2008, but I still wouldn't trust it for a release build.
The biggest reason not to ship an incrementally linked binary is that some optimizations are disabled. The linker will leave padding between functions (to make it easier to replace them on the next incremental link). This adds some bloat to the binary. There may be extra jumps as well, which changes the memory access pattern and can cause extra paging and/or cache misses. Older versions of functions may continue to reside in the executable even though they are never called. This also leads to binary bloat and slower performance. And you certainly can't use link-time code generation with incremental linking, so you miss out on more optimizations.
If you're giving a debug build to a tester, then it probably isn't a big deal. But your release candidates should be built from scratch in release mode, preferably on a dedicated build machine with a controlled environment.
Visual Studio has some problems with partial (incremental) builds (I mostly encountered linking errors), so from time to time it is very useful to do a full rebuild.
In case of long compilation times, there are two solutions:
Use a parallel compilation tool and take advantage of your (presumably) multi-core hardware.
Use a build machine. What I use most is a separate build machine with a CruiseControl setup that performs full rebuilds from time to time (a sketch of such a rebuild follows below). The "official" release that I provide to the testing team, and eventually to the customer, is always taken from the build machine, not from a developer's environment.
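What the build machine runs can be as simple as a scripted full rebuild of the solution (solution name and configuration here are placeholders):

    devenv MySolution.sln /Rebuild "Release|Win32"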

What are the pros + cons of Link-Time Code Generation? (VS 2005)

I've heard that enabling Link-Time Code Generation (the /LTCG switch) can be a major optimization for large projects with lots of libraries to link together. My team is using it in the Release configuration of our solution, but the long compile-time is a real drag. One change to one file that no other file depends on triggers another 45 seconds of "Generating code...". Release is certainly much faster than Debug, but we might achieve the same speed-up by disabling LTCG and just leaving /O2 on.
Is it worth it to leave /LTCG enabled?
It is hard to say, because that depends mostly on your project - and of course the quality of the LTCG provided by VS2005 (which I don't have enough experience with to judge). In the end, you'll have to measure.
However, I wonder why you have so many problems with the extra duration of the release build. You should only hand out reproducible, stable, versioned binaries that have reproducible or archived sources. I've rarely seen a reason for frequent, incremental release builds.
The recommended setup for a team is this:
Developers typically create only incremental debug builds on their machines. Building a release should be a complete build from source control to redistributable (binaries or even setup), with a new version number and labeling/archiving the sources. Only these should be given to in-house testers / clients.
Ideally, you would move the complete build to a separate machine, or maybe a virtual machine on a good PC. This gives you a stable environment for your builds (includes, 3rd party libraries, environment variables, etc.).
Ideally, these builds should be automated ("one click from source control to setup"), and should run daily.
It allows the linker to do the actual compilation of the code, and therefore it can do more optimizations, such as inlining.
If you don't use LTCG, the compiler is the only component in the build process that can inline a function, i.e. replace a "call" to a function with the contents of the function, which is usually a lot faster. The compiler will only do so for functions where this yields an improvement.
It can therefore only do so with functions whose body it has available. This means that if a function in a .cpp file calls another function which is not implemented in the same .cpp file (or in an included header file), the compiler doesn't have the actual body of the function and therefore cannot inline it.
But if you use LTCG, it's the linker that does the inlining, and it has all the functions in all of the .cpp files of the entire project, minus referenced .lib files that were not built with LTCG. This gives the linker (which effectively becomes the compiler) a lot more to work with.
But it also makes your build take longer, especially when doing incremental changes. You might want to turn on LTCG only in your release build configuration.
Note that LTCG is not the same as profile-guided optimization.
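A tiny illustration of the cross-file case described above (file and function names are made up):

    // util.cpp (hypothetical) -- the body of add() is only visible in this translation unit
    int add(int a, int b) { return a + b; }

    // main.cpp (hypothetical)
    int add(int a, int b);   // only the declaration is visible here

    int main()
    {
        // Without LTCG (/GL when compiling, /LTCG when linking) the compiler must emit
        // a real call to add(), because it cannot see its body while compiling main.cpp.
        // With LTCG the linker sees both translation units and is free to inline the
        // call (and possibly fold it down to a constant).
        return add(2, 3);
    }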
I know the guys at Bungie used it for Halo 3; the only con they mentioned was that it sometimes messed up their deterministic replay data.
Have you profiled your code and determined the need for this? We actually run our servers almost entirely in debug mode, but special-case a few files that profiled as performance critical. That's worked great, and has kept things debuggable when there are problems.
Not sure what kind of app you're making, but breaking up data structures to correspond to the way they were processed in code (for better cache coherency) was a much bigger win for us.
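As a sketch of the kind of layout change meant here (not the poster's actual code): if a hot loop touches only one field, storing that field contiguously keeps the cache full of useful data instead of the unused parts of each object.

    #include <cstddef>
    #include <vector>

    // Array-of-structs: summing just 'x' still drags y, z and flags through the cache.
    struct ParticleAoS { float x, y, z; int flags; };

    // Struct-of-arrays: each field is stored contiguously.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
        std::vector<int>   flags;
    };

    float sum_x(const ParticlesSoA& p) {
        float s = 0.0f;
        for (std::size_t i = 0; i < p.x.size(); ++i)
            s += p.x[i];   // sequential reads over one tight array
        return s;
    }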
I've found the downsides are longer compile times and that the .obj files produced in that mode (LTCG turned on) can be really massive. For example, .obj files that might otherwise be 200-500 KB were about 2-3 MB. It just happened to me that compiling a bunch of projects in my chain led to a 2 GB folder, the bulk of which was .obj files.
I also don't see problems with extra compilation time using link-time code generation with the release build. I only build my release version once per day (overnight), and use the unit-test and debug builds during the day.