The meaning of Visual Studio /Z7 [duplicate] - c++

Background
There are several different debug flags you can use with the Visual Studio C++ compiler. They are:
(none)
Create no debugging information
Faster compilation times
/Z7
Produce full symbolic debugging information in the .obj files using the CodeView format
/Zi
Produce full symbolic debugging information in a .pdb file for the target using the Program Database format.
Enables support for minimal rebuilds (/Gm) which can reduce the time needed for recompilation.
/ZI
Produce debugging information like /Zi except with support for Edit-and-Continue
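For reference, the flags above map to compiler invocations like the following sketch (file names and the /Fd override are illustrative, not from the original question):

```shell
# No debug info: fastest compile, nothing for the debugger.
cl /c main.cpp

# /Z7: CodeView debug info embedded directly in main.obj.
cl /c /Z7 main.cpp

# /Zi: debug info written to a shared PDB during compilation;
# /Fd picks an explicit PDB name instead of the default vcXXX.pdb.
cl /c /Zi /Fdmain.pdb main.cpp

# /ZI: like /Zi, plus Edit and Continue support.
cl /c /ZI main.cpp
```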
Issues
The /Gm flag is incompatible with the /MP flag for Multiple Process builds (Visual Studio 2005/2008)
If you want to enable minimal rebuilds, then the /Zi flag is necessary over the /Z7 flag.
If you are going to use the /MP flag, then judging by MSDN there is seemingly no difference between /Z7 and /Zi. However, the SCons documentation states that you must use /Z7 to support parallel builds.
Questions
What are the implications of using /Zi vs /Z7 in a Visual Studio C++ project?
Are there other pros or cons for either of these options that I have missed?
Specifically, what is the benefit of a single Program Database format (PDB) file for the target vs multiple CodeView format (.obj) files for each source?
References
MSDN /Z7, /Zi, /ZI (Debug Information Format)
MSDN /MP (Build with Multiple Processes)
SCons Construction Variables CCPDBFLAGS
Debug Info

CodeView is a much older debugging format, introduced with Microsoft's old standalone debugger back in the "Microsoft C Compiler" days of the mid-1980s. It takes up more space on disk, it takes longer for the debugger to parse, and it's a major pain to process during linking. We generated it from our compiler back when I was working on CodeWarrior for Windows in 1998-2000.
The one advantage is that CodeView is a documented format, and other tools can often process it when they can't deal with PDB-format debug databases. Also, if you're building multiple files at a time, there's no contention to write into the debug database for the project. However, for most uses these days, the PDB format is a big win, both in build time and especially in debugger startup time.

One advantage of the old C7 format is that it's all-in-one, stored in the EXE, instead of a separate PDB and EXE. This means you can never have a mismatch. The VS dev tools will make sure that a PDB matches its EXE before it will use it, but it's definitely simpler to have a single EXE with everything you need.
This adds new problems: you need to be able to strip the debug info when you release, and you get a giant EXE file, not to mention the ancient format and the lack of support for modern features like minimal rebuild. Still, it can be helpful when you're trying to keep things as simple as possible: one file is easier than two.
Not that I ever use C7 format, I'm just putting this out there as a possible advantage, since you're asking.
Incidentally, this is how GCC does things on a couple of platforms I'm using: DWARF2 format buried in the output ELFs. Unix people think they're so hilarious. :)
BTW the PDB format can be parsed using the DIA SDK.

/Z7 keeps the debug info in the .obj files in CodeView format and lets the linker extract it into a .pdb, while /Zi consolidates it into a common .pdb file already during compilation by synchronizing with mspdbsrv.exe.
So /Z7 means more file I/O, more disk space used, and more work for the linker (unless /DEBUG:FASTLINK is used), as there is a lot of duplicate debug info in those .obj files. But it also means every compilation is independent, and it can therefore still be faster than /Zi given enough parallelization.
Microsoft has since improved the /Zi situation by reducing the inter-process communication with mspdbsrv.exe: https://learn.microsoft.com/en-us/cpp/build/reference/zf
Another use-case of /Z7 is "standalone" (though larger) static libraries that don't require shipping a separate .pdb, if you want that. It also avoids the annoying issues arising from the awful default vcxxx.pdb name cl uses, unless you override it with a proper /Fd (https://learn.microsoft.com/en-us/cpp/build/reference/fd-program-database-file-name), which most people forget to do.
/ZI is like /Zi but adds additional data etc. to make the Edit and Continue feature work.
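The two workflows described above can be sketched as follows (file and target names are illustrative assumptions, not from the answer):

```shell
# /Z7: each .obj carries its own CodeView info. Compiles are fully
# independent; the linker merges everything into one PDB at the end.
cl /c /Z7 a.cpp b.cpp
link /DEBUG a.obj b.obj /OUT:app.exe

# /Zi: all compiler processes funnel debug info into one shared PDB
# via mspdbsrv.exe during compilation. Name the compiler-side PDB
# explicitly with /Fd instead of relying on the default vcXXX.pdb.
cl /c /Zi /Fdapp_compile.pdb a.cpp b.cpp
link /DEBUG a.obj b.obj /OUT:app.exe
```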

There is one more disadvantage for /Z7:
It's not compatible with incremental linking, which may alone be a reason to avoid it.
Link: http://msdn.microsoft.com/en-us/library/4khtbfyf%28v=vs.100%29.aspx
By the way: even though Microsoft says a full link (instead of an incremental one) is performed when "An object that was compiled with the /Yu /Z7 option is changed.", this seems to be true only for static libraries built with /Z7, not for object files.
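The incompatibility shows up on the link line; a sketch (file names are illustrative):

```shell
# Incremental linking works with /Zi-compiled objects...
cl /c /Zi main.cpp
link /DEBUG /INCREMENTAL main.obj /OUT:app.exe

# ...but with /Z7 objects the linker falls back to a full link,
# so the incremental-link time savings are lost.
cl /c /Z7 main.cpp
link /DEBUG /INCREMENTAL main.obj /OUT:app.exe
```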

Another disadvantage of /Z7 is the large size of the object files. This has already been mentioned here, but it can escalate to the point where the linker is unable to link the executable because it breaks a size limit of the linker or of the PE format (linker error LNK1248). Visual Studio or the PE format appears to have a hard limit of 2 GB (also on x64 machines), and when building a debug version you may run into it. This seems to affect not only the size of the final executable but also temporary data. Only Microsoft knows the linker internals, but we ran into this problem here (the executable was of course nowhere near 2 GB, even in debug). The problem miraculously went away, and never came back, when we switched the project to /ZI.

Related

How to compile WebRTC properly with Visual Studio in debug?

I'm trying to compile WebRTC, but because we use a number of libraries, some of which are closed source and beyond our control, the compilation settings have to match ours rather closely. I've already had to edit the build/config/win/BUILD.gn script to use the /MDd and /MD build flags instead of /MTd and /MT respectively, as we use the multi-threaded DLL runtime. To build, we run
gn gen out/Debug --args="is_debug=true is_clang=false use_lld=false visual_studio_version=2019"
ninja -C out/Debug
However, when linking against webrtc.lib, it fails with multiple errors citing a mismatch in _ITERATOR_DEBUG_LEVEL. I've seen this error plenty of times; it happens when linking a release-built library (_ITERATOR_DEBUG_LEVEL=0) into a debug executable (_ITERATOR_DEBUG_LEVEL=2). However, that's clearly not how I've compiled it. I've tried adding /DEBUG (which should be implied by /MDd, as far as I know), but it produces an identical library with the same issue. I've confirmed, by checking the generated .ninja scripts, that these arguments are in the cflags.
Is there a way to get ninja to properly observe the debug flags?
I had the same issue. Although WebRTC is a powerful library, it is terrible for native development: there is neither good documentation nor examples, especially for using it outside Google's sources.
Please, try this one argument, that helped me:
enable_iterator_debugging=true
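Combined with the invocation from the question above, that would look like this (the other arguments are taken verbatim from the question):

```shell
# enable_iterator_debugging=true makes the debug build of WebRTC use
# _ITERATOR_DEBUG_LEVEL=2, matching a /MDd debug executable.
gn gen out/Debug --args="is_debug=true is_clang=false use_lld=false visual_studio_version=2019 enable_iterator_debugging=true"
ninja -C out/Debug
```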
I don't deal with libwebrtc myself, but I have heard that, long term, you might have a better experience pulling out all the files and using your own build system. Orchid did this, but I haven't looked at it myself.
There are other C/C++ WebRTC implementations if you are doing DataChannels only that might be helpful also!

C++ Compiler - Interaction between /Zi and /Od?

In a release build on Visual Studio 2008 (though this probably applies to any Microsoft compiler), I can set /Zi ("Produces a program database (PDB) that contains type information and symbolic debugging information for use with the debugger. The symbolic debugging information includes the names and types of variables, as well as functions and line numbers.") and also /Ox ("Full Optimization").
These two flags are valid together, but they seem to be in conflict. One of the things the compiler might do to fully optimize the code is to rearrange it, changing which code belongs to which line, but that invalidates the line information being recorded in the program database.
When I have both flags, which one "wins"? Does my code get fully optimized, and some line numbers are not available? Or does the optimizer get set to "less-than-full", so that it leaves line numbers alone?

Difference between /Z7 and /Zi regarding 'debuggability'

I am managing a big application with about xxx source files (Visual Studio 2010).
For quite a few years we have compiled our release builds with /Zi to get a PDB file, which is stored on our symbol server. Over the years we noticed that our builds got slower and slower, and it now takes more than 2 hours to build the executable.
Questions like What are the implications of using /Zi vs /Z7 for Visual Studio C++ projects? seem to indicate that /Z7 is an old format, and that /Zi is preferred.
Nevertheless, we tried executing our build scripts with /Z7 and saw a big reduction, from over 2 hours to about 20 minutes.
We also experimented with /Zi but with one PDB file per source file (which the linker still merges into one big PDB file); this also improves compilation performance, but slightly decreases link-time performance.
To optimize build times I would like to switch back to /Z7 (the linker still generates a PDB file in the end), but I'm not sure whether this will have an impact on the 'debuggability' of the application.
Questions:
Is the internal debugging format of the PDB file generated by the linker different when compiled with /Z7, compared to /Zi (maybe the format is the same, just the place where the debug info is stored by the compiler is different)?
Does /Z7 prevent some kinds of debugging compared to /Zi?
Which debug format (/Zi, /Z7) is advised in general for release builds?

Are there any disadvantages to "multi-processor compilation" in Visual Studio?

Are there any disadvantages, side effects, or other issues I should be aware of when using the "Multi-processor Compilation" option in Visual Studio for C++ projects? Or, to phrase the question another way, why is this option off by default in Visual Studio?
The documentation for /MP says:
Incompatible Options and Language Features
The /MP option is incompatible with some compiler options and language features. If you use an incompatible compiler option with the /MP option, the compiler issues warning D9030 and ignores the /MP option. If you use an incompatible language feature, the compiler issues error C2813, then ends or continues depending on the current compiler warning level option.
Note:
Most options are incompatible because if they were permitted, the concurrently executing compilers would write their output at the same time to the console or to a particular file. As a result, the output would intermix and be garbled. In some cases, the combination of options would make the performance worse.
And it gives a table that lists compiler options and language features that are incompatible with /MP:
#import preprocessor directive (Converts the types in a type library into C++ classes, and then writes those classes to a header file)
/E, /EP (Copies preprocessor output to the standard output (stdout))
/Gm (Enables an incremental rebuild)
/showIncludes (Writes a list of include files to the standard error (stderr))
/Yc (Writes a precompiled header file)
Instead of disabling those other options by default (and enabling /MP by default), Visual Studio makes you manually disable/prevent these features and enable /MP.
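In practice that means stripping the conflicting flags before turning on /MP; a sketch (file names are illustrative):

```shell
# Parallel compilation across processes: /MP alone uses one process
# per logical core, /MP4 caps it at four.
cl /c /MP4 /Zi a.cpp b.cpp c.cpp d.cpp

# Per the table above, /Gm (minimal rebuild) conflicts with /MP:
# the compiler issues warning D9030 and ignores /MP.
cl /c /MP /Gm a.cpp b.cpp
```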
From our experience the main issues found were:
browse information failing to build due to multiple projects calling bscmake at the same time (useless information nowadays, so it should be removed as a project setting)
linker failures due to dependency and build-order issues, something you would not normally see when building serially
batch builds not taking advantage of multi-processor compilation; at least this was certainly true for the 2005-2008 VS editions
warnings about pre-compiled headers being incompatible; this occurs when you build stdafx and can be ignored, but a rebuild generates this message
However, the above are configuration issues which you can resolve, otherwise it should be enabled as it will speed up builds.
Because multi-processor compilation isn't compatible with many other compilation options and also uses more system resources. It should be up to the developer to decide whether it's worth it for them. You can find the full documentation here: http://msdn.microsoft.com/en-us/library/bb385193.aspx
While using /MP brings some benefit to compilation speed, there is still performance left on the table due to the way the workload is scheduled: https://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/
The compiler receives jobs in "batches" (a set of source files passed to the compiler) and only starts the next batch when the prior one has finished. That means cycles are left unused on other cores until the longest translation unit has compiled. There is no sharing of data between the compiler subprocesses.
To improve utilization on multiple cores even further I suggest switching to ninja. I've implemented it in a few projects and it was always a win, e.g. https://github.com/openblack/openblack/pull/68#issuecomment-529172980
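For a CMake-based project, switching to ninja is typically just a generator change; a sketch (directory names are illustrative assumptions):

```shell
# Generate ninja build files instead of MSBuild projects; ninja
# schedules one job per translation unit across all cores, with no
# batch barrier between sets of source files.
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug
cmake --build build
```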

Release mode static library much larger than debug mode version

Today I found out that the compiled static library I'm working on is much larger in Release mode than in Debug. I found this very surprising, since most of the time the exact opposite happens (as far as I can tell).
The size in debug mode is slightly over 3 MB (it's a fairly large project), but in release it goes up to 6.5 MB. Can someone tell me what the reason for this could be? I'm using the usual Visual Studio (2008) settings for a static library project and have changed almost nothing in the build configuration. In release, I'm using /O2 and "Favor Size or Speed" is set to "Neither". Could /O2 ("Maximize Speed") cause the final .lib to be so much larger than the debug version with all its debugging info?
EDIT:
Additional info:
Debug:
- whole program optimization: No
- enable function level linking: No
Release:
- whole program optimization: Enable link-time code generation
- enable function level linking: Yes
The difference is specifically because of link-time code generation. Read the chapter "Link-Time Code Generation" in Compilers - What Every Programmer Should Know About Compiler Optimizations on MSDN; it basically says that with LTCG turned on, the compiler produces much more data, packed into the static library, so that the linker can use that extra data to generate better machine code while actually linking the executable file.
Since you have LTCG off in the Debug configuration, the library it produces is noticeably smaller, since it doesn't carry that extra data.
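The switches behind the "whole program optimization" settings above can be sketched like this (file names are illustrative):

```shell
# /GL makes the compiler emit intermediate code (not final machine
# code) into the .obj/.lib, which is why the static library grows.
cl /c /O2 /GL foo.cpp
lib /LTCG foo.obj /OUT:foo.lib

# The linker then performs the actual code generation at link time.
link /LTCG foo.lib main.obj /OUT:app.exe
```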
PS:
Original link (not working as of 11/09/2015)
The optimization could be the issue here; notably, automatically inlined functions will be bigger but faster in release than in debug.
Personally I've never seen a release PDB be larger than a debug PDB. Same deal for LIBs.