Penalty of the MSVS compiler flag /bigobj - c++

A basic Google search for the bigobj issue shows that a lot of people run into fatal error C1128: "number of sections exceeded object file format limit : compile with /bigobj". The error is more likely to occur when one heavily uses C++ template libraries, such as Boost or CGAL.
That error is odd, because it states its own solution: compile with the /bigobj flag!
So here is my question: why is that flag not set by default? There must be a penalty to using it, otherwise it would be the default. That penalty is not documented on MSDN. Does anybody have a clue?
I ask because I wonder whether the CGAL configuration system should set /bigobj by default.
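To make the failure mode concrete, here is a minimal hypothetical sketch (the file name and numbers are made up). Each distinct template instantiation lands in its own COMDAT section of the .obj, and debug information adds further sections per function, so heavily templated code multiplies the section count toward the roughly 2^16-section limit of the default object format. A toy like this stays far below the limit; real hits come from template libraries expanded across many headers.

// bigobj_sketch.cpp -- hypothetical illustration only.
// Compile with: cl /c /Z7 bigobj_sketch.cpp
#include <cstddef>

// Every distinct value<N>() instantiation gets its own COMDAT section,
// and /Z7 debug info contributes additional sections per function.
template <std::size_t N>
std::size_t value() { return N + value<N - 1>(); }

template <>
std::size_t value<0>() { return 0; }

// Forces ~200 instantiations in this single translation unit.
std::size_t total() { return value<200>(); }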

The documentation does mention an important drawback to /bigobj:
Linkers that shipped prior to Visual C++ 2005 cannot read .obj files
that were produced with /bigobj.
So, setting this option by default would restrict the set of linkers that can consume the resulting object files. Better to activate it on an as-needed basis.
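For example, only the translation units that actually trigger C1128 can be compiled with the flag (the file name below is hypothetical):

cl /c /bigobj heavy_templates.cpp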

why is that flag not set by default? There must be a penalty to using that flag, otherwise it would be set by default.
My quick informal experiment shows .obj files to be about 2% larger with /bigobj than without. So it's a small penalty but it's not zero.
Someone submitted a feature request to make /bigobj the default; see https://developercommunity.visualstudio.com/t/Enable-bigobj-by-default/1031214.

Related

Selectively disable C++ Core Guidelines Checker for third party libraries

I would like to try to use the Core Guidelines checker tool on a C++11/14 project, under VS2015.
In my code I use many libraries from Boost, which trigger a lot of warnings. I am not concerned about those warnings, since Boost does a lot of very clever work and its libraries were not written with the aim of conforming to the Guidelines, which they mostly predate.
But with such a flood of warnings I am unable to spot the real issues (at least according to the tool) in my own code.
Is there a way to suppress all the warnings for third-party code? Maybe there is some attribute to place before and after #including the Boost headers?
I have read this page from the Visual C++ Team blog but I have been unable to find an answer there.
There's an undocumented environment variable, CAExcludePath, that filters warnings from files in that path. I usually run with %CAExcludePath% set to %Include%.
You can also use it from MSBuild, see here for an example (with mixed success): Suppress warnings for external headers in VS2017 Code Analysis
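As a rough sketch of that workflow (the Boost path and file name below are hypothetical), set the variable to cover the third-party include directories before running the /analyze build:

set CAExcludePath=%INCLUDE%;C:\local\boost_1_60_0
cl /analyze /I C:\local\boost_1_60_0 /EHsc my_prog.cpp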
The MSVC team is working on something similar to GCC's system headers, which should be a more comprehensive solution to this problem.
Currently, in VS, the feature to suppress warnings from third-party libraries is still experimental, but it is certainly coming.
VS 2017 version 15.6 Preview 1 comes with a feature to suppress warnings from third-party libraries. In the following article, they use "external headers" as a term to refer to the headers from third-party libraries.
https://blogs.msdn.microsoft.com/vcblog/2017/12/13/broken-warnings-theory/
The above article basically says that, to suppress warnings from external headers, you need to 1) specify the external headers and 2) specify a warning level for them. For example, if we have external headers in the some_lib_dir directory and want to compile our code in my_prog.cpp, which depends on those headers, the following command should do the job.
cl.exe /experimental:external /external:I some_lib_dir /external:W0 /W4 my_prog.cpp
Note that /experimental:external is required because this is still an experimental feature, and the details of this feature may change in the future.
Anyway, we need to wait for a future release of Visual Studio for the final form of this feature.

the meaning of visual studio /Z7 [duplicate]

Background
There are several different debug flags you can use with the Visual Studio C++ compiler. They are:
(none): Creates no debugging information; faster compilation times.
/Z7: Produces full symbolic debugging information in the .obj files, using the CodeView format.
/Zi: Produces full symbolic debugging information in a .pdb file for the target, using the Program Database format. Enables support for minimal rebuild (/Gm), which can reduce the time needed for recompilation.
/ZI: Produces debugging information like /Zi, but with support for Edit and Continue.
Issues
The /Gm flag is incompatible with the /MP flag for Multiple Process builds (Visual Studio 2005/2008)
If you want to enable minimal rebuilds, then the /Zi flag is necessary over the /Z7 flag.
If you are going to use the /MP flag, there is seemingly no difference between /Z7 and /Zi according to MSDN. However, the SCons documentation states that you must use /Z7 to support parallel builds.
Questions
What are the implications of using /Zi vs /Z7 in a Visual Studio C++ project?
Are there other pros or cons for either of these options that I have missed?
Specifically, what is the benefit of a single Program Database format (PDB) file for the target vs multiple CodeView format (.obj) files for each source?
References
MSDN /Z7, /Zi, /ZI (Debug Information Format)
MSDN /MP (Build with Multiple Processes)
SCons Construction Variables CCPDBFLAG
Debug Info
CodeView is a much older debugging format that was introduced with Microsoft's old standalone debugger back in the "Microsoft C Compiler" days of the mid-1980s. It takes up more space on disk, it takes longer for the debugger to parse, and it's a major pain to process during linking. We generated it from our compiler back when I was working on CodeWarrior for Windows in 1998-2000.
The one advantage is that CodeView is a documented format, and other tools can often process it when they can't deal with PDB-format debug databases. Also, if you're building multiple files at a time, there's no contention to write into the debug database for the project. However, for most uses these days, using the PDB format is a big win, both in build time and especially in debugger startup time.
One advantage of the old C7 format is that it's all-in-one, stored in the EXE, instead of a separate PDB and EXE. This means you can never have a mismatch. The VS dev tools will make sure that a PDB matches its EXE before it will use it, but it's definitely simpler to have a single EXE with everything you need.
This adds new problems: you need to be able to strip the debug info when you release, and you get a giant EXE file, not to mention the ancient format and the lack of support for other modern features like minimal rebuild. But it can still be helpful when you're trying to keep things as simple as possible: one file is easier than two.
Not that I ever use C7 format, I'm just putting this out there as a possible advantage, since you're asking.
Incidentally, this is how GCC does things on a couple platforms I'm using. DWARF2 format buried in the output ELF's. Unix people think they're so hilarious. :)
BTW the PDB format can be parsed using the DIA SDK.
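For what it's worth, here is a minimal hedged sketch of that, assuming the DIA SDK headers and import libraries that ship with Visual Studio are available and an msdia DLL can be CoCreated (the .pdb file name is made up):

// dia_sketch.cpp -- hypothetical sketch; link with diaguids.lib.
#include <atlbase.h>
#include <dia2.h>
#include <cstdio>

int main()
{
    if (FAILED(CoInitialize(nullptr))) return 1;

    CComPtr<IDiaDataSource> source;
    // Create the msdia COM object that implements IDiaDataSource.
    if (FAILED(CoCreateInstance(__uuidof(DiaSource), nullptr, CLSCTX_INPROC_SERVER,
                                __uuidof(IDiaDataSource), (void**)&source)))
        return 1;

    CComPtr<IDiaSession> session;
    CComPtr<IDiaSymbol>  globalScope;
    if (SUCCEEDED(source->loadDataFromPdb(L"MyProgram.pdb")) &&
        SUCCEEDED(source->openSession(&session)) &&
        SUCCEEDED(session->get_globalScope(&globalScope)))
    {
        BSTR name = nullptr;
        if (globalScope->get_name(&name) == S_OK)
        {
            wprintf(L"Opened PDB, global scope: %s\n", name);
            SysFreeString(name);
        }
    }
    CoUninitialize();
    return 0;
}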
/Z7 keeps the debug info in the .obj files in CodeView format and lets the linker extract it into a .pdb, while /Zi consolidates it into a common .pdb file already during compilation, by syncing with mspdbsrv.exe.
So /Z7 means more file I/O, more disk space used, and more work for the linker (unless /DEBUG:FASTLINK is used), as there is a lot of duplicate debug info in those .obj files. But it also means every compilation is independent and can therefore still be faster than /Zi with enough parallelization.
By now they've improved the /Zi situation though by reducing the inter-process communication with mspdbsrv.exe: https://learn.microsoft.com/en-us/cpp/build/reference/zf
Another use case for /Z7 is "standalone" (though larger) static libraries that don't require shipping a separate .pdb, if you want that. It also avoids the annoying issues arising from the awful default vcxxx.pdb name cl uses unless you override it with a proper /Fd (https://learn.microsoft.com/en-us/cpp/build/reference/fd-program-database-file-name), which most people forget to do.
/ZI is like /Zi but adds additional data etc. to make the Edit and Continue feature work.
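In command-line terms the /Z7-versus-/Zi difference looks roughly like this (file names are hypothetical):

rem Debug info stays inside a.obj and b.obj (CodeView):
cl /c /Z7 a.cpp b.cpp

rem Debug info is consolidated into MyLib.pdb during compilation:
cl /c /Zi /FdMyLib.pdb a.cpp b.cpp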
There is one more disadvantage for /Z7:
It's not compatible with incremental linking, which may alone be a reason to avoid it.
Link: http://msdn.microsoft.com/en-us/library/4khtbfyf%28v=vs.100%29.aspx
By the way: even though Microsoft says a full link (instead of an incremental one) is performed when "An object that was compiled with the /Yu /Z7 option is changed.", it seems this is only true for static libraries built with /Z7, not for object files.
Another disadvantage of /Z7 is the large size of the object files. This has already been mentioned here, but it can escalate to the point where the linker is unable to link the executable because it breaks the size limit of the linker or of the PE format (linker error LNK1248). Visual Studio or the PE format seems to have a hard limit of 2 GB (also on x64 machines), and when building a debug version you may run into it. Apparently this affects not only the size of the final executable but also temporary data; only Microsoft knows the linker internals, but we ran into this problem here (even though the executable was of course nowhere near 2 GB, even in debug). The problem miraculously went away and never came back when we switched the project to /ZI.

Linking taking too long with /bigobj

I am using Visual Studio 2012 to compile a program in debug mode. StylesDatabase.cpp and LanguagesDatabase.cpp used to compile fine without /bigobj, until I removed some functions and moved some others from protected to public.
Both the C++ files are fairly small but use templated container classes like Boost.MultiIndex(es), Boost.Unordered(_maps) and Wt::Dbo::ptrs. Wt::Dbo::ptr is a pointer to a database object and Wt::Dbo is an ORM library.
After this change, the compiler fails and asks me to set /bigobj. Once I set /bigobj the compiler works fine; however, the linker now takes more than 30 minutes.
So my question is:
How come a fairly small file can exceed the limit of Visual C++? What exactly causes the limit to be exceeded?
How can I prevent the limit from being exceeded without splitting the .cpp files?
Why is the linker taking so much time?
I can provide the source if it's necessary.
Your files are not the only ones that the linker has to handle - it also has to deal with library files, and in your case these are the Boost template libraries that require the /bigobj flag. Take a look at this Microsoft page: http://msdn.microsoft.com/en-US/library/ms173499.aspx. Even if your files are small, heavily templated libraries may require you to use /bigobj anyway.
You can think about it this way: somebody had to produce a lot of code so that you could write much less code in your program, but that code produced by someone else is still there and has to be dealt with at some point as well.
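As a side note, you can check how close a given object file is to the section limit with dumpbin (the object file name is derived from the question, so treat it as an example):

dumpbin /headers StylesDatabase.obj | findstr /c:"number of sections"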

Are there any disadvantages to "multi-processor compilation" in Visual Studio?

Are there any disadvantages, side effects, or other issues I should be aware of when using the "Multi-processor Compilation" option in Visual Studio for C++ projects? Or, to phrase the question another way, why is this option off by default in Visual Studio?
The documentation for /MP says:
Incompatible Options and Language Features
The /MP option is incompatible with some compiler options and language features. If you use an incompatible compiler option with the /MP option, the compiler issues warning D9030 and ignores the /MP option. If you use an incompatible language feature, the compiler issues error C2813, then ends or continues, depending on the current compiler warning level option.
Note:
Most options are incompatible because if they were permitted, the concurrently executing compilers would write their output at the same time to the console or to a particular file. As a result, the output would intermix and be garbled. In some cases, the combination of options would make the performance worse.
And it gives a table that lists compiler options and language features that are incompatible with /MP:
#import preprocessor directive (Converts the types in a type library into C++ classes, and then writes those classes to a header file)
/E, /EP (Copies preprocessor output to the standard output (stdout))
/Gm (Enables an incremental rebuild)
/showIncludes (Writes a list of include files to the standard error (stderr))
/Yc (Writes a precompiled header file)
Instead of disabling those other options by default (and enabling /MP by default), Visual Studio makes you manually disable/prevent these features and enable /MP.
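In practice, enabling it at the compiler level just means passing /MP (optionally with a process count) while keeping the listed options off; for example (file names hypothetical):

cl /MP4 /c /EHsc a.cpp b.cpp c.cpp d.cpp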
From our experience the main issues found were:
browse information failing to build due to multiple projects calling bscmake at the same time (useless information nowadays so should be removed as a project setting)
linker failures due to dependency issues and build order issues, something you would not normally see when building normally
Batch builds do not take advantage of multi-processor compilation, at least this was certainly true for 2005-2008 VS editions
warnings generated about pre-compiled headers being incompatible; this occurs when stdafx is built and can be ignored, but a rebuild produces the message again
However, the above are configuration issues that you can resolve; otherwise, the option should be enabled, as it will speed up builds.
Because multi-processor compilation isn't compatible with many other compilation options and also makes heavier use of system resources. It should be up to the developer to decide whether or not it's worth it for them. You can find the full documentation here: http://msdn.microsoft.com/en-us/library/bb385193.aspx
While using /MP will bring some benefit to compilation speed, there is still some performance left on the table due to the way the workload is scheduled: https://randomascii.wordpress.com/2014/03/22/make-vc-compiles-fast-through-parallel-compilation/
The compiler receives jobs in "batches" (a set of source files passed to the compiler) and will only start the next batch when the prior one is finished. That means cycles are left unused on other cores until the longest translation unit in a batch is compiled. There is no sharing of data between the compiler subprocesses.
To improve utilization on multiple cores even further I suggest switching to ninja. I've implemented it in a few projects and it was always a win, e.g. https://github.com/openblack/openblack/pull/68#issuecomment-529172980

Release mode static library much larger than debug mode version

Today I found out that the compiled static library I'm working on is much larger in Release mode than in Debug. I found it very surprising, since most of the time the exact opposite happens (as far as I can tell).
The size in debug mode is slightly over 3 MB (it's a fairly large project), but in release it goes up to 6.5 MB. Can someone tell me what could be the reason for this? I'm using the usual Visual Studio (2008) settings for a static library project and changed almost nothing in the build configuration. In release, I'm using /O2 and "Favor size or speed" is set to "Neither". Could /O2 ("Maximize speed") cause the final .lib to be so much larger than the debug version with all its debugging info?
EDIT:
Additional info:
Debug:
- whole program optimization: No
- enable function level linking: No
Release:
- whole program optimization: Enable link-time code generation
- enable function level linking: Yes
The difference is specifically because of link-time code generation. Read the chapter Link-Time Code Generation in Compilers - What Every Programmer Should Know About Compiler Optimizations on MSDN. It basically says that with LTCG turned on, the compiler produces much more data, which is packed into the static library so that the linker can use it to generate better machine code when actually linking the executable file.
Since you have LTCG off in the Debug configuration, the produced library is noticeably smaller, because it doesn't carry that extra data.
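To make that concrete, here is a rough sketch of the flags involved (file names hypothetical): /GL stores the compiler's intermediate representation in the object files, the librarian bundles it into the .lib, and the actual machine code is only generated at link time under /LTCG.

cl /c /O2 /GL foo.cpp
lib /LTCG /OUT:foo.lib foo.obj
link /LTCG main.obj foo.lib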
PS:
Original Link (not working at 11/09/2015)
The optimization could be the issue here; notably, automatically inlined functions will be bigger but faster in release than in debug.
Personally I've never seen a release PDB be larger than a debug PDB. Same deal for LIBs.