Do Visual Studio 2012 updates break C++ ABI?

When Microsoft initially released Visual Studio 2012 in September 2012, they announced their plan for providing updates for Visual Studio on a more regular basis. Since then, they have released Visual Studio 2012 Update 1 (Visual Studio 2012.1) in November 2012 and Visual Studio 2012 Update 2 (Visual Studio 2012.2) in April 2013.
My question is: Did the updates introduce any changes to the C++ ABI (with regard to the initial VS2012 version)? Is it safe to link .libs of different VS2012 versions?
I have searched the internet for a while and could not find any definite statement from Microsoft. Some sources mention that some bugs in the C++ code generation have been fixed, but I suppose that does not imply an ABI change?

Stephan T. Lavavej, a key author of Visual C++'s STL implementation, laid out the rules in this Reddit thread:
Here are the precise rules:
If you include any C++ Standard Library headers, you have to play by its rules, and we intentionally break binary compatibility between major versions (but preserve it between hotfixes and service packs). Any representation changes (including but not limited to adding/removing data members) break binary compatibility, which is why this always happens and why we jealously guard this right.
[snip]
So, if you're playing by the STL's rules, you need to ensure the following:
All object files and static libraries linked into a single binary (EXE/DLL) must be compiled with the same major version. We added linker checks, so that mismatching VS 2010+ major versions will trigger hard errors at link time, but if VS 2008 or earlier is involved we can't help you (no time machines). Because the ODR applies here, you really should be using the same toolset (i.e. same service pack level) for all of the object files and static libraries. For example, we fixed a std::string memory leak between VS 2010 RTM and SP1, but if you mix RTM and SP1, the resulting binary may or may not be affected by the leak. (Additionally, you need to be using the same _ITERATOR_DEBUG_LEVEL and release/debug settings; we have linker checks for these now.)
If you have multiple binaries loaded into the same process, and they pass C++ Standard Library objects to each other, those binaries must be built with the same major version and _ITERATOR_DEBUG_LEVEL settings (release/debug should match too, I forget if you can get away with mismatch here). Importantly, we cannot detect violations of this rule, so it's up to you to follow it.
Multiple binaries whose interfaces are purely C or COM (or now WinRT) may internally use different major versions, because those things guarantee binary compatibility. If your interfaces involve the C++ Core Language (e.g. stuff with virtuals) but are extremely careful to never mention any C++ Standard Library types, then you are probably okay - the compiler really tries to avoid breaking binary compatibility.
Note, however, that when multiple binaries loaded into a single process are compiled with different major versions, you'll almost certainly end up with multiple CRTs loaded into your process, which is undesirable.
Bottom line - if you compile everything 100% consistently, you just don't have to worry about any of this stuff. Don't play mixing games if you can possibly avoid it.
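To illustrate the third rule above, here is a minimal sketch (the names and functions are hypothetical, not from the thread) of a DLL boundary that exposes only C types, so the DLL and its consumers can be built with different major toolset versions:

// widget_api.h - hypothetical interface exposing only C types across
// the DLL boundary; std::string etc. stay inside the DLL's own code.
// (Real code would switch between dllexport/dllimport with a macro.)
#ifdef __cplusplus
extern "C" {
#endif

typedef struct Widget Widget;  /* opaque handle; layout is private to the DLL */

__declspec(dllexport) Widget* widget_create(void);
__declspec(dllexport) void widget_set_name(Widget* w, const char* name);
__declspec(dllexport) void widget_destroy(Widget* w);

#ifdef __cplusplus
}
#endif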

Finally, I found an answer to my question in Stephan T. Lavavej's blog post C++11/14 STL Features, Fixes, And Breaking Changes In VS 2013:
The VS Update mechanism is primarily for shipping high-priority bugfixes, not for shipping new features, especially massive rewrites with breaking changes (which are tied to equally massive compiler changes).
Major versions like Visual C++ 2013 give us the freedom to change and break lots of stuff. There's simply no way we can ship this stuff in an Update.
Q5: What about the bugfixes? Can we get those in an Update?
A5: This is an interesting question because the answer depends on my choices (whereas in the previous question, I wouldn't be allowed to ship such a rewrite in an Update even if I wanted to).
Each team gets to choose which bugfixes they take to "shiproom" for consideration to be included in an Update. There are things shiproom won't let us get away with (e.g. binary breaking changes are forbidden outside of major versions), but otherwise we're given latitude to decide things. I personally prioritize bandwidth over latency - that is, I prefer to ship a greater total number of bugfixes in every major version, instead of shipping a lesser total number of bugfixes (over the same period of time) more frequently in multiple Updates.

Related

Building C++ code with different version of Visual Studio produces different file size of .exe?

I can build my own .sln manually on my machine, or have Azure DevOps build it on a remote machine. The software is targeting .NET Core 3.1, and using C++17. I had noticed that building the same code, from the same branch, produced a different-sized .exe: the remote one was 9 KB smaller than the local one.
I finally got the same result when I upgraded the remote machine's version of Visual Studio 2019 to match mine (from 16.8.something up to 16.11.14). But how can this difference be explained? Is there something missing from the smaller file? It should have all the same methods, logic, and functionality. There were no errors, so no part of it could have failed to compile.
I also have to build Java projects with Maven and have heard that it can be built "slightly differently" depending on Maven versions. That made sense at first, but in hindsight I don't know exactly what that means.
Has anyone been really in the weeds with software builds (specifically with Visual Studio and its C++ compiler) who can explain this concept of "slightly different builds", or has a good idea?
Would both versions be functionally identical, or is there no easy way to tell?
The C++ standard does not dictate the machine code that should be produced by the compiler. It just specifies the expected observable behavior.
So if for example you have a for loop, the standard dictates the behavior (initializing, checking the condition etc.). But you can translate it to machine code in various ways, e.g. using different registers, or executing the statements in a different order (as long as the observable behavior is the same).
This principle is called the as-if rule.
So different compilers (or compiler versions) can produce different machine code. The cause might be either different optimizations, or different ways of translating C++ into machine code (as the mapping between C++ and machine code is not 1-1).
Examples related to optimizations:
If you have various statements in the code that are not dependent on each other (e.g. they modify different, unrelated variables), the compiler/optimizer might reorder them if the memory access pattern would be more efficient. Or the compiler/optimizer might eliminate statements that have no observable behavior (like incrementing a variable that is never read afterwards). Another example is whether functions are inlined, which is entirely up to the compiler/optimizer and affects the binary code.
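For instance, in the following sketch (a made-up example, not from the question) the optimizer is free to drop the unused counter entirely, so two compiler versions may emit different machine code, and hence different binary sizes, for the same source:

#include <cstdio>

int sum(const int* data, int size)
{
    int total = 0;
    int iterations = 0;              // never read afterwards, so the
    for (int i = 0; i < size; ++i)   // optimizer may remove it entirely
    {
        total += data[i];
        ++iterations;
    }
    return total;                    // the only observable behavior
}

int main()
{
    int values[] = { 1, 2, 3, 4 };
    std::printf("%d\n", sum(values, 4));  // prints 10 under any conforming compiler
}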
Therefore there's no guarantee for the size (or content) of a compiled binary file.

CMPXCHG16B and MSVC Implementation of <atomic> not the default?

It seems that for CMPXCHG16B to be used, one has to define _STD_ATOMIC_ALWAYS_USE_CMPXCHG16B = 1.
Why is this the default? I would never have found out about this had I not read the whole atomic.h header.
What other global defines in the STL are there? Is there a list to review so one can reliably be aware of these implementation details?
_STD_ATOMIC_ALWAYS_USE_CMPXCHG16B was recently introduced in Visual Studio 2019 (the PR)
Visual Studio 2019 still supports older OSes such as Windows Vista and Windows 7. These OSes can run on old AMD Opteron CPUs that don't have this instruction.
Even if _STD_ATOMIC_ALWAYS_USE_CMPXCHG16B = 0, there's runtime detection that uses CMPXCHG16B if it is available. But in this case the instruction is not inlined, and there's also a branch, so it is less efficient than defining _STD_ATOMIC_ALWAYS_USE_CMPXCHG16B = 1.
Please also note that CMPXCHG16B is used for atomic_ref, but not for atomic, due to ABI compatibility. (It was possible to introduce it for atomic_ref, since there was no pre-C++20 atomic_ref to be ABI-compatible with.)
In the vNext version (the next major, ABI-breaking version), atomic should use CMPXCHG16B as well. There's also hope that support for old CPUs/OSes will be dropped, and the use of CMPXCHG16B will become unconditional. (See https://github.com/microsoft/STL/issues/1151.)
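A quick way to observe the difference is to compare lock-freedom for a 16-byte type with and without the define; a minimal sketch, assuming MSVC on x64 with C++20 support enabled:

// Build with e.g.: cl /std:c++latest /EHsc check.cpp
// then again with /D_STD_ATOMIC_ALWAYS_USE_CMPXCHG16B=1 and compare.
#include <atomic>
#include <cstdio>

struct Pair { void* a; void* b; };   // 16 bytes on x64

int main()
{
    alignas(16) Pair p{};            // atomic_ref requires suitable alignment
    std::atomic_ref<Pair> ref(p);    // may use CMPXCHG16B...
    std::atomic<Pair> atm{};         // ...while atomic keeps the old ABI

    std::printf("atomic_ref lock-free: %d\n", (int)ref.is_lock_free());
    std::printf("atomic     lock-free: %d\n", (int)atm.is_lock_free());
}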
I would never have found out about this had I not read the whole atomic.h header.
What other global defines in the STL are there? Is there a list to review so one can reliably be aware of these implementation details?
I'm afraid there's no comprehensive list, although some are documented.
The excuse for _STD_ATOMIC_ALWAYS_USE_CMPXCHG16B in particular could be that atomic_ref as a whole is not documented, and as a C++20 feature it has experimental status in Visual Studio 2019.

Binary size grows 30% when upgrading from Visual Studio 2008 to Visual Studio 2013

I have to maintain a big, old code base (not written by me) with multiple projects, most of them in C++. One of my first steps was to upgrade the code base from VS 2008 to VS 2013.
Both solutions are set to optimize for size (in the release build). And yet, the binary size is now about 30% larger in almost all binaries, which I have a hard time explaining.
The projects use ATL heavily, and I know that ATL 9 moved to a static library, but I doubt this explains all of the size difference.
Any ideas on:
What explains the size difference? Is VS 2013 more secure, or does it perform better, due to this size change? (Looking for a "key point" to sell this switch.)
Ways to reduce the binary size, starting with low-hanging fruit and moving on to more elaborate work.
Under the assumption that you are linking the MFC statically:
Solution
Put
#define _AFX_NO_MFC_CONTROLS_IN_DIALOGS
at the top of your stdafx.h, or add _AFX_NO_MFC_CONTROLS_IN_DIALOGS to the preprocessor definitions in the project settings.
Explanation
MSVC 2010 contained a large number of new, extended controls (most of them related to ribbons, but also a CMFCButton and other things; there was also a feature pack for MSVC 2008). These new controls can be added to a dialog through the resource editor just like the old Windows controls.
In order to make this work, the code that parses your RC file[1] needs to know all the new MFC control classes. This is not a problem if you link the MFC dynamically, but if you link it statically, it means that all the shiny new parts of the MFC are linked into your application whether you use them or not. I had a binary triple in size because of this.
Fairly quickly, this turned out to be a bigger problem than the people at Microsoft had imagined; linking the MFC statically is apparently more common than they expected. Working around the problem in MSVC 2010 remains painful, but with the next version, a mechanism was introduced to disable the new functionality: the _AFX_NO_MFC_CONTROLS_IN_DIALOGS preprocessor macro. If it is defined before any inclusion of MFC headers, the RC parser code does not handle the new controls, and a dependency on them is not introduced. Note that this means that the new controls cannot be added to dialogs through the resource editor.
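A minimal sketch of the placement in a typical stdafx.h (the includes shown are just the usual MFC headers):

// stdafx.h - the macro must come before any MFC header
#define _AFX_NO_MFC_CONTROLS_IN_DIALOGS

#include <afxwin.h>   // MFC core and standard components
#include <afxext.h>   // MFC extensions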
A more detailed technical description of the problem and solution can be found in this MSDN blog post.
[1] Yes, I'm glossing over some detail here.

Major differences between Visual Studio 6.0 and VS 2010 Compilers

Some months ago I posted the following question
Problem with templates in VS 6.0
The ensuing discussion and your comments helped me to realize that getting my hands on a new compiler was mandatory - or basically they were the final spark which set me into motion. After one month of company-internal "lobbying" I am finally getting VS 2012 !! (thank you guys)
Several old tools which I have to use were developed with VS 6.0.
My concerns are that some of these tools might not work with the new compiler. This is why I was wondering whether somebody here could point out the major differences between VS 6 and VS 2012 - or at least the ones between VS 6 and VS 2010 - since the changes from 2010 to 2012 are well documented online.
Obviously the differences between VS 6.0 and VS 2012 must be enormous ... I am mostly concerned with basic things like casts etc. There is hardly any information about VS 6.0 on the web - and I am somewhat at a loss :(
I think I will have to create new projects with the same classes. In the second step I would overwrite the .h and .cpp files with the ones from the old tools. Thus at least I will be able to open the files via the new compiler. Still, some casts or class definitions might not be supported, and I would like to have a general idea of what to look for while debugging :)
The language has evolved significantly since VS 6.0 came out. VS 6.0 is pre-C++98; VS 2012 is C++03, with a few features from C++11. Most of the newer language features are upwards compatible; older code should still work. Still, VC 6.0 is pre-standard, and the committee was less concerned about breaking existing code when there was no previous standard (and implementations did vary). There are several aspects of the language (at least) which might cause problems.
The first is that VC 6.0 still used the old scoping for variables defined in a for statement. Thus, in VC 6.0, things like the following were legal:
int findIndex( int* array, int size, int target )
{
    for ( int i = 0; i < size && array[i] != target; ++i ) {
    }
    return i;
}
This will not compile in VC 2012 (unless there is also a global variable i, in which case it will return that, and not the local one).
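Under standard scoping rules, the variable has to be declared outside the for statement if it is needed afterwards; a conforming rewrite of the example:

int findIndex( int* array, int size, int target )
{
    int i;                        // declared outside the loop so that it
    for ( i = 0; i < size && array[i] != target; ++i ) {
    }
    return i;                     // valid in both VC 6.0 and VC 2012
}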
IIRC, too, VC 6.0 wasn't very strict in enforcing access controls and const. This may not be a problem when migrating, however, because VC 2012 still fails to conform to C++98 in some of the more flagrant cases, at least with the default options. (You can still bind a temporary to a non-const reference, for example.)
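For example, the following is ill-formed in ISO C++ but has historically been accepted by Visual C++ with the default options (a made-up illustration):

#include <string>

std::string makeName() { return "temp"; }

void example()
{
    std::string& bad = makeName();        // ISO C++: error, non-const reference
                                          // bound to a temporary; VC accepts it
    const std::string& good = makeName(); // fine: const reference extends the
                                          // temporary's lifetime
}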
Another major language change which isn't backwards compatible is name lookup in templates. Here too, however, even in VC 2012, Microsoft has implemented pre-standard name lookup (and I mean pre-C++98). This is a serious problem if you want to port your code to other compilers, but it does make migrating from VC 6.0 to VC 2012 a lot easier.
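A sketch of code that compiles under Microsoft's delayed (instantiation-time) name lookup but is rejected by a compiler doing standard two-phase lookup:

template <typename T>
void callHelper(T value)
{
    helper(value);    // two-phase lookup: candidates must be visible here
}                     // or found by ADL at instantiation; neither applies

void helper(int) {}   // declared only after the template

int main()
{
    callHelper(42);   // accepted with delayed lookup (old MSVC behavior);
}                     // an error under conforming two-phase lookup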
With regards to the library, I can't remember whether 6.0 supported the C++98 library, or whether it was still pre-standard (or possibly it supported both). If your code has things like #include <iostream.h> in it, be prepared for some differences here: minor for straightforward use of << and >>; major if you implement some complicated streambuf. And of course, all of the library was moved from the global namespace to std::.
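The typical mechanical change when migrating such code looks like this:

// Pre-standard (VC 6.0 era):
//   #include <iostream.h>
//   cout << "hello" << endl;

// Standard C++:
#include <iostream>

int main()
{
    std::cout << "hello" << std::endl;   // the library now lives in namespace std
}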
For the rest: your code obviously won't use any of the features introduced after VC 6.0 appeared. This won't cause migration problems (since the older features are still supported), but you'll doubtlessly want to go back and gradually upgrade the code once you've migrated. (You mentioned casts. This is a good example: C-style casts are still legal, with the same semantics they've always had, but in new code, you'll want to avoid them, at least when pointers or references are involved.)
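For instance, where old code uses a C-style cast, new code would prefer the named casts, which state the intent and are easy to search for:

void process(double d, const int* p)
{
    int n = (int)d;                  // C-style cast: still legal, hides intent
    int m = static_cast<int>(d);     // preferred: a checked, explicit conversion
    int* q = const_cast<int*>(p);    // preferred: removing const is visible
    (void)n; (void)m; (void)q;       // silence unused-variable warnings
}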

Is it time to say goodbye to VC6 compiler?

Of late I am facing issues that point the finger at the VC6 compiler.
A few of them are:
A function-try-block doesn't work. Related Q
An in-class constant doesn't work.
__FUNCTION__ (the macro to get the function name) doesn't work
The latest addition is that it doesn't allow void functions to be passed as part of for_each.
The example below does not compile with the VC6 compiler. It says "error C2562: '()' : 'void' function returning a value". It looks like VC6 doesn't like void functions being passed to for_each.
#include <algorithm>   // std::for_each
#include <functional>  // std::mem_fun_ref
#include <iostream>
#include <vector>

using namespace std;

class Temp
{
public:
    Temp(int i) : m_ii(i)
    {}
    int getI() const
    {
        return m_ii;
    }
    void printWithVoid()
    {
        cout << "i = " << m_ii << endl;
    }
    bool printWithBool()
    {
        cout << "i = " << m_ii << endl;
        return true;
    }
private:
    int m_ii;
};

int main(void)
{
    std::vector<Temp> arrTempObjects;
    arrTempObjects.push_back(Temp(0));
    arrTempObjects.push_back(Temp(2));
    // Does not work with VC6: error C2562
    std::for_each(arrTempObjects.begin(), arrTempObjects.end(), std::mem_fun_ref(&Temp::printWithVoid));
    // Works
    std::for_each(arrTempObjects.begin(), arrTempObjects.end(), std::mem_fun_ref(&Temp::printWithBool));
    return 0;
}
Have you faced any other issues related to VC 6.0? Are there any workarounds to resolve these issues, or is it time to change the compiler?
Quite frankly I can hardly understand why you wouldn't buy a modern computer and switch to Visual Studio 2008.
VC6 has a deficient STL, poor C++ standard compliance and an obsolete GUI.
You shouldn't let your competitors use better tools than you.
Well, here's the thing. The VC6 compiler sucks. However... the IDE is pretty good.
VS2005 has much better source control support. Otherwise, debugging is much slower, it has a crappy output pane that decays exponentially when inserting output lines (what absolute garbage coding is that?), the help system is many times slower, and edit and continue (possibly Microsoft's best feature over other IDEs) is considerably more broken.
.NET? Sure, VS20xx is the only way to go. However, for one small client that is sticking with VC6/MFC (for interfaces to embedded systems, etc) I actually enjoy working with VC6. It's just FAST.
2008? I'd like to... but it takes a while for my clients to migrate. Nobody has, yet.
Is it time to say goodbye to the VC6 compiler?
Yes.
VC6 cannot do much of any kind of modern C++. I recall trying to use one of the Boost libraries ages ago (probably Graph), and it was giving "INTERNAL COMPILER ERROR" all over the place, so eventually I chucked it in.
The no-brainer answer is yes, and ASAP. You have free alternatives like VC++ Express and Code::Blocks, if cost is an issue. The pain of solving compatibility issues is IMO no reason not to upgrade, because you will have to do it some day anyway, and it only gets harder.
The only reason I see for a possible obstacle is if you have MFC code that will be difficult/time-consuming to port. In that case you can't use VC++ Express (no support for MFC) and you have to make the investment in at least the VS Standard edition. That will cost you about EUR 300 (depending on where you live).
I changed from VC++ 6.0 to Code::Blocks (which is FOSS) with g++ a few months ago and haven't really looked back. I miss the VC++ debugger a bit, as the gdb implementation in CB is nowhere near as slick, but that's about all. Some things in the IDE work better (code completion, tooltips, dependency calculation) and the compiler is obviously much better.
Regarding your points, function try blocks are hardly a widely used feature, and most people think they are pretty useless. And the __FUNCTION__ macro is not part of the C++ Standard, so you shouldn't depend on it too much if portability is an issue.
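For reference, a function-try-block (the feature mentioned above) looks like the following sketch; its main legitimate use is catching exceptions thrown by member initializers:

#include <cstddef>
#include <new>

class Resource
{
public:
    explicit Resource(std::size_t size)
    try
        : m_buffer(new char[size])   // the try covers the member initializer
    {                                // list as well as the constructor body
    }
    catch (const std::bad_alloc&)
    {
        // log or translate here; the exception is rethrown automatically
    }

    ~Resource() { delete[] m_buffer; }

private:
    char* m_buffer;
};

int main()
{
    Resource r(64);   // on allocation failure, bad_alloc propagates to the caller
}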
No, it was time to say goodbye to it a decade ago. Here are a few reasons why:
There are free, standards-compliant compilers available, both from Microsoft and others
VC6 was written before the C++ language was standardized, and it is nowhere near standards compliant. Especially templates and the standard library live in a world of their own, with no tie to how these features actually work in ISO C++. The language it compiles is not C++. It is a hybrid of pre-standard C++, Microsoft extensions, compiler limitations and bugs. None of which is desirable.
VC6 is known to generate invalid code in some cases. Not only does it compile a home-made, buggy and nonstandard language, it also makes invalid optimizations causing crashes, or in some cases actually produces bad assembly that simply cannot be executed.
It is broken, and it was always broken. It was designed to compile a language that ceased existing about the same time as the compiler was released (when the language was standardized), and it failed even at that task, with countless bugs, some of which have been fixed in the half-dozen service packs that were released. But not all of them, and not even all the critical ones.
Of course, the downside to this is that your application is most likely just as broken (not because you're bad programmers, but because it targets a broken compiler; it has to be broken to be accepted by VC6).
Porting that to a standards-compliant compiler is likely to be a lot of work. Don't assume that you can just import your old projects, click "build", and it'll work.
So if you're part of a big business that can't just take a month off to switch compilers, you might have to port it as a side project, while part of the team is maintaining the VC6 version. Don't scrap VC6 until you've successfully ported everything, and it works.
Unless you have a large program to maintain, yes. Switch today!
The Express versions of VC++ are a free download from Microsoft.
I guess this is why so many applications on Windows suck: people still use VC6. Why mess with the broken, never-maintained MFC, or even Win32, when there are wxWidgets and Qt4 out there, way better than MFC could ever be, and you can even use the free editions of Visual Studio 2005+?
You can learn to live with VC6s foibles. It almost has a certain retro charm these days. We've been repeatedly providing "just one last VC6 release" of some libraries to a customer for years now. Hard to argue with a customer prepared to pay for the extra work backporting and maintaining a branch. But at some point the cost for us to backport newer features developed in newer VCs will exceed the cost of them upgrading at their end (especially as more boost and Intel TBB creeps into the codebase's head). Or at least I hope that's what'll happen! Worst case it'll happen just as flaky C++0x support appears and we'll be stuck supporting that for them for 10 years...
General rule seems to be that a new version is an upgrade and is thus worthwhile.
However, you have to pick the right time for it: many bugs are fixed, but you then need to be aware of the new bugs and variations from the standard.
Set aside time for the upgrade.
Upgrading compiler versions could well be a project in its own right; make sure you have stable code and good tests before you do an upgrade, and when you finish, prove that it is still working the same.
You may be forced to upgrade when you start to develop for Vista, as VC6 doesn't provide for code signing easily and the redist is not in a form that Vista likes (you want at least VC2K5).
Are you updating the OS any time soon? When I researched moving our apps to Vista, I found that Vista doesn't officially support anything before VS 2005 (except for VB 6), and has a page-long list of possible little problems with VS 2005 that may or may not bite you. I recommended delaying until VS 2008 SP1 was available (i.e., when VS 2008 was really usable), and doing the compiler changeover first.
If the project is a special one for a few customers who run it solely on old NT machines, you may want to keep it at VS 6. If you are selling it for any sort of general consumption, you will need to make it Vista-compatible at some point (or 7-compatible, or whatever), and you will need to upgrade.