Does Visual Studio 2010 perform zero-initialization? - c++

C++11 allows initializing a value with zero using the expression T(); (http://en.cppreference.com/w/cpp/language/zero_initialization). Is this feature supported by Visual Studio 2010? I ran some experiments comparing T x; with T x = T(); and I concluded that the latter case does initialize the value with zero, but I am not sure whether I can rely on that.
Is zero initialization mentioned anywhere in the VS2010 documentation? The VS2010 Initializers page (https://msdn.microsoft.com/en-us/library/w7wd1177(v=vs.100).aspx) does not mention it, unlike pages for later versions, e.g., VS2013 (https://msdn.microsoft.com/en-us/library/w7wd1177(v=vs.120).aspx).

This is a fundamental behaviour of the language that has been present since the beginning. It was not introduced in C++11 (on that cppreference page, note that "since C++11" is aligned with only the third of the examples under usage (2); granted, it's not very clear).
If T were int and T() did not result in a temporary int of value zero (and this is zero-initialisation via value-initialisation), the compiler would have a very serious bug. I am sure that Visual Studio does not have this bug.
As for proof, the VS2010 docs do not seem to mention this behaviour in the same place the standard mentions it (i.e. under the explicit type conversion expression section). It is certainly possible that their documentation has changed/evolved/become more thorough over time, though, particularly as C++ itself added more and more ways to initialise things.
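For a quick check (a minimal sketch; the asserts are only for demonstration), value-initialisation via T() has produced zero for scalar types since well before C++11, so the following should compile and pass on VS2010 as well:

#include <cassert>

template <typename T>
T zeroed() { return T(); } // value-initialisation: scalars become zero

int main()
{
    int    i = int();    // temporary int of value 0
    double d = double(); // temporary double of value 0.0
    assert(i == 0);
    assert(d == 0.0);
    assert(zeroed<int>() == 0);
    return 0;
}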

Related

What's the MSVC equivalent for -fno-char8_t?

In C++20 u8 string literals are based on the char8_t type. They deliberately do not convert to char const* any more:
const char* str = u8"Hall\u00f6chen \u2603"; // no longer valid in C++20
Of course, the ultimate goal when migrating to C++20 is to entirely go with the new behaviour (in the example above: change the type of str). However, because of 3rd party libraries, this is often not possible immediately.
The proposals that introduce and "remedy" char8_t anticipate that and mention that in clang and gcc there is the -fno-char8_t flag to switch back to the old behaviour (while still being able to enjoy other C++20 features).
The 2nd proposal sets up the expectation that Microsoft will follow and add a similar flag, but I was not able to find how to set it (at least in VS 2019, Version 16.4).
So does anyone know what the MSVC equivalent for -fno-char8_t is?
Since Visual Studio 2019 version 16.1, there is the conformance compiler flag /Zc:char8_t-. The trailing minus tells the compiler not to use conformance mode for char8_t when compiling as C++20. Conversely, /Zc:char8_t can be used to enable it explicitly.
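For illustration, a minimal sketch (assuming a compiler version recent enough to accept /std:c++20; older 16.x versions use /std:c++latest instead):

// cl /std:c++20 /Zc:char8_t- example.cpp
#include <iostream>

int main()
{
    const char* str = u8"Hall\u00f6chen \u2603"; // char-based again, as in C++17
    std::cout << str << '\n';
    return 0;
}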

What is the exact meaning of anachronism in coding(C++)?

I am using Visual Studio 2017. In C++, I tried to assign a pointer to the 'this' pointer. It produced the compiler error "assignment to 'this' (anachronism)". An anachronism is something placed in a period where it doesn't belong, like a Roman emperor using a computer. Is the compiler message using the word in that sense, or does "anachronism" have a specific meaning in programming?
A good while ago, the this pointer could be assigned to. I encountered such assignments in the code of the Cfront compiler and wrote about them in this note: Celebrating the 30-th anniversary of the first C++ compiler: let's find the bugs in it. Examples:
expr.expr(TOK ba, Pexpr a, Pexpr b)
{
    register Pexpr p;
    if (this) goto ret;
    ....
    this = p;
    ....
}

inline toknode.~toknode()
{
    next = free_toks;
    free_toks = this;
    this = 0;
}
An anachronism is something that was acceptable a long time ago but not any more.
Starting from C++98, this is an rvalue and as such cannot be assigned to (i.e. it cannot appear on the left of an assignment operator).
See §9.3.2 The this pointer:
In the body of a nonstatic (9.3) member function, the keyword this is a non-lvalue expression whose value is the address of the object for which the function is called.
Starting from C++11, this is specified as a prvalue.
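To illustrate (a minimal sketch), any attempt to assign to this is rejected by a conforming compiler today:

struct Node
{
    Node* next;
    void detach()
    {
        next = 0;
        // this = 0; // error: 'this' is a prvalue; assignment is ill-formed
    }
};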
From google:
a thing belonging or appropriate to a period other than that in which it exists, especially a thing that is conspicuously old-fashioned.
Which basically means that assigning to this used to be allowed in the past (probably by pre-standard compilers) but isn't allowed any more.
This question piqued my curiosity, so I dug into my copies of all the ratified C++ standards and technical corrigenda I have (which works out to everything between C++98 and C++17 inclusive). None of them contain the word "anachronism" in any form.
The ARM (The C++ Annotated Reference Manual) by Ellis and Stroustrup, 1990 (a base document written to guide development of the C++ standard) has Section 18.3 entitled "Anachronisms". The first paragraph of that section says
The extensions provided here may be provided by an implementation to ease the use of C programs as C++ programs or to provide continuity from earlier C++ implementations. Note that each of these features has undesirable aspects. An implementation providing them should also provide a way for the user to ensure that they do not occur in a source file. A C++ implementation is not obliged to provide these features.

Type conversion when assigning a double to an int array

I'm learning C++ and have run into something I'd like to know more about.
Say I try to declare and initialize an array as:
int myarray[] = {1, 2, 3, 4.1};
Note the non-integer 4.1 at index 3. In Visual Studio, this will compile with a warning, but in gcc it will fail to compile. Does the standard (for 11, 14, or 17) have anything to say about automatic conversion when assigning the wrong type to an element of an array, or is it left to the compiler to decide what happens (or something else)? I'd like to find out why the results are different.
Pre-C++11, the code was legal. Since C++11, it is considered a narrowing conversion (converting double to int can lose information) and is illegal in list-initialization.
Which compiler, and which version of the compiler, you use matters. Visual Studio, especially slightly older versions, doesn't actually implement any standard in full; it implements bits of C++11, bits of C++14, and bits of C++17. That may have changed with the newest versions; I haven't followed its evolution lately.
Really old versions of gcc default to gnu++98, which is C++98 with GNU extensions, while newer versions default to gnu++11 and gnu++14. If you are curious and want to see the code compile in gcc, use -std=c++98 and you will only get a warning.
In hindsight we have learned that allowing implicit conversions that can lose information (e.g. from floating point to integer, or from long long to int) was not a good idea. C++11 took this into consideration. Making all such implicit conversions illegal was deemed too much of a breaking change, but the introduction of list-initialization offered a middle ground: disallow narrowing conversions only in list-initialization, and recommend list-initialization as the de facto way of initializing an object. The downside was that this broke existing array-initialization code, but the benefits were judged to outweigh the cost.
int a = 2.4; // still allowed, but not recommended
int a(2.4); // still allowed, but not recommended
int a = {2.4}; // New way to initialize. Narrowing conversion, illegal
int a{2.4}; // New and recommended way to initialize. Narrowing conversion, illegal
You can read more about narrow conversions here: https://en.cppreference.com/w/cpp/language/list_initialization
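If the truncation is actually intended, an explicit cast sidesteps the narrowing rule (a minimal sketch):

int myarray[] = {1, 2, 3, static_cast<int>(4.1)}; // explicit conversion: fine in every standard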

Why can't constexpr just be the default?

constexpr permits expressions which can be evaluated at compile time to be ... evaluated at compile time.
Why is this keyword even necessary? Why not permit or require that compilers evaluate all expressions at compile time if possible?
The standard library has an uneven application of constexpr which causes a lot of inconvenience. Making constexpr the "default" would address that and likely improve a huge amount of existing code.
It already is permitted to evaluate side-effect-free computations at compile time, under the as-if rule.
What constexpr does is provide guarantees on what data-flow analysis a compliant compiler is required to do to detect[1] compile-time-computable expressions, and also allow the programmer to express that intent so that they get a diagnostic if they accidentally do something that cannot be precomputed.
Making constexpr the default would eliminate that very useful diagnostic ability.
[1] In general, requiring "evaluate all expressions at compile time if possible" is a non-starter, because detecting the "if possible" requires solving the Halting Problem, and computer scientists know that this is not possible in the general case. So instead a relaxation is used where the outputs are { "Computable at compile-time", "Not computable at compile-time or couldn't decide" }. And the ability of different compilers to decide would depend on how smart their test was, which would make this feature non-portable. constexpr defines the exact test to use. A smarter compiler can still pre-compute even more expressions than the Standard test dictates, but if they fail the test, they can't be marked constexpr.
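As a small illustration of that diagnostic value (a minimal sketch; rng is a hypothetical non-constexpr function):

constexpr int twice(int x) { return x + x; }
static_assert(twice(21) == 42, "guaranteed compile-time evaluation");

int rng(); // declared elsewhere; not constexpr
// static_assert(rng() == 0, "..."); // error: rng() is not usable in a
//                                   // constant expression; diagnosed here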
Note: despite the below, I admit to liking the idea of making constexpr the default. But you asked why it wasn't already done, so to answer that I will simply elaborate on mattnewport's last comment:
Consider the situation today. You're trying to use some function from the standard library in a context that requires a constant expression. It's not marked as constexpr, so you get a compiler error. This seems dumb, since "clearly" the ONLY thing that needs to change for this to work is to add the word constexpr to the definition.
Now consider life in the alternate universe where we adopt your proposal. Your code now compiles, yay! Next year you decide to add Windows support to whatever project you're working on. How hard can it be? You'll compile with Visual Studio for your Windows users and keep using gcc for everyone else, right?
But the first time you try to compile on Windows, you get a bunch of compiler errors: this function can't be used in a constant expression context. You look at the code of the function in question, and compare it to the version that ships with gcc. It turns out that they are slightly different, and that the version that ships with gcc meets the technical requirements for constexpr by sheer accident, and likewise the one that ships with Visual Studio does not meet those requirements, again by sheer accident. Now what?
No problem, you say: I'll submit a bug report to Microsoft: this function should be fixed. They close your bug report: the standard never says this function must be usable in a constant expression, so we can implement it however we want. So you submit a bug report to the gcc maintainers: why didn't you warn me I was using non-portable code? And they close it too: how were we supposed to know it's not portable? We can't keep track of how everyone else implements the standard library.
Now what? No one did anything really wrong. Not you, not the gcc folks, nor the Visual Studio folks. Yet you still end up with non-portable code and are not a happy camper at this point. All else being equal, a good language standard will try to make this situation as unlikely as possible.
And even though I used an example of different compilers, it could just as well happen when you try to upgrade to a newer version of the same compiler, or even try to compile with different settings. For example: the function contains an assert statement to ensure it's being called with valid arguments. If you compile with assertions disabled, the assertion "disappears" and the function meets the rules for constexpr; if you enable assertions, then it doesn't meet them. (This is less likely these days now that the rules for constexpr are very generous, but was a bigger issue under the C++11 rules. But in principle the point remains even today.)
Lastly we get to the admittedly minor issue of error messages. In today's world, if I try to do something like stick a cout statement in a constexpr function, I get a nice, simple error right away. In your world, we would have the same situation we have with templates: deep stack traces all the way down to the very bottom of the implementation of output streams. Not fatal, but surely annoying.
This is a year and a half late, but I still hope it helps.
As Ben Voigt points out, compilers are already allowed to evaluate anything at compile time under the as-if rule.
What constexpr also does is lay out clear rules for expressions that can be used in places where a compile time constant is required. That means I can write code like this and know it will be portable:
constexpr int square(int x) { return x * x; }
...
int a[square(4)] = {};
...
Without the keyword and clear rules in the standard I'm not sure how you could specify this portably and provide useful diagnostics on things the programmer intended to be constexpr but don't meet the requirements.

Compiler choosing prefix ++ when postfix is missing - who says?

When you define a prefix operator++ for your user defined type and you don't provide a postfix version, the compiler (in Visual C++ at least) will use the PREFIX version when your code calls the missing POSTFIX version.
At least it will give you a warning. But, my question is: Why doesn't it just give you an error for the undefined member function?
I have seen this first hand, and have seen it mentioned in another post and elsewhere, but I cannot find this in the actual C++ standard. My second and third questions are... Is it in the standard somewhere? Is this a Microsoft-specific handing of the situation?
Actually, in this case MSVC behaves much more intelligently than GCC.
This is an MSVC compiler extension, and the C++ standard explicitly allows for such behavior.
C++ Standard:
Section 1.4/8:
A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this International Standard. Having done so, however, they can compile and execute such programs.
In this case, MSVC appropriately diagnoses the problem that the postfix operator is not available, and it defines specific warnings for it:
Compiler Warning (level 1) C4620
Compiler Warning (level 1) C4621
Also, it provides a facility to disable MSVC-specific extensions by using /Za. Overall, I would say this is one of the instances where MSVC actually behaves better than GCC.
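For completeness, here is a minimal sketch of the canonical way to provide both forms, so that no compiler extension ever has to kick in:

struct Counter
{
    int value;
    Counter() : value(0) {}

    Counter& operator++()      // prefix: increment, then return *this
    {
        ++value;
        return *this;
    }

    Counter operator++(int)    // postfix: save a copy, increment, return the copy
    {
        Counter old = *this;
        ++value;
        return old;
    }
};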
It should be a Microsoft-specific extension, because g++, at least, is strict about prefix and postfix operators. Here is the demo link.
With integers, ++i is different from i++. For example, with i = 5, y = ++i gives y == 6, while y = i++ gives y == 5. Other types should behave in the same manner; the behaviour therefore genuinely differs between prefix and postfix.
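In plain code (a minimal sketch):

int i = 5;
int y = ++i; // prefix: i becomes 6 first, so y == 6
i = 5;
y = i++;     // postfix: y gets the old value 5, then i becomes 6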
It is a Microsoft thing and, in my opinion, the compiler's implementation is incorrect.