Difference between class_name object_name(), class object_name = class_name() and class_name object [duplicate] - c++

On Wikipedia I found this:
A a( A() );
[This] could be disambiguated either as
a variable definition of class [A], taking an anonymous instance of class [A] or
a function declaration for a function which returns an object of type [A] and takes a single (unnamed) argument which is a function returning type [A] (and taking no input).
Most programmers expect the first, but the C++ standard requires it to be interpreted as the second.
But why? If the majority of the C++ community expects the former behavior, why not make it the standard? Besides, the above syntax is consistent if you don't take into account the parsing ambiguity.
Can someone please enlighten me? Why does the standard make this a requirement?

Let's say the MVP (the "most vexing parse") didn't exist.
How would you declare a function?
A foo();
would be a variable definition, not a function declaration. Would you introduce a new keyword? Would you have a more awkward syntax for a function declaration? Or would you rather have
A foo;
define a variable and
A foo();
declare a function?
Your slightly more complicated example is just for consistency with this basic one. It's easier to say "everything that can be interpreted as a declaration, will be interpreted as a declaration" rather than "everything that can be interpreted as a declaration, will be interpreted as a declaration, unless it's a single variable definition, in which case it's a variable definition".
This probably wasn't the original motivation, but it is a reason the rule is a good thing.

For C++, it's pretty simple: because the rule was made that way in C.
In C, the ambiguity only arises with a typedef and some fairly obscure code. Almost nobody ever triggers it by accident -- in fact, it probably qualifies as rare except in code designed specifically to demonstrate the possibility. For better or worse, however, the mere possibility of the ambiguity meant somebody had to resolve it -- and if memory serves, it was resolved by none other than Dennis Ritchie, who decreed that anything that could be interpreted as a declaration would be a declaration, even if it could also be interpreted as an expression.
C++ added the ability to use parentheses for initialization, in addition to their existing roles in function calls and grouping, and this moved the ambiguity from obscure to common. Changing it, however, would have meant breaking the rule as it came from C. Resolving this particular ambiguity as most people would expect, without creating half a dozen more ambiguities that were even more surprising, would probably have been fairly non-trivial as well, unless you were willing to throw away compatibility with C entirely.

This is just a guess, but it may be due to the fact that with the given approach you can get both behaviors:
A a( A() ); // this is a function declaration
A a( (A()) ); // this is a variable definition
If you were to change its behavior to be a variable definition, then function declarations would be considerably more complex.
typedef A subfunction_type();
A a( A() ); // this would be a variable definition
A a( subfunction_type ); // this would be a function declaration??

It's a side-effect of the grammar being defined recursively.
It was not designed intentionally like that. It was discovered and documented as the most vexing parse.

Consider if the program were like so:
typedef struct A { int m; } A;
int main() { A a( A() ); }
This would be valid C, and there is only one possible interpretation allowed by the grammar of C: a is declared as a function. C only allows initialization using = (not parentheses), and does not allow A() to be interpreted as an expression. (Function-style casts are a C++-only feature.) This is not a "vexing parse" in C.
The grammar of C++ makes this example ambiguous, as Wikipedia points out. However, if you want C++ to give this program the same meaning as C, then, obviously, C++ compilers are going to have to interpret a as a function just like C compilers. Sure, C++ could have changed the meaning of this program, making a the definition of a variable of type A. However, incompatibilities with C were introduced into C++ only when there was a good reason to do it, and I would imagine that Stroustrup particularly wanted to avoid potentially silent breakages such as this, as they would cause great frustration for C users migrating to C++.
Thus, C++ interprets it as a function declaration too, and not a variable definition; and more generally, adopted the rule that if something that looks like a function-style cast can be interpreted as a declaration instead in its syntactic context, then it shall be. This eliminates potential for incompatibility with C for all vexing-parse situations, by ensuring that the interpretation that is not available in C (i.e. the one involving a function-style cast) is not taken.
Cfront 2.0 Selected Readings (page 1-42) mentions the C compatibility issue in the case of expression-declaration ambiguity, which is a related type of most vexing parse.

No particular reason, other than [possibly] the case that K-ballo identifies.
It's just legacy. There was already the int x; declaration form, so it never seemed like a stretch to require T x; when no constructor arguments are in play.
In hindsight I'd imagine that if the language were designed from scratch today, then the MVP wouldn't exist... along with a ton of other C++ oddities.
Recall that C++ evolved over decades and, even now, is designed only by committee (see also: camel).

Related

C++ Default and Copy Constructor [duplicate]


Anonymous callable object is not executed by a std::thread [duplicate]


Why is C++'s void type only half-heartedly a unit type?

C++'s void type is not uninhabited. The problem is that while it has precisely one inhabitant, very much like the Unit type (a.k.a. ()) in ML-like languages, that inhabitant cannot be named or passed around as an ordinary value. For example, the following code fails to compile:
void foo(void a) { return; }
void bar() { foo(foo()); }
whereas equivalent (say) Rust code would compile just fine:
fn foo(a : ()) { return; }
fn bar() { foo(foo(())); }
In effect, void is like a unit type, but only half-heartedly so. Why is this the case?
Does the C++ standard explicitly state that one cannot create values of type void? If yes, what is the rationale behind this decision? If not, why does the code above not compile?
If it is some backwards-compatibility related reason, please give a code example.
To be clear, I'm not looking for work-arounds to the problem (e.g. using an empty struct/class). I want to know the historical reason(s) behind the status quo.
EDIT: I've changed the syntax in the code examples slightly to make it clear that I'm not trying to hijack existing syntax like void foo(void) (consequently, some comments may be out of date). The primary motivation behind the question is "why is the type system not like X" and not "why does this bit of syntax not behave as I'd like it to". Please keep this point in mind if you're writing an answer talking about breaking backwards compatibility.
"Does the C++ standard explicitly state that one cannot create values of type void?"
Yes. It states that void is an incomplete type and cannot be completed. You can't create objects or values with an incomplete type.
This is an old rule; as the comments note it's inherited from C. There are minor extensions in C++ to simplify the writing of generic code, e.g. void f(); void g() { return f(); } is legal.
There seems to be little gain in changing the status quo. C++ is not an academic language. Purity is not a goal; writing useful programs is. How would such a proposal help with that? To quote Raymond Chen, every proposal starts at minus 100 points and has to justify its inclusion; the absence of a feature needs no justification.
That is really a historical question. Older (pre-C) languages used to differentiate functions, which returned values, from subroutines, which did not (ooh, the good old taste of Fortran IV and Basic...). AFAIK, early C only allowed functions: functions simply returned int by default, it was legal to have no return statement (meaning the function returned an unspecified value), and it was legal to ignore any return value -- it was up to the programmer to write coherent code. In those early days, C was used more or less as a powerful macro assembler, and anything was allowed provided the compiler could translate it into machine instructions (no strict aliasing rule, for example...). As the memory unit was char, there was no need for a void * pointer; char * was enough.
Then people felt the need to make clear that a buffer was expected to contain anything rather than a character string, and that some functions would never return a value. And void came to fill the gap.
The drawback is that when you declare a void function, you declare what used to be called a subroutine, that is, something that can never be used as a value, and in particular never be passed as a function argument. So void is not only a special type that can never be instantiated; it really declares that the result cannot be part of an expression.
And because of language inheritance, and because the C standard library is still a subset of the C++ standard one, C++ still processes void the way ANSI C did.
Other languages can use different conventions. In Python, for example, a function always returns something; it simply returns the special None value if no return statement is encountered. And Rust seems to have yet another convention.

What is the purpose of the Most Vexing Parse?


Pure functions in C++11

Can one, in C++11, somehow mark a function (not a class method) in GCC as const, telling the compiler that it is pure and does not use global memory but only its arguments?
I've tried gcc's __attribute__((const)) and it is precisely what I want. But it does not produce any compile time error when the global memory is touched in the function.
Edit 1
Please be careful. I mean pure functions. Not constant functions. GCC's attribute is a little bit confusing. Pure functions only use their arguments.
Are you looking for constexpr? This tells the compiler that the function may be evaluated at compile time. A constexpr function must have literal return and parameter types, and the body can only contain static asserts, typedefs, using declarations and directives, and one return statement. A constexpr function may be called in a constant expression.
constexpr int add(int a, int b) { return a + b; }
int x[add(3, 6)];
Having looked at the meaning of __attribute__((const)), the answer is no, you cannot do this with standard C++. Using constexpr will achieve the same effect, but only on a much more limited set of functions. There is nothing stopping a compiler from making these optimizations on its own, however, as long as the compiled program behaves the same way (the as-if rule).
Because it has been mentioned a lot here, let's forget about template metaprogramming for now, which is purely functional anyway and off topic. However, a constexpr function foo can be called with non-constexpr arguments, and in this context foo is actually a pure function evaluated at runtime (I am ignoring global variables here). But you can write many pure functions that you cannot make constexpr; this includes any function throwing exceptions, for example.
Second I assume the OP means marking pure as an assertion for the compiler to check. GCC's pure attribute is the opposite, a way for the coder to help the compiler.
While the answer to the OP's question is NO, it is very interesting to read about the history of attempts to introduce a pure keyword (or impure and let pure be the default).
The d-lang community quickly figured out that the meaning of "pure" is not clear. Logging should not make a function impure. Mutable variables that do not escape the function call should be allowed in pure functions. Equal return values having different addresses should not be considered impure. But D goes even further than that in stretching purity.
So the d-lang community introduced the term "weakly pure" and "strongly pure". But later disputes showed that weak and strong is not black and white and there are grey zones. see purity in D
Rust introduced the "pure" keyword early on; and they dropped it because of its complexity. see purity in Rust.
Among the great benefits of a "pure" keyword there is an ugly consequence though. A templated function can be pure or not depending on its type parameters. This can explode the number of template instantiations. Those instantiations may only need to exist temporarily in the compiler and not get into the executable but they can still explode compile times.
A syntax highlighting editor could be of some help here without modifying the language. Optimizing C++ compilers do actually reason about the pureness of a function, they just do not guarantee catching all cases.
I find it sad that this feature seems to have low priority. It makes reasoning about code so much easier. I would even argue that it would improve software design by incentivizing programmers to think differently.
using just standard C++11:
namespace g { int x; }

constexpr int foo()
{
    //return g::x = 42; // nah, not constant
    return 42;          // OK
}

int main()
{}
here's another example:
constexpr int foo( int blah = 0 )
{
    return blah + 42; // OK
}

int main( int argc, char** )
{
    int bah[foo(2)];               // Very constant.
    int const troll = foo( argc ); // Very non-constant.
}
The meaning of GCC's __attribute__( const ) is documented in the GNU compiler docs as …
Many functions do not examine any values except their arguments, and have no effects except the return value. Basically this is just slightly more strict class than the pure attribute below, since function is not allowed to read global memory.
One may take that to mean that the function result should only depend on the arguments, and that the function should have no side effects.
This allows a more general class of functions than C++11 constexpr, which makes the function inline, restricts arguments and function result to literal types, and restricts the "active" statements of the function body to a single return statement, where (C++11 §7.1.5/3)
— every constructor call and implicit conversion used in initializing the return value (6.6.3, 8.5) shall be one of those allowed in a constant expression (5.19)
As an example, it is difficult (I would think not impossible, but difficult) to make a constexpr sin function.
But the purity of the result matters only to two parties:
When known to be pure, the compiler can elide calls with known results.
This is mostly an optimization of macro-generated code. Replace macros with inline functions to avoid silly generation of identical sub-expressions.
When known to be pure, a programmer can remove a call entirely.
This is just a matter of proper documentation. :-)
So instead of looking for a way to express the purity of e.g. sin in the language, I suggest just avoid code generation via macros, and document pure functions as such.
And use constexpr for the functions where it's practically possible (unfortunately, as of Dec. 2012 the latest Visual C++ compiler doesn't yet support constexpr).
There is a previous SO question about the relationship between pure and constexpr. Mainly, every constexpr function is pure, but not vice versa.