Why can't templates take function-local types?

In C++ it's OK to have a function that takes a function-local type:
int main() {
    struct S { static void M(const S& s) { } };
    S s;
    S::M(s);
}
but not OK to have a template that does:
template<typename T> void Foo(const T& t) { }

int main() {
    struct S { } s;
    Foo(s); // error: no matching function for call to 'Foo(main()::S&)'
}
Section 14.3.1 paragraph 2 of the C++ standard says:
A type with no linkage [...] shall not be used as a template-argument for a template type-parameter
Why does C++ disallow that?
The best explanation I've heard so far is that inner types have no linkage, and that this could imply that a function taking them as an argument must also have no linkage. But I see no reason why a template instantiation must have linkage.
P.S. Please don't just say "that's not allowed because the standard says it's not".

I believe the difficulty that was foreseen was with two instantiations of Foo<T> actually meaning entirely different things, because T wasn't the same for both. Quite a few early implementations of templates (including cfront's) used a repository of template instantiations, so the compiler could automatically instantiate a template over a required type when/if it was found that an instantiation over that type wasn't already in the repository.
To make that work with local types, the repository couldn't just store the type over which the template was instantiated; instead, it would have to do something like creating a complete "path" to the type for the instantiation. While that's probably possible, I think it was seen as a lot of extra work for little (if any) real benefit.
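A hedged sketch of that ambiguity (this only compiles as C++11 or later, which is rather the point; size_of is an illustrative name):

template<typename T> int size_of() { return static_cast<int>(sizeof(T)); }

int f() { struct S { int x; };    return size_of<S>(); } // instantiated over f's S
int g() { struct S { double y; }; return size_of<S>(); } // same spelling "S", but an
                                                         // entirely different type and
                                                         // a distinct instantiation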
Since then, the rules have changed enough that the compiler is already required to do something just about equivalent: finding (and coalescing) instantiations over the same type at different places (including across TUs) so that two instantiations of Foo<int> (for example) don't violate the ODR. Based on that realization, the restriction was loosened in C++0x drafts and ultimately dropped in C++11, which allows local types as template arguments.
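For illustration, the original example from the question compiles unchanged when built as C++11 or later:

template<typename T> void Foo(const T& t) { }

int main() {
    struct S { } s;
    Foo(s); // OK in C++11: local types may be used as template arguments
}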

I'm guessing it is because it would require the template to be effectively instantiated within the scope of the function, since that is where such types are visible. However, at the same time, template instantiations are supposed to act as if they were in the scope in which the template is defined. I'm sure it's possible to deal with that somehow, but if I'm right, the standards body decided not to put that burden on compiler writers.
A similar decision was the reason vector<vector<int>> was invalid syntax per the standard; detecting that construction requires some interaction between the compiler's lexer and parser phases. However, that's changing, because the C++0x standards folk found that all the compilers were detecting it anyway in order to emit sane error messages.
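For reference, the construction in question:

#include <vector>

std::vector<std::vector<int> > a; // C++03: the space is required; '>>' lexes as a right-shift token
std::vector<std::vector<int>> b;  // C++11: '>>' may close two template argument lists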
I suspect that if it were to be demonstrated that allowing this construction was trivial to implement, and that it didn't introduce any ambiguities in the language scoping rules, you might someday see the standard changed here too.

Related

Is constexpr really needed in C++ in general?

C++11 allows functions declared with the constexpr specifier to be used in constant expressions such as template arguments. There are stringent requirements about what is allowed to be constexpr; essentially such a function encapsulates only one subexpression and nothing else. (Edit: this is relaxed in C++14 but the question stands.)
Why require the keyword at all? What is gained?
It does help in revealing the intent of an interface, but it doesn't validate that intent by guaranteeing that the function actually is usable in constant expressions. After writing a constexpr function, a programmer must still:
Write a test case or otherwise ensure it's actually used in a constant expression.
Document what parameter values are valid in a constant expression context.
Contrary to revealing intent, decorating functions with constexpr may add a false sense of security since tangential syntactic constraints are checked while ignoring the central semantic constraint.
In short: Would there be any undesirable effect on the language if constexpr in function declarations were merely optional? Or would there be any effect at all on any valid program?
Preventing client code from expecting more than you're promising
Say I'm writing a library and have a function in there that currently returns a constant:
awesome_lib.hpp:
inline int f() { return 4; }
If constexpr weren't required, you - as the author of client code - might go away and do something like this:
client_app.cpp:
#include <awesome_lib.hpp>
#include <array>
std::array<int, f()> my_array; // needs a compile-time template argument
int my_c_array[f()];           // needs a compile-time array dimension
Then, should I change f() to, say, return a value from a config file, your client code would break, but I'd have no idea that I'd risked breaking it. Indeed, it might only be when you have some production issue and go to recompile that you discover this additional problem frustrating your rebuild.
By changing only the implementation of f(), I'd have effectively changed the usage that could be made of the interface.
Instead, C++11 onwards provides constexpr so I can denote that client code can have a reasonable expectation of the function remaining constexpr, and use it as such. I'm aware of, and endorsing, such usage as part of my interface. The compiler guarantees that client code can't come to depend on constant-expression use of functions I haven't marked constexpr, which prevents the "unwanted/unknown dependency" scenario above; that's more than documentation - it's compile-time enforcement.
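Under this model the library author opts in explicitly; a minimal sketch of the header with the promise spelled out:

// awesome_lib.hpp - the constant-expression guarantee is now part of the interface
constexpr int f() { return 4; }

Removing constexpr later is then a visible, deliberate interface break rather than a silent implementation detail.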
It's noteworthy that this continues the C++ trend of offering better alternatives for traditional uses of preprocessor macros (consider #define F 4, and how the client programmer knows whether the lib programmer considers it fair game to change to say #define F config["f"]), with their well-known "evils" such as being outside the language's namespace/class scoping system.
Why isn't there a diagnostic for "obviously" never-const functions?
I think the confusion here is due to constexpr not proactively ensuring there is any set of arguments for which the result is actually a compile-time constant: rather, it requires the programmer to take responsibility for that (otherwise §7.1.5/5 in the Standard deems the program ill-formed but doesn't require the compiler to issue a diagnostic). Yes, that's unfortunate, but it doesn't remove the above utility of constexpr.
So, perhaps it's helpful to switch from the question "what's the point of constexpr" to consider "why can I compile a constexpr function that can never actually return a const value?".
Answer: because it would require exhaustive branch analysis over any number of argument combinations. That could be excessively costly in compile time and/or memory - even beyond the capability of any imaginable hardware. Further, even when it is practical, diagnosing such cases accurately is a whole new can of worms for compiler writers (who have better uses for their time). There would also be implications for the program, such as the definitions of functions called from within the constexpr function (and the functions those call, and so on) needing to be visible when the validation was performed.
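A hedged sketch of the kind of function that §7.1.5/5 declares ill-formed with no diagnostic required (runtime_only is a hypothetical name):

int runtime_only(); // not constexpr
constexpr int f() { return runtime_only(); } // no argument values can ever yield a
                                             // constant expression: ill-formed,
                                             // no diagnostic required

A compiler is free to reject this at the definition, or to stay silent until f() is used where a constant expression is required.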
Meanwhile, lack of constexpr continues to forbid use as a const value: the strictness is on the sans-constexpr side. That's useful as illustrated above.
Comparison with non-`const` member functions
lack of constexpr prevents int x[f()], just as lack of const prevents const X x; x.f(); - both ensure client code doesn't hardcode an unwanted dependency (a sketch of the const analogue follows this list)
in both cases, you wouldn't want the compiler to determine const[expr]-ness automatically:
you wouldn't want client code to call a member function on a const object when you can already anticipate that function will evolve to modify the observable value, breaking the client code
you wouldn't want a value used as a template parameter or array dimension if you already anticipated it later being determined at runtime
they differ in that the compiler enforces const use of other members within a const member function, but does not enforce a compile-time constant result with constexpr (due to practical compiler limitations)
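A minimal sketch of that const member function contract, for comparison (X, get, and bump are illustrative names):

struct X {
    int n = 0;
    int  get() const { return n; } // const: callable on a const X
    void bump()      { ++n; }      // non-const: rejected on a const X
};

const X x{};     // value-initialized const object
int v = x.get(); // OK
// x.bump();     // error: bump() is not const

Just as with constexpr, the author opts in to the stronger contract, and the compiler stops clients from depending on anything stronger than what was promised.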
When I pressed Richard Smith, a Clang author, he explained:
The constexpr keyword does have utility.
It affects when a function template specialization is instantiated (constexpr function template specializations may need to be instantiated if they're called in unevaluated contexts; the same is not true for non-constexpr functions since a call to one can never be part of a constant expression). If we removed the meaning of the keyword, we'd have to instantiate a bunch more specializations early, just in case the call happens to be a constant expression.
It reduces compilation time, by limiting the set of function calls that implementations are required to try evaluating during translation. (This matters for contexts where implementations are required to try constant expression evaluation, but it's not an error if such evaluation fails -- in particular, the initializers of objects of static storage duration.)
This all didn't seem convincing at first, but if you work through the details, things do unravel without constexpr. A function template specialization need not be instantiated until it is ODR-used, which essentially means used at runtime. What is special about constexpr functions is that they can violate this rule and require instantiation anyway.
Function instantiation is a recursive procedure. Instantiating a function results in instantiation of the functions and classes it uses, regardless of the arguments to any particular call.
If something went wrong while instantiating this dependency tree (potentially at significant expense), it would be difficult to swallow the error. Furthermore, class template instantiation can have runtime side-effects.
Given an argument-dependent compile-time function call in a function signature, overload resolution may incur instantiation of function definitions merely auxiliary to the ones in the overload set, including the functions that don't even get called. Such instantiations may have side effects including ill-formedness and runtime behavior.
It's a corner case to be sure, but bad things can happen if you don't require people to opt-in to constexpr functions.
We can live without constexpr, but in certain cases it makes the code easier and more intuitive.
For example, here is a class template that wraps an array of some given length:
#include <cstddef> // for std::size_t

template<typename T, std::size_t SIZE>
struct MyArray
{
    T a[SIZE];
};
Conventionally you might declare MyArray as follows (note that decltype(*a1) is int&, so the reference must be stripped before it can serve as the element type):

#include <type_traits> // for std::remove_reference

int a1[100];
MyArray<std::remove_reference<decltype(*a1)>::type,
        sizeof(a1) / sizeof(a1[0])> obj;
Now see how it goes with constexpr:
template<typename T, std::size_t SIZE>
constexpr std::size_t getSize(const T (&a)[SIZE]) { return SIZE; }

int a1[100];
MyArray<std::remove_reference<decltype(*a1)>::type, getSize(a1)> obj;
In short, a call such as getSize(a1) can be used as a template argument only if the compiler recognizes the function as constexpr.
constexpr also works in the other direction: applied to a variable, it guarantees the initializer is a compile-time constant and flags an error otherwise. For example:

int i = 5;
const int j = i;     // OK, but j is not a compile-time constant
constexpr int k = i; // error: i is not a constant expression
Without the keyword, the compiler cannot diagnose such mistakes: it would not be able to tell you that the function is invalid as a constexpr function. Although you said this provides a "false sense of security", I believe it is better to pick up these errors as early as possible.

Templates compilation: gcc vs VS2010

From the book C++ Templates: The Complete Guide by David Vandevoorde and Nicolai Josuttis:
Thus, templates are compiled twice:
1. Without instantiation, the template code itself is checked for correct syntax. Syntax errors are discovered, such as missing semicolons.
2. At the time of instantiation, the template code is checked to ensure that all calls are valid. Invalid calls are discovered, such as unsupported function calls.
With the first point in mind, I wrote:
template<typename T>
void foo(T x)
{
    some illegal text
}

int main()
{
    return 0;
}
It builds fine on Visual Studio 2010 without any warnings, with optimizations turned off. However, it fails on gcc 4.3.4. Which one complies with the C++ standard? Is it mandatory for template code to be compiled even without template instantiation?
The program in question is ill-formed, but the C++ standard does not require a diagnostic in this case, so both Visual Studio and GCC are behaving in a compliant fashion. From §14.6/7 of the C++03 standard:
Knowing which names are type names allows the syntax of every template definition to be checked. No diagnostic shall be issued for a template definition for which a valid specialization can be generated. If no valid specialization can be generated for a template definition, and that template is not instantiated, the template definition is ill-formed, no diagnostic required. If a type used in a non-dependent name is incomplete at the point at which a template is defined but is complete at the point at which an instantiation is done, and if the completeness of that type affects whether or not the program is well-formed or affects the semantics of the program, the program is ill-formed; no diagnostic is required. [Note: if a template is instantiated, errors will be diagnosed according to the other rules in this Standard. Exactly when these errors are diagnosed is a quality of implementation issue.] [Example:

int j;
template<class T> class X {
    // ...
    void f(T t, int i, char* p)
    {
        t = i; // diagnosed if X::f is instantiated
               // and the assignment to t is an error
        p = i; // may be diagnosed even if X::f is not instantiated
        p = j; // may be diagnosed even if X::f is not instantiated
    }
    void g(T t) {
        +; // may be diagnosed even if X::g is not instantiated
    }
};

—end example]
The book you're looking at seems to reflect (mostly) the authors' observations about how compilers really work, not the requirement(s) of the standard. The standard doesn't really do much to excuse ill-formed code inside a template just because it's not instantiated - the program is still ill-formed; only the diagnostic is optional.
At the same time, the book is correct that there are some things a compiler really can't check until the template is instantiated. For example, you might use a dependent name as the name of a function (or invoke it like a function, anyway - if it's a functor instead, that would be fine too). If you instantiate that template over a class where it is a function, fine and well. If you instantiate it over a class where it's really an int, attempting to call it will undoubtedly fail. Until you've instantiated it, though, the compiler can't tell which is which.
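A minimal sketch of that dependent-name situation (all names here are illustrative):

template<typename T>
void call_y(T t) {
    t.y(); // dependent: only checkable at instantiation
}

struct HasFunc { void y() { } };
struct HasInt  { int y; };

int main() {
    call_y(HasFunc()); // fine: y is callable
    // call_y(HasInt()); // error at instantiation: y is not callable
}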
This is a large part of what concepts were really intended to add to C++. You could directly specify (for example) that template X will invoke T::y like a function. Then the compiler could compare the content of the template with the declarations in the concept, and determine whether the body of the template fit with the declaration in the concept or not. In the other direction, the compiler only needed to compare a class (or whatever) to the concept to determine whether instantiating that template would work. If it wasn't going to work, it could report the error directly as a violation of the relevant concept (as it is now, it tries to instantiate the template, and you frequently get some strange error message that indicates the real problem poorly, if at all).
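C++20 eventually shipped a form of this idea; a hedged sketch using syntax that postdates this answer:

template<typename T>
concept HasCallableY = requires(T t) { t.y(); };

template<HasCallableY T>
void call_y(T t) { t.y(); } // violations are reported against the concept,
                            // not as an error deep inside the instantiation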

Why is this C++ explicit template specialization code illegal?

(Note: I know that it is illegal; I'm looking for the reason the language makes it so.)
template<class c> void Foo(); // Note: no generic version, here or anywhere.

int main(){
    Foo<int>();
    return 0;
}

template<> void Foo<int>();
Error:
error: explicit specialization of 'Foo<int>' after instantiation
A quick pass with Google found this citation of the spec but that offers only the what and not the why.
Edit:
Several responses have forwarded the argument (e.g., confirmed my speculation) that the rule is this way because doing otherwise would violate the One Definition Rule (ODR). However, this is a very weak argument because it doesn't hold in this case, for two reasons:
Moving the explicit specialization to another translation unit solves the problem and doesn't seem to violate the ODR (or so the linker says).
The short form of the ODR (as applied to functions) is that you can't have more than one body for any given function, and I don't. The only place the body of the function is ever defined is in the explicit specialization, so the call to Foo<int> can't generate a specialization from the generic template, because there is no generic body to specialize.
Speculation on the matter:
A guess as to why the rule exists at all: if the first line offered a definition (as opposed to a declaration), an explicit specialization after an instantiation would be a problem because you would get multiple definitions. But in this case, the only definition in sight is the explicit specialization.
Oddly the following (or something like it in the real code I'm working on) works:
File A:
template<class c> void Foo();

int main(){
    Foo<int>();
    return 0;
}
File B:
template<class c> void Foo();
template<> void Foo<int>();
But using that in general starts to create a spaghetti include structure.
A guess as to why the rule exists at all: if the first line offered a definition (as opposed to a declaration), an explicit specialization after an instantiation would be a problem because you would get multiple definitions. But in this case, the only definition in sight is the explicit specialization.
But you do have multiple definitions. You have already defined Foo<int> when you instantiate it, and after that you try to specialize the template function for int, which is already defined.
int main(){
    Foo<int>(); // implicitly instantiates Foo<int>
    return 0;
}

template<> void Foo<int>(); // tries to specialize the already-instantiated Foo<int>
This code is illegal because the explicit specialization appears after the instantiation. Basically, the compiler first saw the generic template, then it saw its instantiation and specialized that generic template for a specific type. After that, it saw a specific implementation of the generic template that had already been instantiated. So what is the compiler supposed to do? Go back and re-compile the code? That's why it is not allowed.
You have to think of an explicit specialization as a function declaration. Just as with two overloaded (non-templated) functions, if only one declaration can be found before you try to call the second version, the compiler will say that it cannot find the required overloaded version. The difference with templates is that the compiler can generate that specialization based on the general function template. So, why is it forbidden to do this? Because the full template specialization violates the ODR when it is seen, since, by that time, there already exists a template specialization for the same type. When a template is instantiated (implicitly or not), the corresponding template specialization is also created, so that later uses (in the same translation unit) of the same specialization can reuse the instantiation rather than duplicating the template code for every use. Obviously, the ODR applies just as well to template specializations as it applies elsewhere.
So, when the quoted text says "no diagnostic is required", it just means that the compiler is not required to provide you with the insightful remark that the problem is due to an instantiation of the template occurring sometime before the explicit specialization. But, if it doesn't do that, the other option is to give the standard ODR violation error, i.e., "multiple definitions of 'Foo' specialization for [T = int]" or something like that, which wouldn't be as helpful as the more clever diagnostic.
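A hedged sketch of how to keep everything legal within one translation unit: declare the explicit specialization before the first use, then define it wherever is convenient:

template<class c> void Foo();
template<> void Foo<int>(); // declared before any use: the compiler now knows

int main(){
    Foo<int>(); // uses the explicit specialization
    return 0;
}

template<> void Foo<int>() { } // definition may follow later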
RESPONSE TO EDIT
1) It is often said that all template function definitions (i.e., implementations) must be visible at the point of instantiation (so that the compiler can substitute the template argument(s) and instantiate the function template). However, implicit instantiation of the function template only requires that the declaration of the function be available. So, in your case, splitting it into two translation units works because it does not violate the ODR (since in that TU there is only one declaration of Foo<int>), the declaration of Foo<int> is available at the implicit instantiation point (through Foo<T>), and the definition of Foo<int> is available to the linker within TU B. So, no one has argued that this second example is "not supposed to work"; it works as it is supposed to. Your question is about a rule that applies within one translation unit; don't counter the arguments by saying that the error doesn't occur when you split it into two TUs (especially when it clearly should work fine in two TUs, according to the rules).
2) In your first example, either there will be an error because the compiler cannot find the general function template (the non-specialized implementation) and thus cannot instantiate Foo<int> from the general template, or the compiler will find a definition for the general template, use it to instantiate Foo<int>, and then throw an error because a second template specialization Foo<int> is encountered. You seem to think that the compiler will find your specialization before it gets there; it doesn't. C++ compilers compile the code from top to bottom; they don't go back and forth to substitute stuff here and there. When the compiler gets to the first use of Foo<int>, it sees only the general template at that point, and assumes there will be an implementation of that general template that can be used to instantiate Foo<int>; it is not expecting a specialized implementation for Foo<int>, it is expecting and will use the general one. Then it sees the specialization and throws the error, because it had already made up its mind that the general version was to be used. So it does see two distinct definitions for the same function, and yes, it does violate the ODR. It's as simple as that.
WHY OH WHY!!!
The 2-TU case has to work because you should be able to share template instantiations between TUs; that's a feature of C++, and a useful one (when you have a small number of possible instantiations, you can pre-compile them).
The 1-TU case cannot be allowed because declaring something in C++ tells the compiler "there is this thing defined somewhere". You tell the compiler "there is a general definition of the template somewhere", then say "I want to use the general definition to make the function Foo<int>", and finally you say "whenever Foo<int> is called, it should use this special definition". That's a flat-out contradiction! That's why the ODR exists and applies to this context: to forbid such contradictions. It doesn't matter whether the general definition "to-be-found" is not present; the compiler expects it, and it has to assume that it exists and that it is different from the specialization. It cannot go on and say "OK, so I'll look everywhere else in the code for the general definition, and if I cannot find it, then I will come back and 'approve' this specialization to be used instead of the general definition, but if I find it I will flag this specialization as an error". Nor can it flatly ignore the desire of the programmer by exchanging code that clearly shows intent to use the general template (since the specialization is not declared yet) for code that uses a specialization appearing later. I can't possibly explain the "why" any more clearly than that.
The 2-TU case is completely different. When the compiler is compiling TU A (which uses Foo<int>), it will look for the general definition, fail to find it, assume that it will be linked in later as Foo<int>, and leave a symbol placeholder. Then, since the linker will not look for templates (templates are not exportable, in practice), it will look for a function that implements Foo<int>, and it doesn't care whether it is a specialized version or not. The linker is happy as long as it finds the same symbol to link to. This is so because it would be a nightmare to do otherwise (look up discussions on "exported templates"), both for programmers (not being able to easily change functions in their compiled libraries) and for compiler vendors (having to implement that crazy linking scheme).
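For completeness, a hedged sketch of the instantiation-sharing feature alluded to above, in its modern spelling (an explicit instantiation definition; the general definition lives in one TU):

// TU B: provide the general definition and pre-compile the int instantiation
template<class c> void Foo() { /* general body */ }
template void Foo<int>(); // explicit instantiation definition: emits Foo<int> here

// TU A only needs the declaration:
//     template<class c> void Foo();
// ...and can call Foo<int>(); the linker resolves it to TU B's copy.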