Consider this simple check for whether a (global) function is defined:
template <typename T>
concept has_f = requires ( const T& t ) { Function( t ); };
// later use in MyClass<T>:
if constexpr ( has_f<T> ) Function( value );
Unfortunately, this allows for implicit conversions, which is an obvious risk for mess-ups.
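For example, a self-contained sketch of the problem (has_f_sketch mirrors the concept above, and Function(double) is a hypothetical stand-in overload):
void Function(double); // the only overload; nothing takes int
template <typename U>
concept has_f_sketch = requires ( const U& u ) { Function( u ); };
static_assert(has_f_sketch<int>); // passes: int converts implicitly to double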
Question: How to check if Function( const T& t ) 'explicitly' exists?
Something like
if constexpr ( std::is_same_v<decltype( Function( t ) ), void> )
should be free of implicit conversions, but I can't get it working.
Note: The point of the concept approach was to get rid of old 'detection patterns' and simplify.
Before explaining how to do this, I will explain why you shouldn't want to do any of this.
You mentioned "old 'detection patterns'" without adding any specifics as to what you are referring to. There are a fair number of idioms C++ users sometimes employ that can do something like detecting whether a function takes a particular parameter. Which of these count as "detection patterns" by your reckoning is not known.
However, the vast majority of these idioms exist to serve a specific, singular purpose: to see if a particular function call with a given set of arguments is valid, legal C++ code. They don't really care if a function takes exactly T; testing for T specifically is just how a few of those idioms work to produce the important information: namely, whether you can pass a T to said function.
Looking for a specific function signature was almost always a means to an end, not the final goal.
Concepts, particularly requires expressions, are the end itself. They let you ask the question directly. Because really, you don't care if Function has a parameter that takes a T; you care whether Function(t) is legitimate code or not. Exactly how that happens is an implementation detail.
The only reason I can think of that someone might want to constrain a template on an exact signature (rather than an argument match) is to defeat implicit conversion. But you really shouldn't try to break basic language features like that. If someone writes a type that is implicitly convertible to another, they have the right to the benefits of that conversion, as defined by the language. Namely, the ability to use it in many ways as if it were that other type.
That is, if Function(t) is what your constrained template code is actually going to do, then the user of that template has every right to provide code that makes that compile within the limits of the C++ language, not within the limits of your personal ideas of which features of that language are good or bad.
Concepts are not like base classes, where you decide the exact signature for each method and the user must strictly abide by that. Concepts are patterns that constrain template definitions. Expressions in concept constraints are expressions that you expect to use in your template. You only put an expression in a concept if you plan on using it in your templates constrained by that concept.
You don't use a function signature; you call functions. So you constrain a concept on what functions can be called with which arguments. You're saying "you must let me do this", not "provide this signature".
That having been said... what you want is not generally possible ;)
There are several mechanisms that you might employ to achieve it, but none of them do exactly what you want in all cases.
The name of a function resolves to an overload set consisting of all of the functions that could be called. This name can be converted into a pointer to a specific function signature if and only if that signature is one of the functions in the overload set. So in theory, you might do this:
template <typename T>
concept has_f = requires () { static_cast<void (*)(T const&)>(&Function); };
However, because the name Function is not dependent on T (as far as C++ is concerned), it must be resolved during the first pass of two-phase name lookup for templates. That means any and all Function overloads you intend to care about have to be declared before has_f is defined, not merely instantiated with an appropriate T.
I think this is sufficient to declare that this is non-functional as a solution. Even if it worked though, it would only "work" given 3 circumstances:
Function is known/required to be an actual function, rather than a global object with an operator() overload. So if a provider of T wants to provide a global functor instead of a regular function (for any number of reasons) this method will not work, even though Function(t) is 100% perfectly valid, legitimate, and does none of those terrible implicit conversions that for some reason must be stopped.
The expression Function(t) is not expected to use ADL to find the actual Function to call.
Function is not a template function.
And not one of these possibilities has anything to do with implicit conversions. If you're going to call Function(t), then it's 100% OK for ADL to find it, template argument deduction to instantiate it, or for the user to fulfill this with some global lambda.
Your second-best bet is to rely on how overload resolution works. C++ permits at most one user-defined conversion in an implicit conversion sequence. As such, you can create a type which will consume that one user-defined conversion in the function call expression in lieu of T. And that conversion should be a conversion to T itself.
You would use it like this:
template<typename T>
class udc_killer
{
public:
//Will never be called.
operator T const&();
};
template <typename T>
concept has_f = requires () { Function(udc_killer<T>{}); };
This of course still leaves the standard conversions, so you can't tell whether Function takes an int or a float when T is int, nor distinguish derived classes from their bases. You also can't detect whether Function has default parameters after the first one.
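Putting the pieces together, a small sketch of what this buys you and what it doesn't (the Widget type and the Function overloads are hypothetical; everything is declared before the concept, per the two-phase lookup caveat above):
template <typename T>
class udc_killer
{
public:
    // Never defined; only used in the unevaluated requires-expression.
    operator T const&();
};
void Function(int const&);
void Function(float);
struct Widget { operator int() const; }; // user-defined conversion to int
template <typename T>
concept has_f = requires { Function(udc_killer<T>{}); };
static_assert(has_f<int>);     // exact parameter match
static_assert(!has_f<Widget>); // blocked: the one allowed user-defined conversion is spent on udc_killer
static_assert(has_f<short>);   // still true: short -> int is a standard conversion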
Overall, you're still not detecting the signature, merely call-ability. Because that's all you should care about to begin with.
C++20 introduces concepts, which allows us to specify in the declaration of a template that the template parameters must provide certain capabilities. If a template is instantiated with a type that does not satisfy the constraints, compilation will fail at instantiation instead of while compiling the template's body and noticing an invalid expression after substitution.
This is great, but it raises the question: is there a way to have the compiler look at the template body before instantiation (i.e., looking at it as a template and not as a particular instantiation of a template) and check that all the expressions involving template parameters are guaranteed by the constraints to exist?
Example:
template<typename T>
concept Fooer = requires(T t)
{
{ t.foo() };
};
template<Fooer F>
void callFoo(F&& fooer)
{
fooer.foo();
}
The concept prevents me from instantiating callFoo with a type that doesn't support the expression that's inside the template body. However, if I change the function to this:
template<Fooer F>
void callFoo(F&& fooer)
{
fooer.foo();
fooer.bar();
}
This will fail if I instantiate callFoo with a type that defines foo (and therefore satisfies the constraints) but not bar. In principle, the concept should enable the compiler to look at this template and reject it before instantiation, because it includes the expression fooer.bar(), which is not guaranteed by the constraint to exist.
I assume there's probably backward compatibility issues with doing this, although if this validation is only done with parameters that are constrained (not just typename/class/etc. parameters), it should only affect new code.
This could be very useful because the resulting errors could be used to guide the design of constraints. Write the template implementation, compile (with no instantiations yet), then on each error, add whatever requirement is needed to the constraint. Or, in the opposite direction, when hitting an error, adjust the implementation to use only what the constraints provide.
Do any compilers support an option to enable this type of validation, or is there a plan to add this at any point? Is it part of the specification for concepts to do this validation, now or in the future?
No, no, and no.
The feature you're looking for is called definition checking. That is, the compiler checks the definition of the template at the point of its definition based on the provided concepts, and issues errors if anything doesn't validate. This is how, for instance, Rust Traits, Swift Protocols, and Haskell Typeclasses work.
But C++ concepts don't work like that, and it seems completely infeasible to ever add support for such a thing given that C++ concepts can be arbitrary expressions rather than function signatures (as they are in other languages).
The best you can do is thoroughly unit test your templates with aggressively exotic types that meet your requirements as minimally as possible (the term here is archetype) and hope for the best.
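For example, a minimal archetype for the Fooer concept from the question might look like this (a sketch; FooArchetype is a hypothetical name):
// Satisfies Fooer and deliberately nothing more.
struct FooArchetype
{
    FooArchetype() = delete;                    // not default-constructible
    FooArchetype(FooArchetype const&) = delete; // not copyable
    void foo();                                 // the single guaranteed operation
};
// Forcing the template body to compile against the archetype flushes out
// uses of anything the concept does not promise (e.g. fooer.bar() fails here).
template void callFoo<FooArchetype&>(FooArchetype&);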
TL;DR: no.
The design for the original C++11 concepts included validation. But when that was abandoned, the new version was designed to be much more narrow in scope. The new design was originally built on constexpr boolean conditions. The eventual requires expression was added to make these boolean checks easier to write and to bring some sanity to relationships between concepts.
But the fundamentals of the design of C++20 concepts makes it basically impossible to do full validation. Even if a concept is built entirely out of atomic requires expressions, there isn't a way to really tell if an expression is being used exactly in the code the way it is in the requires expression.
For example, consider this concept:
template<typename T, typename U>
concept func_to_u = requires(T const t)
{
{t.func()} -> std::convertible_to<U>;
};
Now, let's imagine the following template:
template<typename T, typename U> requires func_to_u<T, U>
void foo(T const &t)
{
std::optional<U> u(std::in_place, t.func());
}
If you look at std::optional, you find that the in_place_t constructor doesn't take a U. So... is this a legitimate use of that concept? After all, the concept says that code guarded by this concept will call func() and will convert the result to a U. But this template does not do this.
It instead takes the return type, instantiates a template that is not guarded by func_to_u, and that template does whatever it wants. Now, it turns out that this template does perform a conversion operation to U.
So on the one hand, it's clear that our code does conform to the intent of func_to_u. But that is only because it happened to pass the result to some other function that conformed to the func_to_u concept. But that template had no idea it was subject to the limitations of convertible_to<U>.
So... how is the compiler supposed to detect whether this is OK? The trigger condition for failure would be somewhere in optional's constructor. But that constructor is not subject to the concept; it's our outer code that is subject to the concept. So the compiler would basically have to unwind every template your code uses and apply the concept to it. Only it wouldn't even be applying the whole concept; it would just be applying the convertible_to<U> part.
The complexity of doing that quickly spirals out of control.
I wonder if there is a reason why the std::sto series (e.g. std::stoi, std::stol) is not a function template, like this:
template<typename T>
T sto(std::string const & str, std::size_t *pos = 0, int base = 10);
and then:
template<>
int sto<int>(std::string const & str, std::size_t *pos, int base)
{
// do the stuff.
}
template<>
long sto<long>(std::string const & str, std::size_t *pos, int base)
{
// do the stuff.
}
/* etc. */
To my mind, that would be a better design, because at the moment, when I have to convert a string to whatever numerical value a user wants, I have to manage each case manually.
Is there a reason not to have such a function template? Was this a deliberate choice, or is it just how it was done?
Looking at the description of these functions at cppref, I note the following:
... Interprets a signed integer value in the string str.
1) calls std::strtol(str.c_str(), &ptr, base)...
and strtol is a C standard function that's also available in C++.
Reading further, we see (for the C++ sto* functions):
Return value
The string converted to the specified signed integer type.
Exceptions
std::invalid_argument if no conversion could be performed
std::out_of_range if the converted value would fall out of the range of the result type or if the underlying function (std::strtol or std::strtoll) sets errno to ERANGE.
So while I have no original source for this, and indeed have never worked with these functions, I would guess that:
TL;DR: These functions are C++-ish wrappers around already existing C functions -- strtol and friends -- so they resemble those functions as closely as possible.
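To make that guess concrete, here is a sketch of how such a wrapper could look (stoi_like is a hypothetical name, not the actual library implementation):
#include <cerrno>
#include <climits>
#include <cstddef>
#include <cstdlib>
#include <stdexcept>
#include <string>
// stoi as a thin C++-ish wrapper over std::strtol.
int stoi_like(std::string const& str, std::size_t* pos = nullptr, int base = 10)
{
    char* end = nullptr;
    errno = 0;
    long const result = std::strtol(str.c_str(), &end, base);
    if (end == str.c_str())
        throw std::invalid_argument("stoi_like: no conversion");
    if (errno == ERANGE || result < INT_MIN || result > INT_MAX)
        throw std::out_of_range("stoi_like: out of range");
    if (pos)
        *pos = static_cast<std::size_t>(end - str.c_str());
    return static_cast<int>(result);
}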
I have to manage each case manually. Is there a reason not to have such a function template?
In the case of such questions, Eric Lippert (C#) usually says something along the lines of:
"If a feature is missing, then it's missing because no one has implemented it yet. And that's because either no one wanted it earlier, or it was considered not worth the effort, or it couldn't have been finished before publishing the current release."
Here, I guess it's the "not worth it" part, but I have neither asked the committee about it nor managed to find any answer in old questions and FAQs. I didn't spend much time searching, though.
I say this because I suppose that most (if not all) of these functions' functionality is already contained in stream classes like istringstream. Just like cin etc., it has a catch-all operator >>, overloaded for all basic numeric types (and more).
Furthermore, stream manipulators like std::hex (std::setbase) already solve the problem of passing various type-dependent configuration parameters to the actual conversion functions. No problems with mixed function signatures (like those mentioned by DavidHaim in his answer): there's just a single operator>>.
So, since we already have this in streams, since we can already read numbers etc. from strings with a simple foo >> bar >> setbase(42) >> baz >> ..., I think it was not worth the effort to add more complicated layers over the old C runtime functions.
No proof for that though. Just a hunch.
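As an illustration of that stream-based approach, here is a hypothetical generic helper built on istringstream (from_string is an invented name, not a standard facility):
#include <iomanip>
#include <sstream>
#include <stdexcept>
#include <string>
// Generic conversion on top of the streams machinery.
template <typename T>
T from_string(std::string const& str, int base = 10)
{
    std::istringstream iss(str);
    T value{};
    if (!(iss >> std::setbase(base) >> value))
        throw std::invalid_argument("from_string: no conversion for \"" + str + "\"");
    return value;
}
// Usage:
// int i    = from_string<int>("2a", 16); // 42
// double d = from_string<double>("3.14");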
The problem with template specialization is that the specialization requires you to match the original template function signature, so each specialization must implement the interface of (string,pos,base).
If you would like to have some other type which does not follows this interface, you are in trouble.
Suppose that, in the future, we would like to have sto<std::pair<int,int>>. We would want pos and base for the first and the second stringified integer, so we would like the signature to be of the form (string, pos1, base1, pos2, base2). Since sto's signature is already set, we cannot do it.
You can always wrap std::sto* in your implementation of sto for integral types, but you cannot do that the other way around.
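A sketch of that direction (mirroring the question's sto; the remaining specializations are left out):
#include <cstddef>
#include <string>
// Primary template, specialized per type; each specialization simply
// forwards to the existing std::sto* function.
template <typename T>
T sto(std::string const& str, std::size_t* pos = nullptr, int base = 10);
template <>
int sto<int>(std::string const& str, std::size_t* pos, int base)
{
    return std::stoi(str, pos, base);
}
template <>
long sto<long>(std::string const& str, std::size_t* pos, int base)
{
    return std::stol(str, pos, base);
}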
The purpose of these functions is to provide simple conversions for common cases. They are not intended as a general-purpose conversion suite. std::ostringstream is much better for that kind of thing.
To my mind, that would be a better design, because at the moment, when I have to convert a string to whatever numerical value a user wants, I have to manage each case manually.
No, it would not. The goal of templates (deliberately setting template metaprogramming apart) is not to replace overloading; you should always prefer overloading to templates. In fact, that is something the language already does for you: between a candidate function and a possible template instantiation, the former is preferred. Using language features for their own sake is bad.
I don't see how templates could help either. Whatever type the user decides to input, it won't be known till runtime, and template types are deduced at compile time. C++ is a statically typed language. In this case, templates will just add an unneeded layer of complexity over normal function overloading.
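To illustrate the point (convert_and_print is a hypothetical helper): even with a generic sto<T>, a type chosen at run time still needs an explicit branch per supported type, because T must be fixed at compile time.
#include <iostream>
#include <string>
void convert_and_print(std::string const& kind, std::string const& input)
{
    if (kind == "int")
        std::cout << std::stoi(input) << '\n';
    else if (kind == "double")
        std::cout << std::stod(input) << '\n';
    // ...one branch per supported type, template or not
}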
While experimenting with function return type deduction, I ran into this:
auto func();
int main() { func(); }
auto func() { return 0; }
This fails to compile with:
error: use of ‘auto func()’ before deduction of ‘auto’
Is there a way to use this feature without needing to specify the definition before the call? With a large call tree, it becomes complicated to re-arrange functions so that their definition is seen before all of the places they are called. Surely an evaluation could be held off until a particular function definition was found and auto could then be deduced.
No, there is not.
Even ignoring the practical problems (requiring multi-pass compilation, the ease of creating undecidable return types via mutually recursive type definitions, the difficulty of isolating the source of compilation errors, etc.) and the design issues (that forward declaration would be nearly useless), C++11 was designed with ease of implementation in mind. Things that made it harder to write a compiler needed strong justification.
The myriad restrictions on auto mean that it was really easy to slide it into existing compilers: it is among the most supported C++11 features in my experience. C++14 relaxes many of the restrictions, but does not go nearly as far as you describe. Each relaxation requires justification and confidence that it will be worth the cost to compiler writers to implement.
I would not even want that feature at this time, as I like the signatures of my functions to be deducible at the point I call them, at the very least.
No, this simply isn't possible with C++'s compilation model. Remember that the definition of func may appear in a different file, or even inside a library somewhere. The return type must be known if you are going to use it.
The relevant paper is N3638, which prohibits use of functions declared with an auto return type prior to knowing the return type. The paper actually makes a point, however, that as soon as the return type can be deduced from the function body, the function can also be called! Thus, a function with an auto return type can actually be recursive.
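For example, a small sketch of that rule:
// Deduction succeeds because a non-recursive return statement comes first;
// the recursive call below can then use the already-deduced type.
auto factorial(int n)
{
    if (n <= 1)
        return 1;                 // return type deduced as int here
    return n * factorial(n - 1);  // OK: the type is already known
}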
I would avoid automatic deduction of the return type of functions as much as you can. While it might appear to be a nice feature that spares you from having to figure out the type, it is not a simple feature to use, and it has limitations (the return type cannot be used in an SFINAE context, and it requires the instantiation of the function...).
The answer to your question is that the compiler cannot infer the type without seeing the definition, and the processing is always done in a top-down approach.
I have a library where template classes/functions often access explicit members of the input type, like this:
template <typename InputType>
bool IsSomethingTrue(InputType arg1) {
    typename InputType::SubType1::SubType2 a; // requires these nested types to exist
    // Do something
}
Here, SubType1 and SubType2 are themselves generic types that were used to instantiate InputType. Is there a way to quickly find all the types in the library that are valid to pass in for InputType (likewise for SubType1 and SubType2)? So far I have just been searching the entire code base for classes containing the appropriate members, but the template input names are reused in a lot of places so it is very cumbersome.
From a coding perspective, what is the point of using a template like this when there is only a limited set of valid input types that are probably already defined? Why not just overload this function with explicit types rather than making them generic?
First of all, because those overloads would have the exact same body, or very similar ones. If the body of the function is long enough, having several versions of it is a maintenance problem. When you need to change the algorithm, you now have to do it N times and hope you won't make mistakes. Most of the time, redundancy is bad.
Moreover, even though there may be only a few such types now that satisfy the syntactic requirements of your function, more may exist in the future. Having a function template lets your algorithm work with new types without the need to write a new overload every time one is introduced.
The advantage of using generic types is not on the template end: if you're willing to explicitly name them and edit the template code every time, it's the same.
What happens, however, when you introduce a subclass or variant of a type accepted by the template? No modification needed on the other end.
In other words, when you say that all types are known beforehand, you are excluding code modifications and extensions, which is half the point of using templates.
Should I define an interface that explicitly tells the user everything they need to implement in order to use the class as a template argument, or let the compiler complain when the required functionality is not implemented?
template <class C1, class C2>
class SomeClass
{
    ...
};
C1 has to implement certain methods and operators, but the compiler won't complain until they are used. Should I rely on the compiler to catch this, or should I make sure that I do:
class C1 : public SomeInterfaceEnforcedFunctions
{
    // C1 has to implement them either way,
    // but this is explicit? Am I right, or is this
    // redundant?
};
Ideally, you should use a concept to specify the requirements on the type used as a template argument. Unfortunately, neither the current nor the upcoming standard includes concepts.
Absent that, there are various methods available for enforcing such requirements. You might want to read Eric Niebler's article about how to enforce requirements on template arguments.
I'd agree with Eric's assertion that leaving it all to the compiler is generally unacceptable. That is a large part of the source of the horrible error messages most of us associate with templates, where seemingly trivial typos can result in pages of unreadable dreck.
If you are going to force an interface, then why use a template at all? You can simply do -
class SomeInterface // make this an interface by giving it pure virtual functions
{
public:
    virtual ~SomeInterface() = default;
    virtual RType SomeFunction(Param1 p1, Param2 p2) = 0;
    /* You don't have to know how this method is implemented,
       but now you can guarantee that whoever wants to create a type
       that is a SomeInterface will have to implement SomeFunction in
       their derived class. */
};
followed by
template <class C2>
class SomeClass
{
//use SomeInterface here directly.
};
Update -
A fundamental problem with this approach is that it only works for types that are rolled out by the user. If there is a standard library type that conforms to your interface specification, or third-party code, or another library (like Boost) with classes that conform to SomeInterface, they won't work unless you wrap them in your own class, implement the interface, and forward the calls appropriately. I'm somehow not liking my answer anymore.
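For completeness, a sketch of that wrapping (ThirdPartyType and its equivalentFunction are hypothetical, as are the placeholder types from the snippet above):
// Adapt a foreign type to SomeInterface by wrapping and forwarding.
class ThirdPartyAdapter : public SomeInterface
{
public:
    explicit ThirdPartyAdapter(ThirdPartyType& wrapped) : wrapped_(wrapped) {}

    RType SomeFunction(Param1 p1, Param2 p2) override
    {
        return wrapped_.equivalentFunction(p1, p2); // forward the call
    }

private:
    ThirdPartyType& wrapped_;
};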
Absent concepts, a for-now-abandoned feature (pun not intended, but noted) for describing which requirements a template parameter must fulfill, the requirements are only enforced implicitly. That is, if whatever your users use as a template parameter doesn't fulfill them, the code won't compile. Unfortunately, the error messages resulting from that are often gibberish. The only things you can do to improve matters are to
describe the requirements in your template's documentation
insert code that checks those requirements early on in your template, before it delves so deep that the error messages your users get become unintelligible.
The latter can be quite complicated (static_assert to the rescue!) or even impossible, which is the reason concepts were considered for inclusion as a core-language feature instead of a library.
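For instance, a sketch of such an early check (Container is a hypothetical template):
#include <type_traits>
template <typename T>
class Container
{
    // Fail up front with a readable message instead of deep inside the implementation.
    static_assert(std::is_default_constructible<T>::value,
                  "Container<T> requires T to be default-constructible");
    static_assert(std::is_copy_assignable<T>::value,
                  "Container<T> requires T to be copy-assignable");
    // ...the actual implementation follows
};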
Note that it is easy to overlook a requirement this way, which will only become apparent when someone uses a type as a template parameter that won't work. However, it is at least as easy to overlook that requirements are often quite loose, and to put more into the description than what the code actually calls for.
For example, + is defined not only for numbers, but also for std::string and for any number of user-defined types. Consequently, a template add<T> might be used not only with numbers, but also with strings and an infinite number of user-defined types. Whether this is an unwanted side effect you want to suppress or a feature you want to support is up to you. All I'm saying is that it is not easy to catch.
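A minimal sketch of that add<T>:
#include <string>
template <typename T>
T add(T a, T b) { return a + b; }
// add(1, 2)                                    -> 3
// add(std::string("foo"), std::string("bar")) -> "foobar": feature or accident?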
I don't think defining an interface in the form of an abstract base class with virtual functions is a good idea here. This is run-time polymorphism, a main pillar of classic OO. If you do this, then you don't need a template; just take the base class by reference.
But then you also lose one of the main advantages of templates: they are, in some ways, more flexible (try writing an add() function in classic OO that works with any type overloading +) and faster, because the binding of the function calls takes place not at run time but during compilation. (That buys more than it might look like at first, thanks to the ability to inline, which usually isn't possible with run-time polymorphism.)