If I have a function template with a default argument for its template parameter, and that function takes a non-defaulted function parameter of that template parameter's type, then what is the point of the language allowing a default template argument that will never be used?
#include <iostream>
using namespace std;

template <class T = int>
void foo(T x){cout << x << endl;}
int main()
{
foo("hi"); // T is char const *
foo(); // error
}
As you can see, T = int can never be used, because the function has no default argument for its parameter, so in this context the compiler always deduces T from the argument passed to foo.
But it can be used. Here's an example.
auto* foo_ptr = &foo<>; // The default template argument is used.
A function call expression is not the only context where a function template's arguments need to be figured out.
Although default template arguments are usually used for template parameters that cannot be deduced, taking the address of the function template (as in &foo<> above) uses them too.
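Here is that address-of example as a complete program (a minimal sketch; the pointer name and the call through it are mine):
#include <iostream>

template <class T = int>
void foo(T x) { std::cout << x << '\n'; }

int main()
{
    // There is no function argument to deduce T from, so the default T = int
    // kicks in: foo_ptr has type void (*)(int).
    auto* foo_ptr = &foo<>;
    foo_ptr(42); // prints 42
}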
Another example:
#include <typeinfo>
#include <iostream>
using namespace std;
template <class T = int>
void coutType() {
cout << typeid(T).name() << endl;
}
int main() {
// default
coutType();
// non-default
coutType<double>();
}
Output (note that the exact strings returned by typeid(...).name() are implementation-defined; they are shown demangled here):
int
double
what's the point in the language to allow [X]?
Better: what would be the point in prohibiting [X]?
There is value in simplicity. I would give the burden of proof to the side that wants to make the language more complicated. The language allows a template parameter to have a default value. The language allows a template parameter to be deduced when the function is directly invoked. It is simpler to allow these to co-exist than to add a prohibition against using both. Hence I would ask why prohibit, rather than ask why allow. If there is no compelling reason for the complication, then stick to simple. Being allowed to do something does not force one to do it. And maybe someone (like StoryTeller and Dani) will find a use for that something.
Of course, simplicity is not the ultimate criterion. If harm would come from [X], then that would likely outweigh simplicity concerns. Complications can be justified. However, complications should not be introduced just because something seems useless.
On the other hand, one could reasonably ask if [X] can be put to use. And maybe that was the real question, even if the OP did not realize it. Still, I thought I would put up one answer addressing the question-as-phrased.
Related
I understand that with concepts, a constrained function (regardless of how "loose" the constraint actually is) is always a better match than an unconstrained function. But is there any syntax to selectively call the unconstrained version of f() as in the sample code below? If not, would it be a good idea for compilers to warn about uncallable functions?
#include <iostream>
template <typename T> requires(true)
void f() { std::cout << "Constrained\n"; }
template <typename T>
void f() { std::cout << "NOT Constrained\n"; }
int main() {
f<int>();
}
https://godbolt.org/z/n164aTvd3
Different overloads of a function are meant to all do the same thing. They may do it in different ways or on different kinds of objects, but at a conceptual level, all overloads of a function are supposed to do the same thing.
This includes constrained functions. By putting a constrained overload in a function overload set, you are declaring that this is a valid alternative method for doing what that overload set does. As such, if the constraint matches, then that's the function that should be called. Just like for parameters of different types in regular function overloading.
If you want to explicitly call an overload hidden by a constraint, you have already done something wrong in your design. Or more specifically, if some overload is completely hidden by one or more constrained overloads, you clearly have one more overload than you actually needed.
If the constraints match, the caller should be 100% fine with getting the constrained overload. And if this isn't the case, your design has a problem.
So no, there is no mechanism to do this. Just as there's no mechanism to bypass an explicitly specialized template and use the original version if your template arguments match the specialization.
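To illustrate the intended use, here is my own sketch (not from the question, assuming C++20): the constrained overload is simply a better way of doing the same job for integral arguments, so a caller never has a reason to bypass it.
#include <concepts>
#include <iostream>

template <typename T>
void print(T v) { std::cout << "generic: " << v << '\n'; }

template <std::integral T>
void print(T v) { std::cout << "integral: " << v << '\n'; } // preferred when it applies

int main()
{
    print(42);   // both overloads are viable; the more constrained one wins
    print(3.14); // only the generic overload matches
}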
AFAIK, overloading a function for types that are related by conversion, or writing a call that needs a cast applied to the argument to select a best match, is considered bad design.
#include <iostream>

void foo(int)
{
std::cout << "foo(int)\n";
}
void foo(float)
{
std::cout << "foo(float)\n";
}
int main()
{
foo(5.3);// ambiguous call
foo(0u); // ambiguous call
}
Because 5.3 is of type double, it can be converted equally well to either float or int, so there is more than one best match for the call and it is ambiguous. The same goes for the second call: 0u is of type unsigned int, which can be converted equally well to int or float, so that call is ambiguous too.
To disambiguate the calls I can use an explicit cast:
foo(static_cast<float>(5.3)); // calls foo(float)
foo(static_cast<int>(0u)); // calls foo(int)
The code now works, but it is bad design because it defeats the point of function overloading, where the compiler is responsible for choosing the best-matching function for a call based on the arguments passed to it.
Up to here I'm OK. But what about template argument deduction?
The compiler applies only a few conversions to the arguments in a call to a function template when deducing the types of the template arguments.
So the compiler applies neither arithmetic conversions nor integral promotions; instead it generates a new instantiation that best matches the call:
#include <iostream>
#include <typeinfo>

template <typename T>
void foo(T)
{
std::cout << "foo(" << typeid(T).name() << ")\n";
}
int main()
{
foo(5.3); // calls foo(double)
foo(0u); // calls foo(unsigned)
}
Now it works fine: the compiler generates two versions of foo, one with double and one with unsigned.
The thing that concerns me: is it a bad idea to pass arguments of related types into a function template that uses template argument deduction for its arguments?
Or is the problem in the language itself, because the compiler generates versions that are related by conversion?
In your first program containing the overload set of foo with int and float, the language rules say that calls like:
foo(5.3);
foo(0u);
are ambiguous. Overload resolution rules say that in both those calls, the conversions needed for the function arguments to match the parameters are tied, resulting in both candidates being equally good matches.
Your solution using a single function template could work, depending on your use case. One potential issue is that every single call to foo with a different argument type will result in a completely different instantiation of foo. Another is that this function template will accept any argument type for which the definition of foo is valid. Neither of these behaviors may be what you desire.
Based on your overload set in the first program, and the static_casts in your code to explicitly call specific versions, it seems that you actually want to group int and unsigned (i.e. integral types) into one category, and float and double (i.e. floating-point types) into another. You could implement this in a number of different ways, e.g. by providing an exhaustive overload set that covers all the types that you care about.
The approach I would recommend is to implement the overload set with two function templates, each allowing one family of types. From C++20, you could implement it like this:
#include <concepts>
#include <iostream>

void foo(std::integral auto)
{
std::cout << "foo(integral)\n";
}
void foo(std::floating_point auto)
{
std::cout << "foo(floating_point)\n";
}
You can also achieve the same effect before C++20, with a little more syntax, and using SFINAE and other template meta-programming techniques.
Here's a demo.
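For reference, here is one possible pre-C++20 version of that overload set written with std::enable_if (my own sketch, not necessarily what the linked demo shows):
#include <iostream>
#include <type_traits>

// Each overload drops out of the overload set (SFINAE) unless its trait holds for T.
template <typename T, typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
void foo(T)
{
    std::cout << "foo(integral)\n";
}

template <typename T, typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
void foo(T)
{
    std::cout << "foo(floating_point)\n";
}

int main()
{
    foo(0u);  // foo(integral)
    foo(5.3); // foo(floating_point)
}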
Or is the problem in the language itself, because the compiler generates versions that are related by conversion?
Okay, they are related by conversion. Now which one should the compiler generate? What deterministic algorithm do you propose it employ to choose the best function to instantiate? Should it parse the entire translation unit first to figure out the best match, or should it still parse top to bottom and keep a "running best" function?
Now, assuming those questions are answered. What happens when you modify your code a bit? What happens if you include a header that instantiates an even better function? You didn't really change your code, but its behavior is still altered, possibly very drastically.
Considering the design headache it is for the language, and the potential chaos this behavior can bring unto unsuspecting code, it'd be a very bad idea to try and make compilers do this.
So no, it's not a language problem. The current behavior is really the sanest choice; even if it's not always what we want, it's something we can learn to expect.
The thing that concerns me: is it a bad idea to pass arguments of related types into a function template that uses template argument deduction for its arguments?
There's no way to answer it generally for all cases. There are no silver bullets. It could be exactly what your overload set needs to do. Or it could be that you need to build a more refined set of function(s) (templates) that interact with overload resolution via SFINAE or more modern techniques. For instance, you could do this in C++20
#include <concepts>

template <std::integral I>
void foo(I)
{
    // only integral types are accepted here
}
template <std::floating_point F>
void foo(F)
{
    // only floating-point types are accepted here
}
The concepts constrain each template to work only with a specific family of types. That's one way to build the overload set you wanted in your first example, avoid the ambiguity, and work with exact types as templates are designed.
How can I make a function in C++ accept every object, so that I can pass it numbers, strings, or other objects? I am not very good at C++; I hope it's not a totally stupid question...
Edit: OK, an example: if you want to wrap the std::cout streams into normal functions, that function should be able to accept everything, from integers and floats to complex objects. I hope it's clearer now!
You can overload your function for different types, i.e.
size_t func(int);
size_t func(std::string);
Alternatively and/or additionally, you can provide a function template, which is a way to tell the compiler how to generate your function for any particular type, for example
template<typename T>
size_t func(T const&) { return sizeof(T); }
You may use more advanced techniques such as SFINAE to effectively overload those function templates, i.e. to use different templates for different kinds of types T (e.g. integral types, pointers, built-in types, PODs, etc.). The compiler will then pick the best-fitting func() (if any) for any function call it encounters and, if this is a template, generate an appropriate function.
This requires no re-coding.
A completely different approach is to use a type-erasing wrapper, such as boost::any, where the function has to enumerate the expected types at coding time (as opposed to compile time):
size_t func(boost::any const& x)
{
auto i = boost::any_cast<int>(&x); // pointer form of any_cast: returns nullptr on mismatch instead of throwing
if(i) return func(*i);
// etc for other types, but this must be done at coding time!
return 0; // fallback when no listed type matched
}
You can use templates for this purpose:
template <typename T>
void foo(T const & value)
{
// value is of some type T, which can be any type at all.
}
What you can actually do with the value may be rather limited without knowing its type -- it depends on the goal of your function. (If someone attempts to call the function with an argument type that causes that function specialization to be ill-formed then your template function will fail to instantiate and it will be a compile-time error.)
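Applied to the std::cout wrapper from the question's edit, a minimal sketch of this approach (the name print is mine) could look like this:
#include <iostream>
#include <string>

// Accepts any type that can be streamed to std::cout.
template <typename T>
void print(T const& value)
{
    std::cout << value << '\n';
}

int main()
{
    print(42);                   // int
    print(3.14);                 // double
    print(std::string("hello")); // a class type
    print("a string literal");   // const char[...]
}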
I'm not sure what you're trying to accomplish, but you can pass a void pointer as a parameter.
void foo(void* bar);
If I understood you correctly you might wanna try using templates http://en.cppreference.com/w/cpp/language/function_template
You are probably looking for templates.
I suggest you read this.
I can write a templated function this way
template<class T> void f(T x) {…}
or this way
template<class T> void f(T const& x) {…}
I guess that the second option can be more efficient, as it explicitly avoids a copy, but I suspect that it can also fail for some specific types T (e.g. functors?).
So, when should I use the first option, and when the second? There are also boost::call_traits<T>::param_type and boost::reference_wrapper, which came up in the answers to my previous question, but people don't use them everywhere, do they? Is there a rule of thumb for this? Thanks.
Is there a rule of thumb for this?
The same general rules for when to use pass by reference vs. pass by value apply.
If you expect T always to be a numeric type or a type that is very cheap to copy, then you can take the argument by value. If you are going to make a copy of the argument into a local variable in the function anyway, then you should take it by value to help the compiler elide copies that don't really need to be made.
Otherwise, take the argument by reference. For types that are cheap to copy this may be slightly more expensive, but for other types it will be faster. If you find this is a performance hotspot, you can overload the function for different types of arguments and do the right thing for each of them.
I suspect that it can also fail for some specific types
Pass by reference-to-const is the only passing mechanism that "never" fails. It does not pose any requirements on T, it accepts both lvalues and rvalues as arguments, and it allows implicit conversions.
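To make that concrete, here is a small sketch (my example, using std::unique_ptr as a move-only type, assuming C++11):
#include <memory>
#include <utility>

template <class T> void by_value(T x) {}       // needs to copy or move the argument
template <class T> void by_cref(T const& x) {} // imposes no such requirement

int main()
{
    std::unique_ptr<int> p(new int(1));
    // by_value(p);            // error: unique_ptr has no copy constructor
    by_value(std::move(p));    // compiles, but only because we explicitly move
    by_cref(p);                // fine: an lvalue binds to T const&
    by_cref(std::unique_ptr<int>(new int(2))); // fine: an rvalue binds to T const& too
}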
Thou shalt not wake the dead, but I had a similar problem, and here's some example code that shows how to use C++11's type traits to decide whether a parameter should be passed by value or by reference:
#include <iostream>
#include <type_traits>
template<typename key_type>
class example
{
// Pass fundamental types by value and everything else by reference.
using parameter_type = typename std::conditional<std::is_fundamental<key_type>::value, key_type, key_type&>::type;
public:
void function(parameter_type param)
{
if (std::is_reference<parameter_type>::value)
{
std::cout << "passed by reference" << std::endl;
} else {
std::cout << "passed by value" << std::endl;
}
}
};
struct non_fundamental_type
{
int one;
char * two;
};
int main()
{
int one = 1;
non_fundamental_type nft;
example<int>().function(one);
example<non_fundamental_type>().function(nft);
return 0;
}
Hope it helps others with a similar issue.
Besides what James McNellis wrote, I just want to add that you can specialize your template for reference types (for example like this)
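The linked example is not reproduced here, but a sketch of what specializing for a reference type can look like (my own, hypothetical example) is:
#include <iostream>

template <class T>
void f(T x) { std::cout << "general version (by value)\n"; }

// Explicit specialization for T = int&: x now refers to the caller's object.
template <>
void f<int&>(int& x) { std::cout << "specialization for int&\n"; ++x; }

int main()
{
    int n = 0;
    f(n);       // deduces T = int, general version
    f<int&>(n); // explicitly selects the reference specialization; n becomes 1
    std::cout << n << '\n';
}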
Boost's call_traits provides a type trait that selects the "best" parameter type based on T:
call_traits<T>::param_type
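A small sketch of how that trait is used (assuming Boost is available; the function name f is mine):
#include <boost/call_traits.hpp>
#include <iostream>
#include <string>

// call_traits<T>::param_type is by-value for small built-in types and
// T const& for everything else.
template <class T>
void f(typename boost::call_traits<T>::param_type x)
{
    // param_type is a non-deduced context, so T must be supplied explicitly.
    std::cout << sizeof(T) << '\n';
}

int main()
{
    f<int>(1);               // param_type is (const) int: passed by value
    f<std::string>("hello"); // param_type is std::string const&
}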
As already mentioned, there are no template-specific issues.
A few days ago I discovered a behaviour of C++ where template arguments are automatically filled in, as shown in this example (nonsensical, only used to show what I mean):
#include <iostream>
template<typename Type> void setVar(Type& subj, const Type& in)
{
subj = static_cast<Type>(in);
}
int main()
{
int foo;
setVar(foo, 42);
std::cout << foo << std::endl;
}
My questions:
What is this behaviour called?
are there special rules when and why templates can be automatically inserted?
What is this behaviour called?
Template argument deduction.
are there special rules when and why templates can be automatically inserted?
It's not that templates are "inserted"; rather, the types of the parameters are automatically deduced from the arguments. When and how? That's what template argument deduction (TAD) is all about.
Check out section 14.8.2 in C++03
It's called template argument deduction, and of course there are special rules. Many rules. In 14.8.2 of the standard [temp.deduct].
The summary version is that if there's a set of template arguments which allows the function to be called, then it will be called with those arguments. The complication is exactly what's allowed, and how to choose between possible alternatives.
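Building on the setVar example above, one of those rules is that deductions made from different function arguments have to agree; a small sketch (mine):
#include <iostream>

template <typename Type>
void setVar(Type& subj, const Type& in)
{
    subj = in;
}

int main()
{
    int foo = 0;
    setVar(foo, 42);      // OK: Type is deduced as int from both arguments
    // setVar(foo, 42.0); // error: Type deduced as int from 'foo' but as double from '42.0'
    std::cout << foo << '\n'; // prints 42
}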