What C++20 change to reverse_iterator is breaking this code?

The following code compiles in C++11, C++14, and C++17, but does not compile in C++20. What change to the standard broke this code?
#include <vector>
#include <utility>

template<typename T>
struct bar
{
    typename T::reverse_iterator x;
};

struct foo
{
    bar<std::vector<std::pair<foo, foo>*>> x;
};

int main()
{
    foo f;
}
The error is quite long, but can be summarized as:
template argument must be a complete class

This was always undefined. [res.on.functions]/2.5 says:
In particular, the effects are undefined in the following cases:
[...]
If an incomplete type ([basic.types]) is used as a template argument when instantiating a template component or evaluating a concept, unless specifically allowed for that component.
std::pair does not (and cannot) support incomplete types. You were just relying on order of instantiation to kind of get around that. Something changed in the library that slightly changed the evaluation order, leading to the error. But undefined behavior is undefined - it happened to work before and it happens to not work now.
As to why it's specifically C++20 that causes this to fail: in C++20, iterators gained the new iterator_concept mechanism. In order to instantiate that, reverse_iterator needs to determine what the concept should be. In libstdc++, that looks like this:
#if __cplusplus > 201703L && __cpp_lib_concepts
      using iterator_concept
        = conditional_t<random_access_iterator<_Iterator>,
                        random_access_iterator_tag,
                        bidirectional_iterator_tag>;
      using iterator_category
        = __detail::__clamp_iter_cat<typename __traits_type::iterator_category,
                                     random_access_iterator_tag>;
#endif
Now, checking random_access_iterator entails checking the whole iterator concept hierarchy, whose root is the wonderfully named input_or_output_iterator, specified in [iterator.concept.iterator]:
template<class I>
  concept input_or_output_iterator =
    requires(I i) {
      { *i } -> can-reference;
    } &&
    weakly_incrementable<I>;
So, we have to do *i on our iterator type, which in this case is __gnu_cxx::__normal_iterator<std::pair<foo, foo>**, std::vector<std::pair<foo, foo>*>>. Now, *i triggers ADL - because of course it does. And ADL requires instantiation of all the associated types - because those associated types might have injected friends that could be candidates!
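You can observe this ADL-triggers-instantiation effect in isolation. Here is a minimal sketch (the names wrapper and poke are hypothetical) showing that an unqualified call whose argument type involves a class template specialization forces that specialization to be instantiated, precisely because its definition might contain injected friends:

template <typename T>
struct wrapper {
    static_assert(sizeof(T) > 0, "T must be complete"); // checked on instantiation
    friend void poke(wrapper*) { }                      // injected friend, found only by ADL
};

struct incomplete;

int main() {
    wrapper<int>* q = nullptr;
    poke(q); // OK: ADL instantiates wrapper<int> and finds the injected friend

    wrapper<incomplete>* p = nullptr;
    // poke(p); // would instantiate wrapper<incomplete> and fail to compile
    (void)p;
}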
In the original example, this in turn requires instantiating pair<foo, foo> - because we have to check. And that ultimately fails in this specific case because instantiating a type requires instantiating all of the type's special member functions, and the way that libstdc++ implements conditional assignment for std::pair is using Eric Fiselier's trick:
_GLIBCXX20_CONSTEXPR pair&
operator=(typename conditional<
            __and_<is_copy_assignable<_T1>,
                   is_copy_assignable<_T2>>::value,
            const pair&, const __nonesuch&>::type __p)
{
    first = __p.first;
    second = __p.second;
    return *this;
}
And is_copy_assignable requires complete types and we don't have one.
But really even if pair used concepts to check in this case, that would still involve instantiating the same type traits, so we'd ultimately end up in the same position.
Moral of the story is, undefined behavior is undefined.

Why is deleting a function necessary when you're defining a customization point object?

From libstdc++ <concepts> header:
namespace ranges
{
  namespace __cust_swap
  {
    template<typename _Tp> void swap(_Tp&, _Tp&) = delete;
From MS-STL <concepts> header:
namespace ranges {
    namespace _Swap {
        template <class _Ty>
        void swap(_Ty&, _Ty&) = delete;
I've never encountered = delete; outside of contexts where you want to prohibit calls to a copy/move constructor or assignment operator.
I was curious whether this deletion is necessary, so I commented out the = delete; part in the library like this:
// template<typename _Tp> void swap(_Tp&, _Tp&) = delete;
to see if the following test case compiles.
#include <concepts>
#include <iostream>

struct dummy {
    friend void swap(dummy& a, dummy& b) {
        std::cout << "ADL" << std::endl;
    }
};

int main()
{
    int a{};
    int b{};
    dummy c{};
    dummy d{};
    std::ranges::swap(a, b);
    std::ranges::swap(c, d); // Ok. Prints "ADL" on console.
}
Not only does it compile, it seems to behave well, calling the user-defined swap for struct dummy. So I'm wondering:
What does template<typename _Tp> void swap(_Tp&, _Tp&) = delete; exactly do in this context?
On what occasion does this break without template<typename _Tp> void swap(_Tp&, _Tp&) = delete;?
TL;DR: It's there to keep ranges::swap from calling std::swap.
This is actually an explicit requirement of the ranges::swap customization point:
S is (void)swap(E1, E2) if E1 or E2 has class or enumeration type ([basic.compound]) and that expression is valid, with overload resolution performed in a context that includes this definition:
template<class T>
void swap(T&, T&) = delete;
So what does this do? To understand the point of this, we have to remember that the ranges namespace is actually the std::ranges namespace. That's important because a lot of stuff lives in the std namespace. Including this, declared in <utility>:
template< class T >
void swap( T& a, T& b );
There's probably a constexpr and noexcept on there somewhere, but that's not relevant for our needs.
std::ranges::swap, as a customization point, has a specific way it wants you to customize it. It wants you to provide a swap function that can be found via argument-dependent lookup. Which means that ranges::swap is going to find your swap function by doing this: swap(E1, E2).
That's fine, except for one problem: std::swap exists. In the pre-C++20 days, one valid way of making a type swappable was to provide a specialization for the std::swap template. So if you called std::swap directly to swap something, your specializations would be picked up and used.
ranges::swap does not want to use those. It has one customization mechanism, and it wants you to very definitely use that mechanism, not template specialization of std::swap.
However, because std::ranges::swap lives in the std namespace, unqualified calls to swap(E1, E2) can find std::swap. To avoid finding and using that overload, the implementation poisons it by making visible a version that is deleted. So if you don't provide an ADL-visible swap for your type, you get a hard error. A proper customization is also required to be more specialized (or more constrained) than this deleted overload, so that overload resolution can prefer it.
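As a sketch of the mechanism (the names lib and swap_cpo are hypothetical; real implementations add constraints, constexpr, and noexcept):

namespace lib {
    namespace swap_cpo {
        template <typename T>
        void swap(T&, T&) = delete; // poison pill: blocks fallback to any generic swap

        struct fn {
            template <typename T>
            void operator()(T& a, T& b) const {
                // Unqualified call: ordinary lookup finds the poison pill here and
                // stops climbing, so only ADL can contribute further candidates.
                swap(a, b);
            }
        };
    }
    inline constexpr swap_cpo::fn swap{};
}

struct widget {
    friend void swap(widget&, widget&) { } // ADL-visible customization
};

int main() {
    widget a, b;
    lib::swap(a, b); // OK: widget's friend beats the deleted template
    // int x{}, y{};
    // lib::swap(x, y); // error: only the deleted overload is found
}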
Note that ranges::begin/end and similar functions have similar wording to shut down similar problems with similarly named std:: functions.
There were two motivations for the poison pill overloads; for the most part they no longer apply, but we still have the poison pills anyway.
swap / iter_swap
As described in P0370:
The Ranges TS has another customization point problem that N4381 does not cover: an implementation of the Ranges TS needs to co-exist alongside an implementation of the standard library. There’s little benefit to providing customization points with strong semantic constraints if ADL can result in calls to the customization points of the same name in namespace std. For example, consider the definition of the single-type Swappable concept:
namespace std { namespace experimental { namespace ranges { inline namespace v1 {
  template <class T>
  concept bool Swappable() {
    return requires(T&& t, T&& u) {
      (void)swap(std::forward<T>(t), std::forward<T>(u));
    };
  }
}}}}
unqualified name lookup for the name swap could find the unconstrained swap in namespace std either directly - it's only a couple of hops up the namespace hierarchy - or via ADL if std is an associated namespace of T or U. If std::swap is unconstrained, the concept is "satisfied" for all types, and effectively useless. The Ranges TS deals with this problem by requiring changes to std::swap, a practice which has historically been forbidden for TSs. Applying similar constraints to all of the customization points defined in the TS by modifying the definitions in namespace std is an unsatisfactory solution, if not an altogether untenable one.
The Ranges TS was built on C++14, where std::swap was unconstrained (std::swap didn't become constrained until P0185 was adopted for C++17), so it was important to make sure that Swappable didn't just trivially resolve to true for any type that had std as an associated namespace.
But now std::swap is constrained, so there's no need for the swap poison pill.
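Since C++17, std::swap participates in overload resolution only if the type is move constructible and move assignable. Roughly (a paraphrase of [utility.swap], not the exact library text):

template <class T>
std::enable_if_t<std::is_move_constructible_v<T> && std::is_move_assignable_v<T>>
swap(T& a, T& b) noexcept(std::is_nothrow_move_constructible_v<T>
                          && std::is_nothrow_move_assignable_v<T>);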
However, std::iter_swap is still unconstrained, so its poison pill is still needed. But that one could easily become constrained, and then we would again have no need for a poison pill.
begin / end
As described in P0970:
For the sake of compatibility with std::begin and ease of migration, std::experimental::ranges::begin accepted rvalues and treated them the same as const lvalues. This behavior was deprecated because it is fundamentally unsound: any iterator returned by such an overload is highly likely to dangle after the full expression that contained the invocation of begin.
Another problem, and one that until recently seemed unrelated to the design of begin, was that algorithms that return iterators will wrap those iterators in std::experimental::ranges::dangling<> if the range passed to them is an rvalue. This ignores the fact that for some range types - P0789's subrange<>, in particular - the iterator's validity does not depend on the range's lifetime at all. In the case where a prvalue subrange<> is passed to an algorithm, returning a wrapped iterator is totally unnecessary.
[...]
We recognized that by removing the deprecated default support for rvalues from the range access customization points, we made design space for range authors to opt-in to this behavior for their range types, thereby communicating to the algorithms that an iterator can safely outlive its range type. This eliminates the need for dangling when passing an rvalue subrange, an important usage scenario.
The paper went on to propose a design for safe invocation of begin on rvalues as a non-member function that takes, specifically, an rvalue. The existence of the:
template <class T>
void begin(T&&) = delete;
overload:
gives std2::begin the property that, for some rvalue expression E of type T, the expression std2::begin(E) will not compile unless there is a free function begin findable by ADL that specifically accepts rvalues of type T, and that overload is preferred by partial ordering over the general void begin(T&&) "poison pill" overload.
For example, this would allow us to properly reject invoking ranges::begin on an rvalue of type std::vector<int>, even though the non-member std::begin(const C&) would be found by ADL.
But this paper also says:
The author believed that to fix the problem with subrange and dangling would require the addition of a new trait to give the authors of range types a way to say whether its iterators can safely outlive the range. That felt like a hack, and that feeling was reinforced by the author's inability to pick a name for such a trait that was sufficiently succinct and clear.
Since then, this functionality has become checked by a trait - which was first called enable_safe_range (P1858) and is now called enable_borrowed_range (LWG3379). So again, the poison pill here is no longer necessary.
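For reference, here is a sketch of the opt-in that replaced the rvalue poison pill (int_span is a hypothetical borrowed range; specializing std::ranges::enable_borrowed_range is the actual C++20 mechanism):

#include <ranges>
#include <vector>

struct int_span {
    int* first = nullptr;
    int* last = nullptr;
    int* begin() const { return first; }
    int* end() const { return last; }
};

// Opt in: int_span's iterators may safely outlive the int_span object itself.
template <>
inline constexpr bool std::ranges::enable_borrowed_range<int_span> = true;

int main() {
    std::vector<int> v{1, 2, 3};
    auto it = std::ranges::begin(int_span{v.data(), v.data() + 3}); // rvalue OK: borrowed
    // auto bad = std::ranges::begin(std::vector<int>{1, 2, 3});    // error: not borrowed
    (void)it;
}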

"Materializing" an object of a known type for C++ type inference

The following code does everything I need from it in every C++11 and later compiler I have tried. So, in practice, for my purposes, it works (and is expected to work, at least on Linux, in the foreseeable future, due to historical reasons). However, from a language lawyer perspective, this code is invalid, as it contains a construct that is formally UB (a dereference of a pointer to a nonexistent object), even though this dereference is never actually executed.
#include <type_traits>
#include <iostream>

namespace n1 {
    struct int_based { int base = 0; };
    inline int get_base(int_based ib) { return ib.base; }
}

namespace n2 {
    struct double_based;
    double get_base(const double_based&);
}

template <typename T>
using base_type = decltype((get_base(*(T*)nullptr)));

int main() {
    auto isInt = std::is_same<base_type<n1::int_based>, int>::value;
    auto isDouble = std::is_same<base_type<n2::double_based>, double>::value;
    auto unlike = std::is_same<base_type<n1::int_based>, double>::value;
    std::cout << isInt << isDouble << unlike << std::endl;
    return 0;
}
The code does Koenig lookup for type mapping and infers the mapped type using function signatures that I would not want to change. In this example, the double_based type is incomplete; in my real use cases, the types are expected to be complete but are not guaranteed to be DefaultConstructible. The actual code is a part of a type-safe serialization logic.
The question is: is there a standard-compliant way of "materializing" an object of the template parameter type T for use in decltype in this code, or is it impossible to have such standard-complying type mapping without having a preconstructed object of the source type?
Replacing the function parameters with pointers to the objects is ugly and doesn't really solve the problem, as it is unclear what these functions would need to do with a nullptr argument in the general case without introducing yet more UB.
What you are looking for is std::declval. It is a function template declared to return the type you give it (it is never defined), so you can work with an object of that type in an unevaluated context. That turns
template <typename T>
using base_type = decltype((get_base(*(T*)nullptr)));
into
template <typename T>
using base_type = decltype((get_base(std::declval<T>())));
Do note that std::declval is only declared, never defined. If you try to use
T foo = std::declval<T>();
then your program is ill-formed.
Actually, this code is fine. (It is true that it is UB to dereference a null pointer and bind the result to a reference, or to dereference a null pointer and access the resulting lvalue. However, when the execution of the program does not actually evaluate these constructs, there is no UB.)
But it is true that std::declval<T>() is the preferred idiom. Unlike the null pointer trick, std::declval<T>() is "safe": if you accidentally use it in a potentially evaluated context, there will be a compile-time error. It is also much less ugly.
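One nuance worth noting: std::declval<T>() denotes an rvalue of type T, whereas the original *(T*)nullptr denoted an lvalue. If some get_base overload took a non-const lvalue reference, std::declval<T&>() would be the closer replacement. A short sketch reusing the n1/n2 declarations from the question (base_type_lv is a hypothetical name):

#include <type_traits>
#include <utility>

template <typename T>
using base_type_lv = decltype(get_base(std::declval<T&>())); // lvalue, like *(T*)nullptr

static_assert(std::is_same<base_type_lv<n1::int_based>, int>::value, "");
static_assert(std::is_same<base_type_lv<n2::double_based>, double>::value, "");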

How can I check that assignment of const_reverse_iterator to reverse_iterator is invalid?

Consider the following:
#include <vector>
#include <type_traits>

using vector_type = std::vector<int>;
using const_iterator = typename vector_type::const_iterator;
using const_reverse_iterator = typename vector_type::const_reverse_iterator;
using iterator = typename vector_type::iterator;
using reverse_iterator = typename vector_type::reverse_iterator;

int main()
{
    static_assert(!std::is_assignable_v<iterator, const_iterator>); // passes
    static_assert(!std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // fails
    static_assert(std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // passes
}
I can check that assignment of iterator{} = const_iterator{} is not valid, but not an assignment of reverse_iterator{} = const_reverse_iterator{} with this type trait.
This behavior is consistent across gcc 9.0.0, clang 8.0.0, and MSVC 19.00.23506
This is unfortunate, because the reality is that reverse_iterator{} = const_reverse_iterator{} doesn't actually compile with any of the above-mentioned compilers.
How can I reliably check such an assignment is invalid?
This behavior of the type trait implies that the expression
std::declval<reverse_iterator>() = std::declval<const_reverse_iterator>()
is well formed according to [meta.unary.prop], and this appears consistent with my own attempts at an is_assignable type trait.
The trait passes because a matching method exists, can be found via overload resolution, and is not deleted.
Actually calling it fails, because the implementation of that method contains code that isn't legal with those two types.
In C++, you cannot test whether instantiating a method will result in a compile error; you can only test for the equivalent of overload resolution finding a solution.
The C++ language and standard library originally relied heavily on "well, the method body is only compiled in a template if called, so if the body is invalid the programmer will be told". More modern C++ (both inside and outside the standard library) uses SFINAE and other techniques to make a method "not participate in overload resolution" when its body would not compile.
The converting constructor of reverse_iterator from other reverse_iterators is written in the old style, and hasn't been updated to "does not participate in overload resolution" quality.
From n4713, 27.5.1.3.1 [reverse.iter.cons]/3:
template<class U> constexpr reverse_iterator(const reverse_iterator<U>& u);
Effects: Initializes current with u.current.
Notice no mention of "does not participate in overload resolution" or similar words.
The only way to get the trait you want would be to

1. Change (well, fix) the C++ standard
2. Special-case it

I'll leave 1 as an exercise. For 2, you know that reverse iterators are templates over an underlying iterator type.
template<class... Ts>
struct my_trait : std::is_assignable<Ts...> {};

template<class T0, class T1>
struct my_trait<std::reverse_iterator<T0>, std::reverse_iterator<T1>>
    : my_trait<T0, T1>
{};
and now my_trait is is_assignable except on reverse iterators, where it instead tests assignability of the contained iterators.
(As extra fun, a reverse reverse iterator will work with this trait).
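With the aliases from the question, the intended assertion now passes:

static_assert(!my_trait<reverse_iterator, const_reverse_iterator>::value); // passes
static_assert(!my_trait<iterator, const_iterator>::value);                 // passes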
I once had to do something very similar with std::vector<T>::operator<, which also blindly called T<T and didn't SFINAE disable it if that wasn't legal.
It may also be the case that a C++ standard library implementation can make a constructor which would not compile not participate in overload resolution. This change could break otherwise well formed programs, but only by things as ridiculous as your static assert (which would flip) or things logically equivalent.
It's just a question of constraints. Or lack thereof. Here's a reduced example with a different trait:
#include <string>
#include <type_traits>

using namespace std::string_literals;

struct X {
    template <typename T>
    X(T v) : i(v) { }
    int i;
};

static_assert(std::is_constructible_v<X, std::string>); // passes
X x("hello"s); // fails
Whenever people talk about being SFINAE-friendly, this is fundamentally what they're referring to: ensuring that type traits give the correct answer. Here, X claims to be constructible from anything - but really it's not. is_constructible doesn't actually instantiate the entire construction; it only checks the validity of the expression - a surface-level check. This is the problem that enable_if and later Concepts are intended to solve.
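For comparison, here is a minimal SFINAE-constrained version (X2 is a hypothetical name) for which the trait and the actual construction agree:

#include <string>
#include <type_traits>

struct X2 {
    template <typename T,
              std::enable_if_t<std::is_constructible_v<int, T>, int> = 0>
    X2(T v) : i(v) { }
    int i;
};

static_assert(std::is_constructible_v<X2, int>);          // passes
static_assert(!std::is_constructible_v<X2, std::string>); // correctly false now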
For libstdc++ specifically, we have:
template<typename _Iter>
  _GLIBCXX17_CONSTEXPR
  reverse_iterator(const reverse_iterator<_Iter>& __x)
  : current(__x.base()) { }
There's no constraint on _Iter here, so is_constructible_v<reverse_iterator<T>, reverse_iterator<U>> is true for all pairs T, U, even if it's not actually constructible. The question uses assignable, but in this case, assignment would go through this constructor template, which is why I'm talking about construction.
Note that this is arguably a libstdc++ bug, and probably an oversight. There's even a comment that this should be constrained:
/**
 *  A %reverse_iterator across other types can be copied if the
 *  underlying %iterator can be converted to the type of @c current.
 */

std::iterator_traits divergence between libstdc++ and libc++

Given:
struct Iter {
    using value_type = int;
    using difference_type = int;
    using reference = int;
    using pointer = int;
    using iterator_category = int;
};
The following works fine with libstdc++, but fails to compile against libc++ 5.0.0:
#include <iterator>
#include <type_traits>
static_assert(
    std::is_same<
        std::iterator_traits<Iter>::iterator_category,
        Iter::iterator_category
    >::value, "");
With the error:
error: no member named 'iterator_category' in 'std::__1::iterator_traits<Iter>'
std::is_same<std::iterator_traits<Iter>::iterator_category, Iter::iterator_category>::value, "");
The static assertion succeeds if Iter::iterator_category is one of the standard input categories, e.g. std::input_iterator_tag.
IMHO it shouldn't fail, because the C++ draft states in [iterator.traits]/2:
If Iterator has valid ([temp.deduct]) member types difference_type, value_type, pointer, reference, and iterator_category, iterator_traits<Iterator> shall have the following as publicly accessible members:
using difference_type = typename Iterator::difference_type;
using value_type = typename Iterator::value_type;
using pointer = typename Iterator::pointer;
using reference = typename Iterator::reference;
using iterator_category = typename Iterator::iterator_category;
Otherwise, iterator_traits<Iterator> shall have no members by any of the above names.
Can anybody please explain whether this is an implementation bug, or why my expectations are wrong?
We also have, in [std.iterator.tags]:
It is often desirable for a function template specialization to find out what is the most specific category of its iterator argument, so that the function can select the most efficient algorithm at compile time. To facilitate this, the library introduces category tag classes which are used as compile time tags for algorithm selection. They are: input_iterator_tag, output_iterator_tag, forward_iterator_tag, bidirectional_iterator_tag and random_access_iterator_tag. For every iterator of type Iterator, iterator_traits<Iterator>::iterator_category shall be defined to be the most specific category tag that describes the iterator's behavior.
namespace std {
    struct input_iterator_tag { };
    struct output_iterator_tag { };
    struct forward_iterator_tag : public input_iterator_tag { };
    struct bidirectional_iterator_tag : public forward_iterator_tag { };
    struct random_access_iterator_tag : public bidirectional_iterator_tag { };
}
int isn't one of those tags, so iterator_traits<Iter>::iterator_category can't give you back int. I would suggest that having an invalid iterator category is simply violating the preconditions of iterator_traits - this doesn't necessarily mean that the library must fail, but it also doesn't mean that failure is a library bug.
However, these preconditions aren't spelled out as explicitly in [iterators] as they are in other parts of the library section. So I'd be inclined to suggest that both libraries are correct, but libc++'s approach of not defining any member aliases in iterator_traits<Iter> is likely better.
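libc++'s choice is also the SFINAE-friendly one: with no members defined, code can detect the absence of a usable category instead of hitting a hard error. A sketch of such a detection trait (assuming C++17's std::void_t):

#include <iterator>
#include <type_traits>

template <typename T, typename = void>
struct has_iterator_category : std::false_type { };

template <typename T>
struct has_iterator_category<T,
    std::void_t<typename std::iterator_traits<T>::iterator_category>>
    : std::true_type { };

static_assert(has_iterator_category<int*>::value, "");
// With libc++, has_iterator_category<Iter>::value for the Iter above is
// simply false; with libstdc++ it is true (with iterator_category = int).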

Determining if ::std::numeric_limits<T> is safe to instantiate

The class template ::std::numeric_limits<T> may only be instantiated for types T that can be the return value of functions, since it always defines member functions like static constexpr T min() noexcept { return T(); } (see http://www.cplusplus.com/reference/limits/numeric_limits/ for more information on the non-specialised versions in C++03 and C++11).
If T is, for example, int[2], the instantiation immediately leads to a compile-time error, since int[2] cannot be the return value of a function.
Wrapping ::std::numeric_limits in a safe version is easy, provided a way to determine whether it is safe to instantiate ::std::numeric_limits is known. Such a wrapper is needed because the problematic functions should remain accessible whenever possible.
The obvious (and obviously wrong) way of testing ::std::numeric_limits<T>::is_specialized does not work, since it requires instantiation of the problematic class template.
Is there a way to test for safety of instantiation, preferably without enumerating all known bad types? Maybe even a general technique to determine if any class template instantiation is safe?
Concerning the type trait that decides whether a type can be returned from a function, here is how I would go about it:
#include <type_traits>

template<typename T, typename = void>
struct can_be_returned_from_function : std::false_type { };

template<typename T>
struct can_be_returned_from_function<T,
    typename std::enable_if<!std::is_abstract<T>::value,
                            decltype(std::declval<T()>(), (void)0)>::type>
    : std::true_type { };
On the other hand, as suggested by Tom Knapen in the comments, you may want to use the std::is_arithmetic standard type trait to determine whether you can specialize numeric_limits for a certain type.
Per paragraph 18.3.2.1/2 of the C++11 Standard on the numeric_limits class template, in fact:
Specializations shall be provided for each arithmetic type, both floating point and integer, including bool.
The member is_specialized shall be true for all such specializations of numeric_limits.
C++11 solutions
Since T only appears as the return type of static member functions in the declarations of the unspecialised ::std::numeric_limits<T> (see C++03 18.2.1.1 and C++11 18.3.2.3), it is enough for this specific problem to ensure that using T as a return type is declaration-safe.
The reason this leads to a compile-time error is that the use of a template argument may not give rise to an ill-formed construct in the instantiation of the template specialization (C++03 14.3/6, C++11 14.3/6).
For C++11-enabled projects, Andy Prowl's can_be_returned_from_function solution works in all relevant cases (http://ideone.com/SZB2bj), but it is not easily portable to a C++03 environment, and it causes an error when instantiated with an incomplete type (http://ideone.com/k4Y25z). The solution proposed below accepts incomplete classes instead of causing an error. The current Microsoft compiler (MSVC 1700 / VS2012) also seems to dislike Andy's solution and fails to compile it.
Jonathan Wakely proposed a solution that utilizes std::is_convertible<T, T> to determine whether T can be the return value of a function. This also eliminates incomplete classes, and is easy to show correct (it is defined in C++11 to do exactly what we want). Execution shows that all cases known to be problematic (arrays, arrays of unknown length, functions, abstract classes) are correctly recognized. As a bonus, it also correctly recognizes incomplete classes, which are not allowed as template arguments to numeric_limits by the standards (see below), although they seem to cause no problems in practice, as long as no problematic functions are actually called. Test execution: http://ideone.com/zolXpp. Some current compilers (ICC 1310 and MSVC 1700, which is VS2012's compiler) generate incorrect results with this method.
Tom Knapen's is_arithmetic solution is very concise, but it requires the implementer of a type that specialises numeric_limits to also specialise is_arithmetic. Alternatively, a trait that in its base case inherits from is_arithmetic (it might be called numeric_limits_is_specialised) could be specialised in those cases, since specialising is_arithmetic itself might not be semantically correct (e.g. for a type that does not provide all basic arithmetic operators but still is a valid integer-like type).
This whitelisting approach ensures that even incomplete types are handled correctly, unless someone maliciously tries to force compilation errors.
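A minimal sketch of that whitelisting trait (numeric_limits_is_specialised is the name suggested above; big_int is a hypothetical user-defined type):

#include <type_traits>

template <typename T>
struct numeric_limits_is_specialised : std::is_arithmetic<T> { };

struct big_int { /* integer-like type that also specialises numeric_limits */ };

template <>
struct numeric_limits_is_specialised<big_int> : std::true_type { };

static_assert(numeric_limits_is_specialised<int>::value, "int");
static_assert(numeric_limits_is_specialised<big_int>::value, "big_int");
static_assert(!numeric_limits_is_specialised<int[2]>::value, "int[2]");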
Caveat
As shown by the mixed results, C++11 support remains spotty, even with current compilers, so your mileage with these solutions may vary. A C++03 solution will benefit from more consistent results and the ability to be used in projects that do not wish to switch to C++11.
Towards a robust C++03 solution
Paragraph C++11 8.3.5/8 lists the restrictions for return values:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T", the program is ill-formed. Functions shall not have a return type of type array or function, although they may have a return type of type pointer or reference to such things. There shall be no arrays of functions, although there can be arrays of pointers to functions.
and goes on in paragraph C++11 8.3.5/9:
Types shall not be defined in return or parameter types. The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
Which is pretty much the same as paragraph C++03 8.3.5/6:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T", the program is ill-formed. Functions shall not have a return type of type array or function, although they may have a return type of type pointer or reference to such things. There shall be no arrays of functions, although there can be arrays of pointers to functions. Types shall not be defined in return or parameter types. The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
Another kind of problematic types is mentioned identically in C++11 10.4/3 and C++03 10.4/3:
An abstract class shall not be used as a parameter type, as a function return type, or as the type of an explicit conversion. [...]
The problematic functions are not nested within an incomplete class type (except for ::std::numeric_limits<T> itself, which cannot be its own T), so we have four kinds of problematic values of T: arrays, functions, incomplete class types and abstract class types.
Array Types
template<typename T> struct is_array
{ static const bool value = false; };
template<typename T> struct is_array<T[]>
{ static const bool value = true; };
template<typename T, size_t n> struct is_array<T[n]>
{ static const bool value = true; };
detects the simple case of T being an array type.
Incomplete Class Types
Incomplete class types interestingly do not lead to a compilation error just from instantiation, which means either the tested implementations are more forgiving than the standard, or I am missing something.
C++03 example: http://ideone.com/qZUa1N
C++11 example: http://ideone.com/MkA0Gr
Since I cannot come up with a proper way to detect incomplete types, and even the standard specifies (C++03 17.4.3.6/2 item 5)
In particular, the effects are undefined in the following cases: [...] if an incomplete type (3.9) is used as a template argument when instantiating a template component.
Adding only the following special allowance in C++11 (17.6.4.8/2):
[...] unless specifically allowed for that component
it seems safe to assume that anybody passing incomplete types as template arguments is on their own.
A complete list of the cases where C++11 allows incomplete type parameters is quite short:
declval
unique_ptr
default_delete (C++11 20.7.1.1.1/1: "The class template default_delete serves as the default deleter (destruction policy) for the class template unique_ptr.")
shared_ptr
weak_ptr
enable_shared_from_this
Abstract Class & Function Types
Detecting functions is a bit more work than in C++11, since we do not have variadic templates in C++03. However, the above quotes on functions already contain the hint we need; functions may not be elements of arrays.
Paragraph C++11 8.3.4/1 contains the sentence
T is called the array element type; this type shall not be a reference type, the (possibly cv qualified) type void, a function type or an abstract class type.
which is also found verbatim in paragraph C++03 8.3.4/1 and will allow us to test whether a type is a function type. Detecting (cv) void and reference types is simple:
template<typename T> struct is_reference
{ static const bool value = false; };
template<typename T> struct is_reference<T&>
{ static const bool value = true; };
template<typename T> struct is_void
{ static const bool value = false; };
template<> struct is_void<void>
{ static const bool value = true; };
template<> struct is_void<void const>
{ static const bool value = true; };
template<> struct is_void<void volatile>
{ static const bool value = true; };
template<> struct is_void<void const volatile>
{ static const bool value = true; };
Using this, it is simple to write a meta function for abstract class types and functions:
template<typename T>
class is_abstract_class_or_function
{
    typedef char (&Two)[2];
    template<typename U> static char test(U(*)[1]);
    template<typename U> static Two test(...);
public:
    static const bool value =
        !is_reference<T>::value &&
        !is_void<T>::value &&
        (sizeof(test<T>(0)) == sizeof(Two));
};
Note that the following meta function may be used to distinguish between the two, should one wish to make distinct is_function and is_abstract_class traits:
template<typename T>
class is_class
{
    typedef char (&Two)[2];
    template<typename U> static char test(int (U::*));
    template<typename U> static Two test(...);
public:
    static const bool value = (sizeof(test<T>(0)) == sizeof(char));
};
Solution
Combining all of the previous work, we can construct the is_returnable meta function:
template<typename T> struct is_returnable
{ static const bool value = !is_array<T>::value && !is_abstract_class_or_function<T>::value; };
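A quick C++03-style smoke test, reusing the meta functions defined above (the static_check helper is hypothetical; only static_check<true> is defined, so instantiating static_check<false> is a compile-time error):

template<bool> struct static_check;
template<> struct static_check<true> { };

struct Abstract { virtual void f() = 0; };

static_check<is_returnable<int>::value> t1;       // scalar: returnable
static_check<!is_returnable<int[2]>::value> t2;   // array: not returnable
static_check<!is_returnable<void()>::value> t3;   // function type: not returnable
static_check<!is_returnable<Abstract>::value> t4; // abstract class: not returnable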
Execution for C++03 (gcc 4.3.2): http://ideone.com/thuqXY
Execution for C++03 (gcc 4.7.2): http://ideone.com/OR4Swf
Execution for C++11 (gcc 4.7.2): http://ideone.com/zIu7GJ
As expected, all test cases except for the incomplete class yield the correct answer.
In addition to the above test runs, this version is tested (with the exact same test program) to yield the same results w/o warnings or errors on:
MSVC 1700 (VS2012 with and w/o XP profile), 1600 (VS2010), 1500 (VS2008)
ICC Win 1310
GCC (C++03 and C++11/C++0x mode) 4.4.7, 4.6.4, 4.8.0 and a 4.9 snapshot
Restrictions for either case
Note that, while this approach in either version works for any numeric_limits implementation that does not extend upon the implementation shown in the standard, it is by no means a solution to the general problem, and in fact may theoretically lead to problems with weird but standard compliant implementations (e.g. ones which add private members).
Incomplete classes remain a problem, but it seems silly to require higher robustness goals than the standard library itself.
std::is_convertible<T, T>::value will tell you if a type can be returned from a function.
is_convertible<T1, T2> is defined in terms of a function returning a T2 converted from an expression of type T1.
#include <limits>
#include <type_traits>

struct Incomplete;
struct Abstract { virtual void f() = 0; };

template<typename T>
using is_numeric_limits_safe = std::is_convertible<T, T>;

int main()
{
    static_assert(!is_numeric_limits_safe<Incomplete>::value, "Incomplete");
    static_assert(!is_numeric_limits_safe<Abstract>::value, "Abstract");
    static_assert(!is_numeric_limits_safe<int[2]>::value, "int[2]");
}
This might not be exactly what you want, because it is safe to instantiate std::numeric_limits<Incomplete> as long as you don't call any of the functions that return by value. It's not possible to instantiate std::numeric_limits<int[2]> though.
Here's a better test (using SFINAE) which gives is_numeric_limits_safe<Incomplete>::value == true:
template<typename T>
class is_numeric_limits_unsafe
{
    struct mu { };

    template<typename U>
    static U test(int);

    template<typename U>
    static mu test(...);

public:
    typedef std::is_same<decltype(test<T>(0)), mu> type;
};

template<typename T>
struct is_numeric_limits_safe
    : std::integral_constant<bool, !is_numeric_limits_unsafe<T>::type::value>
{ };
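A quick check of the improved behavior (assuming the declarations of Incomplete and Abstract from the earlier snippet, plus #include <type_traits>):

static_assert(is_numeric_limits_safe<Incomplete>::value, "Incomplete is now considered safe");
static_assert(!is_numeric_limits_safe<Abstract>::value, "Abstract");
static_assert(!is_numeric_limits_safe<int[2]>::value, "int[2]");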