Some Background:
I'm working on putting together a templated class which, as part of template specialization, deduces a type to use for one of its members. This data member needs to support being streamed over the wire, and I'm trying to keep the system as flexible and extensible as possible (with the goal that new variants of the type can be created by modifying some high-level elements of the specialization logic without getting into the guts of the implementation code). Some of the existing usages specialize this data member to be an enum, and the streaming code supports converting this value back and forth to a 32-bit integer for transmission over the wire.
Because an enum could be defined (either implicitly or explicitly) to be backed by a different type -- the most dangerous case being a 64-bit value -- I'd like to be able to enforce that if the resolved type is an enum, its underlying type must be a 32-bit integer (more generally, I just need to enforce that it's a maximum of 32 bits, but I'll worry about that complexity once the simpler case is working).
My Attempted Solution:
Stitching together some of the tools provided by type_traits, I came up with the following:
#include <cstdint>
#include <type_traits>
using TestedType = /* Logic for deducing a type */;
constexpr bool nonEnumOrIntBacked =
!std::is_enum_v<TestedType>
|| std::is_same_v<std::underlying_type_t<TestedType>, std::int32_t>;
static_assert(nonEnumOrIntBacked,
"TestedType is an enum backed by something other than a 32-bit integer");
However, when I tried to compile this (using Visual Studio 2017 on the latest update), I was met with the error text
'TestedType': only enumeration type is allowed as an argument to compiler intrinsic type trait '__underlying_type'. Seeing this, I tried an alternative formulation using std::disjunction, which I believe should short-circuit evaluation of templates if an earlier condition evaluates to true (I've omitted the std qualifiers to make this a bit more readable):
disjunction_v<
negation<is_enum<TestedType>>,
is_same<underlying_type_t<TestedType>, int32_t>
>;
I've also tried wrapping the offending usage of underlying_type_t inside an enable_if predicated on the type being an enum, but didn't have success with that either.
My Question:
Do boolean operators in general (and std::disjunction in particular) not short-circuit evaluation of templates? On the cppreference page for std::disjunction, it states the following (emphasis mine):
Disjunction is short-circuiting: if there is a template type argument Bi with bool(Bi::value) != false, then instantiating disjunction::value does not require the instantiation of Bj::value for j > i
Reading this, I would have expected the ill-formed nature of underlying_type_t<TestedType> for some non-enum type to be irrelevant, since downstream types don't need to be considered once something upstream has been evaluated as true.
If my reading of the spec is incorrect on this point, is there another way to accomplish this check at compile-time, or will I need to add a runtime check to enforce this?
Template arguments are "evaluated" eagerly just like regular arguments. The part causing problems is underlying_type_t, but it's present in full in both versions. You need to delay that part until after you know TestedType is an enum type. This is fairly straightforward with constexpr if (live example):
template<typename T>
constexpr bool nonEnumOrIntBackedImpl() {
if constexpr (std::is_enum_v<T>) {
return std::is_same_v<std::underlying_type_t<T>, std::int32_t>;
} else {
return true;
}
}
template<typename T>
constexpr bool nonEnumOrIntBacked = nonEnumOrIntBackedImpl<T>();
Prior to C++17, one common method is tag dispatching (live example):
template<typename T>
constexpr bool nonEnumOrIntBackedImpl(std::true_type) {
return std::is_same<std::underlying_type_t<T>, std::int32_t>{};
}
template<typename T>
constexpr bool nonEnumOrIntBackedImpl(std::false_type) {
return true;
}
template<typename T>
constexpr bool nonEnumOrIntBacked = nonEnumOrIntBackedImpl<T>(std::is_enum<T>{});
chris explained why the construct failed and gave a solution. But in the spirit of exploiting every single feature of the library, here's how to use the short-circuitry
template<typename T, typename I>
struct is_underlying
{
static constexpr auto value =
std::is_same_v<std::underlying_type_t<T>, I>;
};
using TestedType = int;
constexpr bool nonEnumOrIntBacked =
std::disjunction_v<std::negation<std::is_enum<TestedType>>,
is_underlying<TestedType, std::int32_t>>;
Only naming the template needs to be well-formed; its value member is not instantiated unless the earlier condition fails.
An alternative:
template <typename T> struct identity { using type = T; };
constexpr bool nonEnumOrIntBacked =
std::is_same<
std::conditional_t<
std::is_enum_v<TestedType>,
std::underlying_type<TestedType>,
identity<void>>::type,
int
>::value
;
with conditional<cond, type1, type2>::type::type :) to delay evaluation.
The syntax that works for classes does not work for concepts:
template <class Type>
concept C = requires(Type t) {
// ...
};
template <class Type>
concept C<Type*> = requires(Type t) {
// ...
};
MSVC says for the line of the "specialization": error C7606: 'C': concept cannot be explicitly instantiated, explicitly specialized or partially specialized.
Why cannot concepts be specialized? Is there a theoretical reason?
Because it would ruin constraint normalization and subsumption rules.
As it stands now, every concept has exactly and only one definition. As such, the relationships between concepts are known and fixed. Consider the following:
template<typename T>
concept A = atomic_constraint_a<T>;
template<typename T>
concept B = atomic_constraint_a<T> && atomic_constraint_b<T>;
By C++20's current rules, B subsumes A. This is because, after constraint normalization, B includes all of the atomic constraints of A.
If we allow specialization of concepts, then the relationship between B and A now depends on the arguments supplied to those concepts. B<T> might subsume A<T> for some Ts but not other Ts.
But that's not how we use concepts. If I'm trying to write a template that is "more constrained" than another template, the only way to do that is to use a known, well-defined set of concepts. And those definitions cannot depend on the parameters to those concepts.
The compiler ought to be able to compute whether one constrained template is more constrained than another without having any template arguments at all. This is important, as having one template be "more constrained" than another is a key feature of using concepts and constraints.
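As a concrete stand-in for the A and B above (std::movable and std::copyable take the place of the placeholder atomic constraints; the function f is made up for this sketch), subsumption is exactly what lets overload resolution prefer the more constrained candidate:
#include <concepts>
template<typename T>
concept A = std::movable<T>;
template<typename T>
concept B = std::movable<T> && std::copyable<T>;   // B subsumes A
template<A T> constexpr int f(T) { return 1; }      // less constrained
template<B T> constexpr int f(T) { return 2; }      // more constrained
static_assert(f(42) == 2, "int satisfies both, so the B overload is chosen");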
Ironically, allowing specialization for concepts would break (constrained) specialization for other templates. Or at the very least, it'd make it really hard to implement.
In addition to the great answer from Nicol Bolas:
Concepts are a bit special, because they don't behave like other templated things:
13.7.9 Concept definitions
(5) A concept is not instantiated ([temp.spec]).
[Note 1: A concept-id ([temp.names]) is evaluated as an expression. A concept cannot be explicitly instantiated ([temp.explicit]), explicitly specialized ([temp.expl.spec]), or partially specialized ([temp.spec.partial]). — end note]
Due to concepts not being able to be instantiated they also can't be specialized.
I'm not sure why the standard decided not to make them specializable, given that it's easy to emulate specializations.
While you can't specialize concepts directly, there are quite a few ways you can work around the problem.
You can use any type of constant expression in a concept - so you could use a templated variable (which can be specialized) and just wrap it up into a concept - the standard does this for quite a few of its own concepts as well, e.g. std::integral, which is defined in terms of std::is_integral_v:
template<class T> struct is_integral;
// is_integral is specialized for integral types to have value == true
// and all others are value == false
template<class T>
inline constexpr bool is_integral_v = is_integral<T>::value;
template<class T>
concept integral = is_integral_v<T>;
So you could easily write a concept that has specializations like this: godbolt example
struct Foo{};
struct Bar{};
template<class T>
constexpr inline bool is_addable_v = requires(T t) {
{ t + t } -> std::same_as<T>;
};
// Specializations (could also use other requires clauses here)
template<>
constexpr inline bool is_addable_v<Foo> = true;
template<class T>
constexpr inline bool is_addable_v<T&&> = true;
template<class T>
concept is_addable = is_addable_v<T>;
int main() {
static_assert(is_addable<int>);
static_assert(is_addable<Foo>);
static_assert(!is_addable<Bar>);
static_assert(is_addable<Bar&&>);
}
Or by using a class:
template<class T>
struct is_addable_v : std::true_type {};
template<>
struct is_addable_v<struct FooBar> : std::false_type {};
template<class T>
concept is_addable = is_addable_v<T>::value;
Or even a constexpr lambda: godbolt example
// pointers must add to int
// everything else must add to double
template<class T>
concept is_special_addable = ([](){
if constexpr(std::is_pointer_v<T>)
return requires(std::remove_pointer_t<T> t) {
{ t + t } -> std::same_as<int>;
};
else
return requires(T t) {
{ t + t } -> std::same_as<double>;
};
})();
int main() {
static_assert(is_special_addable<double>);
static_assert(is_special_addable<int*>);
static_assert(!is_special_addable<double*>);
static_assert(!is_special_addable<int>);
}
So while concepts can't be specialized on their own, it's easy to achieve the same effect with existing language features.
Specialization in this sort of situation opens up a bag of worms. We opened this bag up once with template specialization. Template specialization is a major part of what makes the template language in general Turing complete. Yes, you can program in templates. You shouldn't, but you can. Boost has a library called Boost.MPL that's chock full of clever things, like an "unordered map" that operates at compile time, rather than run time.
So we would have to restrict it carefully. Simple cases may work, but complex cases would have to be forbidden. Certainly anything that is remotely capable of creating a recursive constraint would have to be watched carefully. Indeed, consider a concept:
template <typename T>
concept hailstone = false;
template <int i>
concept hailstone<std::integral_constant<int, i>> =
hailstone<2 * i> || (i % 2 == 1 && hailstone<3*i - 1>);
template <>
concept hailstone<std::integral_constant<int, 0>> = true;
so, is std::integral_constant<int, 27> a hailstone? It could take a while. My chosen example is based on hailstone numbers from the Collatz Conjecture. Determining whether any given number is a hailstone or not is painfully difficult (even though, as best as we can tell, every number is a hailstone number).
Now replace integral_constant with a clever structure which can do arbitrary precision. Now we're in trouble!
Now we can carefully slice off elements of this problem and mark them as doable. The spec community is not in that business. The Concepts we know in C++20 has been nicknamed concepts-lite because it's actually a drastically simplified version of a concepts library that never made it into C++11. That library effectively implemented a Description Logic, a class of logic that is known to be decidable. This was important because the computer had to run through all of the necessary calculations, and we didn't want them to take an infinite amount of time. Concepts is derived from this, so it follows the same rules. And, if you look in Description Logics, the way you prove many statements involves first enumerating the list of all named concepts. Once you had enumerated that, it was trivial to show that you could resolve any concept requirement in finite time.
As Nicol Bolas points out in his answer, the purpose of concepts was not to be some clever Turing complete system. It was to provide better error messages. Thus, while one might be able to cleverly slide in some specialization within carefully selected paths, there's no incentive to.
° Preamble
This question is particularly related to the helper variable templates defined by/in the STL for all the types deriving from std::integral_constant.
° Context
I am in the process of writing a compile-time only library which aims to provide as many features of the STL as possible (up to C++17 for now), while using as few post-C++11 language features as possible.
That is, everything that can be done using only C++11 features is implemented using C++11. For things that cannot be implemented like that, the library will have to provide other options...
Side Note
The purpose is to minimize the modifications needed to code produced using the library when that code has to be compiled with compilers having a reduced set of language features. I.e.: compilers in the 'embedded world' often do not provide everything one would want them to.
° Chronology
C++11 standard library came up with std::integral_constant.
This 'helper class' already defined the conversion operator to value_type.
C++14 added the call operator (operator()) to it, and introduced the 'new' language feature 'variable template'.
C++17 added std::bool_constant, though std::true_type and std::false_type had already been defined since C++11 as std::integral_constant<bool, true> and std::integral_constant<bool, false> respectively.
C++17 also added inline variable templates... There, suddenly, all the types deriving from std::integral_constant got a 'helper' variable template.
Note
I perfectly understand what is the purpose of an inline variable template.
The question here is about the usefulness of the 'helpers' defined for the types deriving from std::integral_constant.
° A Bit of Food for Thought
Now, consider the following code examples:
/* Test template using std::integral_constant<bool, false>
*/
template<typename...>
using AlwaysFalse = std::false_type;
/* Example #1
*/
template<typename T>
struct AssertAlwaysFalse {
static_assert(
AlwaysFalse<T>{},
"Instatiation and bool cast operator replaces variable template."
);
using Type = T;
};
using AlwaysFalseType = typename AssertAlwaysFalse<int>::Type;
/* Example #2
*/
constexpr auto alwaysFalseAuto = AlwaysFalse<int>{};
constexpr bool alwaysFalseBool = AlwaysFalse<int>{};
/* Example #3
*/
template<bool AlwaysF>
struct AlwaysFalseArg { static constexpr bool Result = AlwaysF; };
constexpr bool alwaysFalseArg = AlwaysFalseArg<AlwaysFalse<int>{}>::Result;
The above examples show that instantiating an std::integral_constant, where a value is expected, has the exact same effect one would obtain by using a 'helper' variable template.
This is perfectly natural. std::integral_constant defines the cast operator for value_type. This behavior is purely C++11, and was available way before inline variable template.
° Still Stands The Question
Is there even one good reason for having defined these 'helper' variable templates for all the types deriving from std::integral_constant?
° In Other Words
After the comment from @NicolBolas about:
"Why instantiating some object only to convert it into a compile-time value ?"
I realized that the main point behind the question was maybe not clear enough. So I will put it like this:
If you only had at disposal the features provided with C++11, How would you implement 'something' to provide this compile-time value ?
The main benefits are compilation speed, consistency, and convenience. I'm going to take a look at a few things here. I'll try to address both what the features are used for, and how one would implement them with only C++11 features. If you only care about the implementation ideas, skip to the bottom.
integral_constant itself:
First, we have the central object here, std::integral_constant. It defines a compile-time static constant, accessed as std::integral_constant<T, V>::value, and looks something like this (taken from cppreference):
template<class T, T v>
struct integral_constant {
static constexpr T value = v;
using value_type = T;
using type = integral_constant; // using injected-class-name
constexpr operator value_type() const noexcept { return value; }
constexpr value_type operator()() const noexcept { return value; } // since c++14
};
Now, the first thing to note is that integral_constant stores the constant's value within itself, as a compile-time constant. You can access it without instantiating an instance; furthermore, instantiating the integral_constant will typically just result in an object being created and immediately converted to the constant, doing extra work for zero benefit; it's usually better to just use integral_constant::value directly instead.
constexpr bool TraitResult = integral_constant<bool, SomeConstexprTest(param)>::value;
SFINAE, Traits, and bool_constant:
The most common use case for integral_constant, by far, is as a compile-time boolean trait, likely used for SFINAE or introspection. The vast majority of <type_traits> consists of integral_constant<bool>s, with values determined according to the trait's logic, for use as yes-or-no tests.
template<typename T>
typename std::enable_if< std::is_same<SomeType, T>::value>::type
someFunc(T&& t);
template<typename T>
typename std::enable_if< !std::is_same<SomeType, T>::value>::type
someFunc(T&& t);
C++17 supplies bool_constant with this usage in mind, as a cleaner way to create boolean constants. It allows for cleaner code by simplifying the creation of custom traits:
namespace detail {
// These lines are clean.
template<typename...> struct are_unique_helper;
template<typename T> struct are_unique_helper<T> : std::true_type {};
// This... less so.
template<typename T, typename U, typename... Ts>
struct are_unique_helper<T, U, Ts...> : std::integral_constant<
bool,
!std::is_same<T, U>::value &&
are_unique_helper<T, Ts...>::value
> {};
}
// With integral_constant<bool>.
template<typename T, typename... Ts>
struct are_unique : std::integral_constant<bool, detail::are_unique_helper<T, Ts...>::value && are_unique<Ts...>::value> {};
// With bool_constant.
template<typename T, typename... Ts>
struct are_unique : std::bool_constant<detail::are_unique_helper<T, Ts...>::value && are_unique<Ts...>::value> {};
// Either version also needs a terminating partial specialization for a single type:
template<typename T>
struct are_unique<T> : std::true_type {};
The name bool_constant conveys the same information as integral_constant<bool> with less wasted space or one less line, depending on coding style, and has the additional advantage of clearer conveyance thanks to emphasising the bool part. It's not strictly necessary, and can be easily supplied manually if your compiler doesn't support it, but it does provide a few benefits.
true_type and false_type:
These two provide specific constants for true and false; this is definitely useful, but many traits determine v with boolean logic. (See, e.g., std::is_same or are_unique above.) They're neither a be-all nor an end-all, though they can serve useful purposes such as base values for traits (as above, or as in std::is_same), or matching traits for overloading or SFINAE.
constexpr std::string_view isIntInner( std::true_type) { return "yes"; }
constexpr std::string_view isIntInner(std::false_type) { return " no"; }
template<typename T>
constexpr std::string_view isInt(T&&) {
    // decay T so that lvalue arguments (deduced as T&) still match int
    return isIntInner(std::is_same<int, std::decay_t<T>>{});
}
Helpers: Type aliases & variable templates:
To explain the reason for the variable templates, we also want to look at them alongside the helper aliases defined in C++14.
template< bool B, class T = void >
using enable_if_t = typename enable_if<B,T>::type;
template< class T, class U >
inline constexpr bool is_same_v = is_same<T, U>::value;
These mainly exist as a form of convenience, really; they're a bit faster to type, a bit cleaner to read, and require a bit less finger gymnastics. The type aliases were provided first, and the helper variables are mainly there for consistency with them.
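For example, the same constraint spelled with and without the helpers (the function names are only for illustration):
// C++11: spell out ::type and ::value, plus the disambiguating 'typename'.
template<typename T>
typename std::enable_if<std::is_integral<T>::value, T>::type
doubled_cxx11(T t) { return 2 * t; }
// C++17: the helpers hide the boilerplate.
template<typename T>
std::enable_if_t<std::is_integral_v<T>, T>
doubled_cxx17(T t) { return 2 * t; }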
How to implement these:
You mentioned that you're aiming to implement everything using C++11 features primarily. This will allow you to provide most of the above:
integral_constant: Requires only C++11 or earlier features. (constexpr and noexcept.)
bool_constant: Introduced in C++17, but requires only C++11 features. (Alias template.)
true_type and false_type: Same as integral_constant.
Introspective logic type traits: Same as integral_constant.
Helper _t aliases: Introduced in C++14, but requires only C++11 features. (Alias template.)
Helper _v variables: Requires C++14 features. (Variable template.)
It wouldn't actually be too hard to provide helper aliases and bool_constant for compilers which don't support it, as long as using templates is supported. Possibly by, e.g., providing them within your library's namespace, in a header which is only loaded on implementations that don't include the aliases, and/or which the library's consumer can enable or disable as necessary during compilation.
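A minimal sketch of that idea; the namespace mylib and the configuration macro MYLIB_STD_HAS_HELPERS are made up for illustration:
#include <type_traits>
namespace mylib {
#if !defined(MYLIB_STD_HAS_HELPERS)
    // C++17-style names, implemented with C++11 alias templates only.
    template<bool B, typename T = void>
    using enable_if_t = typename std::enable_if<B, T>::type;
    template<bool B>
    using bool_constant = std::integral_constant<bool, B>;
#else
    using std::enable_if_t;    // C++14
    using std::bool_constant;  // C++17
#endif
}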
While it would take a lot of work to emulate variable templates themselves, you do have another, more C++11-compliant option: helper functions.
template<typename T, typename U>
inline constexpr bool is_same_v() noexcept {
return std::is_same<T, U>::value;
}
Providing functions of this sort will result in code nearly as clean as the helper variable templates, which can be cleanly switched over to the official variable templates for compilers which provide them. There are a few slight differences between helper functions and helper variables, though I'm not sure if there are any use cases that would actually care about them, and the library would ideally only provide helper functions for compilers which don't themselves provide the _v variables.
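At the call site, the only visible difference from the real _v variables is a pair of parentheses:
// Helper function (C++11 emulation from above):
static_assert(is_same_v<int, int>(), "function call syntax");
// Helper variable (C++17 library):
// static_assert(std::is_same_v<int, int>, "variable syntax");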
I tried to implement a value template similar to std::is_constructible with the exception to only be true when the type is copiable in a constexpr environment (i.e. its copy constructor is constexpr qualified). I arrived at the following code:
#include <type_traits>
struct Foo {
constexpr Foo() = default;
constexpr Foo(const Foo&) = default;
};
struct Bar {
constexpr Bar() = default;
Bar(const Bar&);
};
namespace detail {
template <int> using Sink = std::true_type;
template<typename T> constexpr auto constexpr_copiable(int) -> Sink<(T(T()),0)>;
template<typename T> constexpr auto constexpr_copiable(...) -> std::false_type;
}
template<typename T> struct is_constexpr_copiable : decltype(detail::constexpr_copiable<T>(0)){ };
static_assert( is_constexpr_copiable<Foo>::value, "");
static_assert(!is_constexpr_copiable<Bar>::value, "");
Now I ask myself if this is according to the standard, since compilers seem to disagree about the output.
https://godbolt.org/g/Aaqoah
Edit (c++17 features):
While implementing the somewhat different is_constexpr_constructible_from, with C++17's new auto non-type template parameters, I once again found a difference between compilers when dereferencing a nullptr in a constexpr expression with SFINAE.
#include <type_traits>
struct Foo {
constexpr Foo() = default;
constexpr Foo(const Foo&) = default;
constexpr Foo(const Foo*f):Foo(*f) {};
};
struct Bar {
constexpr Bar() = default;
Bar(const Bar&);
};
namespace detail {
template <int> struct Sink { using type = std::true_type; };
template<typename T, auto... t> constexpr auto constexpr_constructible_from(int) -> typename Sink<(T(t...),0)>::type;
template<typename T, auto... t> constexpr auto constexpr_constructible_from(...) -> std::false_type;
}
template<typename T, auto... t> struct is_constexpr_constructible_from : decltype(detail::constexpr_constructible_from<T, t...>(0)){ };
constexpr Foo foo;
constexpr Bar bar;
static_assert( is_constexpr_constructible_from<Foo, &foo>::value, "");
static_assert(!is_constexpr_constructible_from<Foo, nullptr>::value, "");
static_assert(!is_constexpr_constructible_from<Bar, &bar>::value, "");
int main() {}
https://godbolt.org/g/830SCU
Edit: (April 2018)
Now that both compilers supposedly have support for C++17, I have found the following code to work even better (it does not require a default constructor on `T`), but only on clang. Everything is still the same but replace the namespace `detail` with the following:
namespace detail {
template <int> struct Sink {};
template <typename S> constexpr auto sink(S) -> std::true_type;
template <typename T> constexpr auto try_copy() -> Sink<(T(*(const T*)nullptr), 0)>;
template <typename T> constexpr auto constexpr_copiable(int) -> decltype(sink(std::declval<decltype(try_copy<T>())>()));
template <typename T> constexpr auto constexpr_copiable(...) -> std::false_type;
}
https://godbolt.org/g/3fB8jt
This goes very deep into parts of the standard about unevaluated context, and both compilers refuse to allow replacing `const T*` with `const T&` and using `std::declval()` instead of the `nullptr`-cast. Should I get confirmation that clang's behaviour is the accepted standardized behaviour, I will lift this version to an answer as it requires only exactly what has been asked.
Clang accepts some undefined behaviour, dereferencing nullptr, in the evaluation of an unevaluated operand of decltype.
The toughest of the challenges given here, a single function evaluating whether a constexpr constructor from const T& exists for arbitrary T, seems hardly possible in C++17. Luckily, we can get a long way without it. The reasoning for this goes as follows:
Knowing the problem space
The following restrictions are important for determining if some expression can be evaluated in a constexpr context:
To evaluate the copy constructor of T, a value of type const T& is needed. Such a value must refer to an object with active lifetime, i.e. in constexpr context it must refer to some value created in a logically enclosing expression.
In order to create this reference as a result of temporary promotion for arbitrary T, we would need to know and call a constructor, whose arguments could involve virtually arbitrary other expressions whose constexpr-ness we would need to evaluate. This looks like it requires solving the general problem of determining the constexpr-ness of general expressions, as far as I can understand. ¹
¹ Actually, if any constructor with arguments, including the copy constructor, is defined as constexpr, there must be some valid way of constructing a T, either via aggregate initialization or through a constructor. Otherwise, the program would be ill-formed, as can be determined from the requirements of the constexpr specifier (§10.1.5.5):
For a constexpr function or constexpr constructor that is neither defaulted nor a template, if no argument values exist such that an invocation of the function or constructor could be an evaluated subexpression of a core constant expression, or, for a constructor, a constant initializer for some object ([basic.start.static]), the program is ill-formed, no diagnostic required.
This might give us a tiny loophole.²
So the expression had best be an unevaluated operand (§8.2.3.1):
In some contexts, unevaluated operands appear ([expr.prim.req], [expr.typeid], [expr.sizeof], [expr.unary.noexcept], [dcl.type.simple], [temp]).
An unevaluated operand is not evaluated
Unevaluated operands are general expressions, but they cannot be required to be evaluatable at compile time, as they are not evaluated at all. Note that the parameters of a template are not part of the unevaluated expression itself but rather part of the unqualified-id naming the template type. That was part of my original confusion and of my attempts at finding a possible implementation.
Non-type template arguments are required to be constant expressions (§8.6), but this property is defined through evaluation, which we have already determined to not be generally possible (§8.6.2):
An expression e is a core constant expression unless the evaluation of e, following the rules of the abstract machine, would [highlight by myself] evaluate one of the following expressions:
Using noexcept for the unevaluated context has the same problem: the best discriminator, inferred noexcept-ness, works only on function calls which can be evaluated as a core constant expression, so the trick mentioned in this stackoverflow answer does not work.
sizeof has the same problems as decltype. Things may change with concepts.
The newly introduced if constexpr is, sadly, not an expression but a statement with an expression argument. It can therefore not help enforce the constexpr evaluatability of an expression. When the statement is evaluated, so is its expression, and we are back at the problem of creating an evaluatable const T&. Discarded statements have no influence on the process at all.
Easy possibilities first
Since the hard part is creating const T&, we simply do it for a small number of common but easily determined possibilities and leave the rest to specialization by extremely special case callers.
namespace detail {
template <int> using Sink = std::true_type;
template<typename T,bool SFINAE=true> struct ConstexprDefault;
template<typename T>
struct ConstexprDefault<T, Sink<(T{}, 0)>::value> { inline static constexpr T instance = {}; };
template<typename T> constexpr auto constexpr_copiable(int) -> Sink<(T{ConstexprDefault<T>::instance}, 0)>;
template<typename T> constexpr auto constexpr_copiable(...) -> std::false_type;
}
template<typename T>
using is_constexpr_copyable_t = decltype(detail::constexpr_copiable<T>(0));
Specializing detail::ConstexprDefault must be possible for any class type declaring a constexpr copy constructor, as seen above. Note that the argument does not hold for other compound types which don't have constructors (§6.7.2). Arrays, unions, references and enumerations need special considerations.
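For illustration, a caller-side specialization could look like this; the type Weird and its seed value 42 are made up for the sketch:
// Hypothetical type: constexpr-copyable, but not default-constructible, so the
// generic ConstexprDefault partial specialization cannot produce a seed instance.
struct Weird {
    int v;
    constexpr explicit Weird(int x) : v(x) {}
    constexpr Weird(const Weird&) = default;
};
namespace detail {
    template<>
    struct ConstexprDefault<Weird> { inline static constexpr Weird instance{42}; };
}
static_assert(is_constexpr_copyable_t<Weird>::value, "Weird is copyable at compile time");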
A 'test suite' with a multitude of types can be found on godbolt. A big thank you goes to reddit user /u/dodheim from whom I have copied it. Additional specializations for the missing compound types are left as an exercise to the reader.
² or What does this leave us with?
Evaluation failure in template arguments is not fatal. SFINAE makes it possible to cover a wide range of possible constructors. The rest of this section is purely theoretical, not nice to compilers and might otherwise be plainly stupid.
It is potentially possible to enumerate many constructors of a type using methods similar to magic_get. Essentially, use a type Ubiq pretending to be convertible to all other types to fake your way through decltype(T{ ubiq<I>()... }) where I is a parameter pack with the currently inspected initializer item count and template<size_t i> Ubiq ubiq() just builds the correct number of instances. Of course in this case the cast to T would need to be explicitly disallowed.
Why only many? As before, some constexpr constructor will exist but it might have access restrictions. This would give a false positive in our templating machine and lead to infinite search, and at some time the compiler would die :/. Or the constructor might be hidden by an overload which can not be resolved as Ubiq is too general. Same effect, sad compiler and a furious PETC (People for the ethical treatment of compilers™, not a real organization). Actually, access restrictions might be solvable by the fact that those do not apply in template arguments which may allow us to extract a pointer-to-member and [...].
I'll stop here. As far as I can tell, it's tedious and mostly unnecessary. Surely, covering possible constructor invocations up to 5 arguments will be enough for most use cases. Arbitrary T is very, very hard and we may as well wait for C++20, as template metaprogramming is once again about to change massively.
The class template ::std::numeric_limits<T> may only be instantiated for types T which can be the return value of functions, since it always defines member functions like static constexpr T min() noexcept { return T(); } (see http://www.cplusplus.com/reference/limits/numeric_limits/ for more information on the non-specialised versions in C++03 and C++11).
If T is e.g. int[2], the instantiation will immediately lead to a compile time error, since int[2] cannot be the return value of a function.
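As a minimal sketch of both cases:
#include <limits>
// Fine: numeric_limits<int> is specialised.
static_assert(std::numeric_limits<int>::is_specialized, "int");
// Ill-formed rather than merely 'false': instantiating numeric_limits<int[2]>
// declares members such as "static constexpr int[2] min() noexcept", and a
// function cannot return an array by value.
// static_assert(!std::numeric_limits<int[2]>::is_specialized, "int[2]");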
Wrapping ::std::numeric_limits with a safe version is easy - if a way to determine if it is safe to instantiate ::std::numeric_limits is known. This is necessary, since the problematic functions should be accessible if possible.
The obvious (and obviously wrong) way of testing ::std::numeric_limits<T>::is_specialized does not work, since it requires instantiation of the problematic class template.
Is there a way to test for safety of instantiation, preferably without enumerating all known bad types? Maybe even a general technique to determine if any class template instantiation is safe?
Concerning the type trait that decides whether a type can be returned for a function, here is how I would go about it:
#include <type_traits>
template<typename T, typename = void>
struct can_be_returned_from_function : std::false_type { };
template<typename T>
struct can_be_returned_from_function<T,
typename std::enable_if<!std::is_abstract<T>::value,
decltype(std::declval<T()>(), (void)0)>::type>
: std::true_type { };
On the other hand, as suggested by Tom Knapen in the comments, you may want to use the std::is_arithmetic standard type trait to determine whether you can specialize numeric_limits for a certain type.
Per paragraph 18.3.2.1/2 of the C++11 Standard on the numeric_limits class template, in fact:
Specializations shall be provided for each arithmetic type, both floating point and integer, including bool.
The member is_specialized shall be true for all such specializations of numeric_limits.
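Combining this with the wrapping idea from the question, a minimal sketch could gate the interface on std::is_arithmetic; the name safe_numeric_limits is made up here:
#include <limits>
#include <type_traits>
// Only arithmetic types forward to std::numeric_limits; everything else gets
// an empty (but safely instantiable) fallback.
template<typename T, bool = std::is_arithmetic<T>::value>
struct safe_numeric_limits { };
template<typename T>
struct safe_numeric_limits<T, true> : std::numeric_limits<T> { };
static_assert(safe_numeric_limits<int>::is_specialized, "int forwards to numeric_limits");
static_assert(sizeof(safe_numeric_limits<int[2]>) > 0, "int[2] only instantiates the fallback");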
C++11 solutions
Since T only appears as the return type of static member functions in the declarations of the unspecialised ::std::numeric_limits<T> (see C++03 18.2.1.1 and C++11 18.3.2.3), it is enough for this specific problem to ensure that doing so is declaration-safe.
The reason this leads to a compile time error is that the use of a template-argument may not give rise to an ill-formed construct in the instantiation of the template specialization (C++03 14.3/6, C++11 14.3/6).
For C++11 enabled projects, Andy Prowl's can_be_returned_from_function solution works in all relevant cases: http://ideone.com/SZB2bj , but it is not easily portable to a C++03 environment. It causes an error when instantiated with an incomplete type ( http://ideone.com/k4Y25z ). The proposed solution will accept incomplete classes instead of causing an error. The current Microsoft compiler (msvc 1700 / VS2012) seems to dislike this solution and fails to compile.
Jonathan Wakely proposed a solution that works by utilizing std::is_convertible<T, T> to determine if T can be the return value of a function. This also eliminates incomplete classes, and is easy to show correct (it is defined in C++11 to do exactly what we want). Execution shows that all cases (arrays, arrays of undefined length, functions, abstract classes) which are known to be problematic are correctly recognized. As a bonus, it also correctly recognizes incomplete classes, which are not allowed as parameters to numeric_limits by the standards (see below), although they seem to cause no problems in practice, as long as no problematic functions are actually called. Test execution: http://ideone.com/zolXpp . Some current compilers (icc 1310 and msvc 1700, which is VS2012's compiler) generate incorrect results with this method.
Tom Knapen's is_arithmetic solution is a very concise C++11 solution, but requires the implementer of a type that specialises numeric_limits to also specialise is_arithmetic. Alternatively, a type that in its base case inherits from is_arithmetic (this type might be called numeric_limits_is_specialised) might be specialised in those cases, since specialising is_arithmetic itself might not be semantically correct (e.g. for a type that does not provide all basic arithmetic operators, but still is a valid integer-like type).
This whitelisting approach ensures that even incomplete types are handled correctly, unless someone maliciously tries to force compilation errors.
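A sketch of that whitelisting trait; the BigInt type is hypothetical:
#include <type_traits>
// Defaults to the arithmetic types; user-defined numeric types opt in explicitly.
template<typename T>
struct numeric_limits_is_specialised : std::is_arithmetic<T> { };
class BigInt;  // hypothetical integer-like type that specialises numeric_limits
template<>
struct numeric_limits_is_specialised<BigInt> : std::true_type { };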
Caveat
As shown by the mixed results, C++11 support remains spotty, even with current compilers, so your mileage with these solutions may vary. A C++03 solution will benefit from more consistent results and the ability to be used in projects that do not wish to switch to C++11.
Towards a robust C++03 solution
Paragraph C++11 8.3.5/8 lists the restrictions for return values:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T", the program is ill-formed. Functions shall not have a return type of type array or function, although they may have a return type of type pointer or reference to such things. There shall be no arrays of functions, although there can be arrays of pointers to functions.
and goes on in paragraph C++11 8.3.5/9:
Types shall not be defined in return or parameter types. The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
Which is pretty much the same as paragraph C++03 8.3.5/6:
If the type of a parameter includes a type of the form "pointer to array of unknown bound of T" or "reference to array of unknown bound of T", the program is ill-formed. Functions shall not have a return type of type array or function, although they may have a return type of type pointer or reference to such things. There shall be no arrays of functions, although there can be arrays of pointers to functions. Types shall not
be defined in return or parameter types. The type of a parameter or the return type for a function definition shall not be an incomplete class type (possibly cv-qualified) unless the function definition is nested within the member-specification for that class (including definitions in nested classes defined within the class).
Another kind of problematic types is mentioned identically in C++11 10.4/3 and C++03 10.4/3:
An abstract class shall not be used as a parameter type, as a function return type, or as the type of an explicit conversion. [...]
The problematic functions are not nested within an incomplete class type (except for ::std::numeric_limits<T>, which cannot be their T), so we have four kinds of problematic values of T: arrays, functions, incomplete class types and abstract class types.
Array Types
template<typename T> struct is_array
{ static const bool value = false; };
template<typename T> struct is_array<T[]>
{ static const bool value = true; };
template<typename T, size_t n> struct is_array<T[n]>
{ static const bool value = true; };
detects the simple case of T being an array type.
Incomplete Class Types
Incomplete class types interestingly do not lead to a compilation error just from instantiation, which means either the tested implementations are more forgiving than the standard, or I am missing something.
C++03 example: http://ideone.com/qZUa1N
C++11 example: http://ideone.com/MkA0Gr
Since I cannot come up with a proper way to detect incomplete types, and even the standard specifies (C++03 17.4.3.6/2 item 5)
In particular, the effects are undefined in the following cases: [...] if an incomplete type (3.9) is used as a template argument when instantiating a template component.
Adding only the following special allowance in C++11 (17.6.4.8/2):
[...] unless specifically allowed for that component
it seems safe to assume that anybody passing incomplete types as template parameters is on their own.
A complete list of the cases where C++11 allows incomplete type parameters is quite short:
declval
unique_ptr
default_delete (C++11 20.7.1.1.1/1: "The class template default_delete serves as the default deleter (destruction policy) for the class template unique_ptr.")
shared_ptr
weak_ptr
enable_shared_from_this
Abstract Class & Function Types
Detecting functions is a bit more work than in C++11, since we do not have variadic templates in C++03. However, the above quotes on functions already contain the hint we need; functions may not be elements of arrays.
Paragraph C++11 8.3.4/1 contains the sentence
T is called the array element type; this type shall not be a reference type, the (possibly cv qualified) type void, a function type or an abstract class type.
which is also verbatim in paragraph C++03 8.3.4/1 and will allow us to test if a type is a function type. Detecting (cv) void and reference types is simple:
template<typename T> struct is_reference
{ static const bool value = false; };
template<typename T> struct is_reference<T&>
{ static const bool value = true; };
template<typename T> struct is_void
{ static const bool value = false; };
template<> struct is_void<void>
{ static const bool value = true; };
template<> struct is_void<void const>
{ static const bool value = true; };
template<> struct is_void<void volatile>
{ static const bool value = true; };
template<> struct is_void<void const volatile>
{ static const bool value = true; };
Using this, it is simple to write a meta function for abstract class types and functions:
template<typename T>
class is_abstract_class_or_function
{
typedef char (&Two)[2];
template<typename U> static char test(U(*)[1]);
template<typename U> static Two test(...);
public:
static const bool value =
!is_reference<T>::value &&
!is_void<T>::value &&
(sizeof(test<T>(0)) == sizeof(Two));
};
Note that the following meta function may be used to distinguish between the two, should one wish to make a distinct is_function and is_abstract_class
template<typename T>
class is_class
{
typedef char (&Two)[2];
template<typename U> static char test(int (U::*));
template<typename U> static Two test(...);
public:
static const bool value = (sizeof(test<T>(0)) == sizeof(char));
};
Solution
Combining all of the previous work, we can construct the is_returnable meta function:
template<typename T> struct is_returnable
{ static const bool value = !is_array<T>::value && !is_abstract_class_or_function<T>::value; };
Execution for C++03 (gcc 4.3.2): http://ideone.com/thuqXY
Execution for C++03 (gcc 4.7.2): http://ideone.com/OR4Swf
Execution for C++11 (gcc 4.7.2): http://ideone.com/zIu7GJ
As expected, all test cases except for the incomplete class yield the correct answer.
In addition to the above test runs, this version is tested (with the exact same test program) to yield the same results w/o warnings or errors on:
MSVC 1700 (VS2012 with and w/o XP profile), 1600 (VS2010), 1500 (VS2008)
ICC Win 1310
GCC (C++03 and C++11/C++0x mode) 4.4.7, 4.6.4, 4.8.0 and a 4.9 snapshot
Restrictions for either case
Note that, while this approach in either version works for any numeric_limits implementation that does not extend upon the implementation shown in the standard, it is by no means a solution to the general problem, and in fact may theoretically lead to problems with weird but standard compliant implementations (e.g. ones which add private members).
Incomplete classes remain a problem, but it seems silly to require higher robustness goals than the standard library itself.
std::is_convertible<T, T>::value will tell you if a type can be returned from a function.
is_convertible<T1, T2> is defined in terms of a function returning a T2 converted from an expression of type T1.
#include <limits>
#include <type_traits>
struct Incomplete;
struct Abstract { virtual void f() = 0; };
template<typename T>
using is_numeric_limits_safe = std::is_convertible<T, T>;
int main()
{
static_assert(!is_numeric_limits_safe<Incomplete>::value, "Incomplete");
static_assert(!is_numeric_limits_safe<Abstract>::value, "Abstract");
static_assert(!is_numeric_limits_safe<int[2]>::value, "int[2]");
}
This might not be exactly what you want, because it is safe to instantiate std::numeric_limits<Incomplete> as long as you don't call any of the functions that return by value. It's not possible to instantiate std::numeric_limits<int[2]> though.
Here's a better test (using SFINAE) which gives is_numeric_limits_safe<Incomplete>::value==true
template<typename T>
class is_numeric_limits_unsafe
{
struct mu { };
template<typename U>
static U test(int);
template<typename U>
static mu test(...);
public:
typedef std::is_same<decltype(test<T>(0)), mu> type;
};
template<typename T>
struct is_numeric_limits_safe
: std::integral_constant<bool, !is_numeric_limits_unsafe<T>::type::value>
{ };
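Reusing the Incomplete and Abstract types from the first snippet, the improved trait behaves as advertised:
static_assert( is_numeric_limits_safe<Incomplete>::value, "Incomplete is now considered safe");
static_assert(!is_numeric_limits_safe<Abstract>::value, "Abstract");
static_assert(!is_numeric_limits_safe<int[2]>::value, "int[2]");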
Say I want to do something for integral types but not chars, and I have
is_integral<T>::type and is_char<T>::type
Is it possible to write this:
integral_constant<bool, is_integral<T>::value && !is_char<T>::value>
more readably?
Those kinds of metacomputations were done even before C++11 and Boost.MPL is the heavy artillery of TMP in C++03. With it, your requirements can be expressed like so:
// in C++03:
// typedef /* compute */ result;
using result = and_<is_integral<T>, not_<is_char<T>>>;
(Where and_ and not_ are from the boost::mpl namespace.)
Notice the limited verbosity because you don't have to put up with the ::value boilerplate. Similarly, result is lazy, where you can force computing the result with either result::value (which would be true or false) or result::type (which would be, well, a type -- the documentation has all the details). Boost.MPL makes it easy not to do that though, so for instance not_<result> is enough to invert the logic, even though not_<result::type> would also work. A good thing considering that an additional typename is needed if result is dependent.
I do consider Boost.MPL of enormous help in C++03 partly because it emulates variadic templates. For instance, and_ is not restricted to two arguments. I've shied away from it in C++11 however because in most situations where I used it I now use pack expansion. Things like and_ are still useful though because it is not possible to expand a pack in arbitrary expressions, e.g. in one that involves the && logical operator. This can only be done with a logical metafunction:
// Enforce precondition: every type T must be integral
static_assert( and_<std::is_integral<T>...>::value, "Violation" );
I consider this article a good read that gives good hints to reduce the verbosity of metacomputations. It focuses on SFINAE as used for generic programming but it is applicable to TMP as well (and SFINAE can be used in TMP as well). The final example is
template <typename T,
EnableIf<is_scalable<T>, is_something_else<T>>...>
T twice(T t) { return 2*t; }
which could look like this in C++03, using facilities similar to those provided in Boost:
template<typename T>
typename enable_if<
and_<is_scalable<T>, is_something_else<T>>
, T
>::type twice(T t) { return 2*t; }
A naive C++11 version arguably looks the worst:
template<typename T>
typename std::enable_if<
is_scalable<T>::value && is_something_else<T>::value
, T
>::type twice(T t) { return 2*t; }
I've seen some that advocate moving away from the style of Boost.MPL, which favours types and type-'returning' metafunctions, towards using values and constexpr functions. This could look like:
// EnableIf alias now accepts non-type template parameters
template<typename T
, EnableIf<is_scalable<T>() && is_something_else<T>()>...>
T twice(T t) { return 2*t; }
You'd still need a constexpr function to compute e.g. the logical disjunction if T... is a pack though: EnableIf<any(is_foo<T>()...)>.
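A minimal C++11-style sketch of such an any function (the names here are just for illustration):
// Base case: an empty pack contains no true value.
constexpr bool any() { return false; }
// Recursive case: true as soon as one argument is true.
template<typename... Bs>
constexpr bool any(bool b, Bs... bs) { return b || any(bs...); }
static_assert( any(false, true, false), "at least one true");
static_assert(!any(false, false), "all false");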
Why do you need an integral constant? If you just need a compile-time constant bool then you don't have to wrap the expression.
is_integral<T>::value && !is_char<T>::value
If you need this expression in several places you can just write you own special type trait.
template<typename T>
struct is_integral_and_not_char {
static const bool value = is_integral<T>::value && !is_char<T>::value;
};
is_integral_and_not_char<T>::value
Or if you do want to conform to the UnaryTypeTrait concept then
template<typename T>
struct is_integral_and_not_char
: std::integral_constant<bool, std::is_integral<T>::value && !is_char<T>::value>
{};