Floating-point types as template parameters in C++20

According to cppreference, C++20 now supports floating-point parameters in templates.
I am, however, unable to find any compiler support information on that site as well as on others.
Current GCC trunk simply accepts it; the other compilers reject it.
I would just like to know whether this is a very low-priority feature and/or when to expect it to become commonly supported.
The only related thing I can find is:
P0732R2 Class types in non-type template parameters. Kudos if anyone could briefly explain that instead.

It seems that the real question that can be answered here is about the history of this feature, so that whatever compiler support can be understood in context.
Limitations on non-type template parameter types
People have been wanting class-type non-type template parameters for a long time. The answers there are somewhat lacking; what really makes support for such template parameters (really, of non-trivial user-defined types) complicated is their unknown notion of identity: given
struct A {/*...*/};
template<A> struct X {};
constexpr A f() {/*...*/}
constexpr A g() {/*...*/}
X<f()> xf;
X<g()> &xg=xf; // OK?
how do we decide whether X<f()> and X<g()> are the same type? For integers, the answer seems intuitively obvious, but a class type might be something like std::vector<int>, in which case we might have
// C++23, if that
using A=std::vector<int>;
constexpr A f() {return {1,2,3};}
constexpr A g() {
A ret={1,2,3};
ret.reserve(1000);
return ret;
}
and it's not clear what to make of the fact that both objects contain the same values (and hence compare equal with ==) despite having very different behavior (e.g., for iterator invalidation).
P0732 Class types in non-type template parameters
It's true that this paper first added support for class-type non-type template parameters, in terms of the new <=> operator. The logic was that classes that defaulted that operator were "transparent to comparisons" (the term used was "strong structural equality") and so programmers and compilers could agree on a definition of identity.
P1185 <=> != ==
Later it was realized that == should be separately defaultable for performance reasons (e.g., it allows an early exit for comparing strings of different lengths), and the definition of strong structural equality was rewritten in terms of that operator (which comes for free along with a defaulted <=>). This doesn't affect this story, but the trail is incomplete without it.
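As a minimal sketch of what P1185 enables (assuming the usual C++20 headers), a class can default both operators, with == permitted to take the fast path:

#include <compare>
#include <string>

struct Name {
    std::string value;
    std::strong_ordering operator<=>(const Name&) const = default;
    // == may be defaulted separately; for strings it can return false
    // immediately when the lengths differ, without any ordering work
    bool operator==(const Name&) const = default;
};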
P1714 NTTP are incomplete without float, double, and long double!
It was discovered that class-type NTTPs and the unrelated feature of constexpr std::bit_cast allowed a floating-point value to be smuggled into a template argument inside a type like std::array<std::byte,sizeof(float)>. The semantics that would result from such a trick would be that every representation of a float would be a different template argument, despite the fact that -0.0==0.0 and (given float nan=std::numeric_limits<float>::quiet_NaN();) nan!=nan. It was therefore proposed that floating-point values be allowed directly as template arguments, with those semantics, to avoid encouraging widespread adoption of such a hacky workaround.
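A hedged sketch of the trick being described (the names smuggled and to_bytes are mine, not the paper's):

#include <array>
#include <bit>
#include <cstddef>

template<std::array<std::byte, sizeof(float)> Bytes>
struct smuggled {
    static constexpr float value = std::bit_cast<float>(Bytes);
};

constexpr std::array<std::byte, sizeof(float)> to_bytes(float f) {
    return std::bit_cast<std::array<std::byte, sizeof(float)>>(f);
}

smuggled<to_bytes(0.0f)> pos;   // a different type from...
smuggled<to_bytes(-0.0f)> neg;  // ...this one, even though -0.0f == 0.0f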
At the time, there was a lot of confusion around the idea that (given template<auto> int vt;) x==y might differ from &vt<x>==&vt<y>, and the proposal was rejected as needing more analysis than could be afforded for C++20.
P1907R0 Inconsistencies with non-type template parameters
It turns out that == has a lot of problems in this area. Even enumerations (which have always been allowed as template parameter types) can overload ==, and using them as template arguments simply ignores that overload entirely. (This is more or less necessary: such an operator might be defined in some translation units and not others, or might be defined differently, or have internal linkage, etc.) Moreover, what an implementation needs to do with a template argument is canonicalize it: to compare one template argument (in, say, a call) to another (in, say, an explicit specialization) would require that the latter had somehow already been identified in terms of the former while somehow allowing the possibility that they might differ.
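For instance (a hedged illustration, not from the paper), an overloaded == on an enumeration is simply ignored when forming specializations:

#include <type_traits>

enum class E { a, b };
constexpr bool operator==(E, E) { return true; }  // perverse: claims all values are equal

template<E> struct W {};
// template-argument identity uses the values themselves, not the overload:
static_assert(!std::is_same_v<W<E::a>, W<E::b>>);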
This notion of identity already differs from == for other types as well. Even P0732 recognized that references (which can also be the type of template parameters) aren't compared with ==, since of course x==y does not imply that &x==&y. Less widely appreciated was that pointers-to-members also violate this correspondence: because of their different behavior in constant evaluation, pointers to different members of a union are distinct as template arguments despite comparing ==, and pointers-to-members that have been cast to point into a base class have similar behavior (although their comparison is unspecified and hence disallowed as a direct component of constant evaluation).
In fact, in November 2019 GCC had already implemented basic support for class-type NTTPs without requiring any comparison operator.
P1837 Remove NTTPs of class type from C++20
These incongruities were so numerous that it had already been proposed that the entire feature be postponed until C++23. In the face of so many problems in so popular a feature, a small group was commissioned to specify the significant changes necessary to save it.
P1907R1 (structural types)
These stories about template arguments of class type and of floating-point type reconverge in the revision of P1907R0 which retained its name but replaced its body with a solution to National Body comments that had also been filed on the same subject. The (new) idea was to recognize that comparisons had never really been germane, and that the only consistent model for template argument identity was that two arguments were different if there was any means of distinguishing them during constant evaluation (which has the aforementioned power to distinguish pointers-to-members, etc.). After all, if two template arguments produce the same specialization, that specialization must have one behavior, and it must be the same as would be obtained from using either of the arguments directly.
While it would be desirable to support a wide range of class types, the only ones that could be reliably supported by what was a new feature introduced (or rather rewritten) at almost the last possible moment for C++20 were those where every value that could be distinguished by the implementation could be distinguished by its clients—hence, only those that have all public members (that recursively have this property). The restrictions on such structural types are not quite as strong as those on an aggregate, since any construction process is permissible so long as it is constexpr. It also has plausible extensions for future language versions to support more class types, perhaps even std::vector<T>—again, by canonicalization (or serialization) rather than by comparison (which cannot support such extensions).
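A hedged example of a structural type under these rules (the Rational type here is hypothetical): all members are public, and a constexpr constructor is fine even though the type is not an aggregate:

struct Rational {
    int num;
    int den;
    constexpr Rational(int n, int d) : num(n), den(d) {}  // any constexpr construction is allowed
};

template<Rational R> struct Ratio {};

Ratio<Rational(1, 2)> half;  // OK in C++20: Rational is a structural type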
The general solution
This newfound understanding has no relationship to anything else in C++20; class-type NTTPs using this model could have been part of C++11 (which introduced constant expressions of class type). Support was immediately extended to unions, but the logic is not limited to classes at all; it also established that the longstanding prohibitions of template arguments that were pointers to subobjects or that had floating-point type had also been motivated by confusion about == and were unnecessary. (While this doesn't allow string literals to be template arguments for technical reasons, it does allow const char* template arguments that point to the first character of static character arrays.)
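For example, a minimal sketch of that last distinction:

template<const char*> struct Tag {};

inline constexpr char hello[] = "hello";
Tag<hello> ok;       // OK: the argument points to the first character of a static array
// Tag<"hello"> bad; // still ill-formed: string literals themselves are not allowed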
In other words, the forces that motivated P1714 were finally recognized as inevitable mathematical consequences of the fundamental behavior of templates and floating-point template arguments became part of C++20 after all. However, neither floating-point nor class-type NTTPs were actually specified for C++20 by their original proposals, complicating "compiler support" documentation.
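The resulting C++20 semantics can be illustrated with a short (hedged) example, matching what the question observed on GCC trunk:

#include <type_traits>

template<double> struct D {};

static_assert(-0.0 == 0.0);                       // the values compare equal...
static_assert(!std::is_same_v<D<-0.0>, D<0.0>>);  // ...yet they name distinct specializations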

Related

How can one implement is_aggregate without compiler magic?

C++11 provides standard <type_traits>.
Which of them are impossible to implement without compiler hooks?
Note 1: by compiler hook I mean any non-standard language feature such as __is_builtin....
Note 2: a lot of them can be implemented without hooks (see chapter 2 of C++ Template Metaprogramming and/or chapter 2 of Modern C++ Design).
Note 3: spraff's answer to a previous question cites N2984, in which some type traits carry the note "is believed to require compiler support" (thanks sehe).
I have written up a complete answer here — it's a work in progress, so I'm giving the authoritative hyperlink even though I'm cutting-and-pasting the text into this answer.
Also see libc++'s documentation on Type traits intrinsic design.
is_union
is_union queries an attribute of the class that isn't exposed through any other means;
in C++, anything you can do with a class or struct, you can also do with a union. This
includes inheriting and taking member pointers.
is_aggregate, is_literal_type, is_pod, is_standard_layout, has_virtual_destructor
These traits query attributes of the class that aren't exposed through any other means.
Essentially, a struct or class is a "black box"; the C++ language gives us no way to
crack it open and examine its data members to find out if they're all POD types, or if
any of them are private, or if the class has any constexpr constructors (the key
requirement for is_literal_type).
is_abstract
is_abstract is an interesting case. The defining characteristic of an abstract
class type is that you cannot get a value of that type; so for example it is
ill-formed to define a function whose parameter or return type is abstract, and
it is ill-formed to create an array type whose element type is abstract.
(Oddly, if T is abstract, then SFINAE will apply to T[] but not to T(). That
is, it is acceptable to create the type of a function with an abstract return type;
it is ill-formed to define an entity of such a function type.)
So we can get very close to a correct implementation of is_abstract using
this SFINAE approach:
// assumes <type_traits>: std::true_type, std::false_type, std::void_t, std::remove_cv_t
template<class T, class> struct is_abstract_impl : std::true_type {};
template<class T> struct is_abstract_impl<T, std::void_t<T[]>> : std::false_type {};
template<class T> struct is_abstract : is_abstract_impl<std::remove_cv_t<T>, void> {};
However, there is a flaw! If T is itself a template class, such as vector<T>
or basic_ostream<char>, then merely forming the type T[] is acceptable; in
an unevaluated context this will not cause the compiler to go instantiate the
body of T, and therefore the compiler will not detect the ill-formedness of
the array type T[]. So the SFINAE will not happen in that case, and we'll
give the wrong answer for is_abstract<basic_ostream<char>>.
This quirk of template instantiation in unevaluated contexts is the sole reason
that modern compilers provide __is_abstract(T).
is_final
is_final queries an attribute of the class that isn't exposed through any other means.
Specifically, the base-specifier-list of a derived class is not a SFINAE context; we can't
exploit enable_if_t to ask "can I create a class derived from T?" because if we
cannot create such a class, it'll be a hard error.
is_empty
is_empty is an interesting case. We can't just ask whether sizeof (T) == 0 because
in C++ no type is ever allowed to have size 0; even an empty class has sizeof (T) == 1.
"Emptiness" is important enough to merit a type trait, though, because of the Empty Base
Optimization: all sufficiently modern compilers will lay out the two classes
struct Derived : public T { int x; };
struct Underived { int x; };
identically; that is, they will not allocate any space in Derived for the empty
T subobject. This suggests a way we could test for "emptiness" in C++03, at least
on all sufficiently modern compilers: just define the two classes above and ask
whether sizeof (Derived) == sizeof (Underived). Unfortunately, as of C++11, this
trick no longer works, because T might be final, and the "final-ness" of a class
type is not exposed by any other means! So compiler vendors who implement final
must also expose something like __is_empty(T) for the benefit of the standard library.
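For concreteness, here is a sketch of the C++03-era trick just described; note it produces a hard error, rather than false, when T is final or not a class:

template<class T>
struct is_empty_helper {
    struct Derived : public T { int x; };
    struct Underived { int x; };
    static const bool value = sizeof(Derived) == sizeof(Underived);
};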
is_enum
is_enum is another interesting case. Technically, we could implement this type trait
by the observation that if our type T is not a fundamental type, an array type,
a pointer type, a reference type, a member pointer, a class or union, or a function
type, then by process of elimination it must be an enum type. However, this deductive
reasoning breaks down if the compiler happens to support any other types not falling
into the above categories. For this reason, modern compilers expose __is_enum(T).
A common example of a supported type not falling into any of the above categories
would be __int128_t. libc++ actually detects the presence of __int128_t and includes
it in the category of "integral types" (which makes it a "fundamental type" in the above
categorization), but our simple implementation does not.
Another example would be vector int, on compilers supporting Altivec vector extensions;
this type is more obviously "not integral" but also "not anything else either", and most
certainly not an enum type!
is_trivially_constructible, is_trivially_assignable
The triviality of construction, assignment, and destruction are all attributes of the
class that aren't exposed through any other means. Notice that with this foundation
we don't need any additional magic to query the triviality of default construction,
copy construction, move assignment, and so on. Instead,
is_trivially_copy_constructible<T> is implemented in terms of
is_trivially_constructible<T, const T&>, and so on.
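That layering looks roughly like this (a sketch, assuming the builtin-backed std::is_trivially_constructible):

#include <type_traits>

template<class T>
struct is_trivially_copy_constructible
    : std::is_trivially_constructible<T, const T&> {};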
is_trivially_destructible
For historical reasons, the name of this compiler builtin is not __is_trivially_destructible(T)
but rather __has_trivial_destructor(T). Furthermore, it turns out that the builtin
evaluates to true even for a class type with a deleted destructor! So we first need
to check that the type is destructible; and then, if it is, we can ask the magic builtin
whether that destructor is indeed trivial.
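A rough sketch of that two-step check, assuming a compiler that provides the __has_trivial_destructor intrinsic:

#include <type_traits>

template<class T>
struct is_trivially_destructible
    : std::integral_constant<bool,
          std::is_destructible<T>::value && __has_trivial_destructor(T)> {};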
underlying_type
The underlying type of an enum isn't exposed through any other means. You can get close
by taking sizeof(T) and comparing it to the sizes of all known types, and by asking
for the signedness of the underlying type via T(-1) < T(0); but that approach still
cannot distinguish between underlying types int and long on platforms where those
types have the same width (nor between long and long long on platforms where those
types have the same width).
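The signedness half of that approximation can be sketched like so (hedged: for enums without a fixed underlying type, E(-1) may not even be well-defined):

template<class E>
struct underlying_is_signed {
    static const bool value = E(-1) < E(0);  // -1 converts to a negative value only if signed
};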
Per the latest Boost documentation, which is also a bit old but I think still valid:
Support for Compiler Intrinsics
There are some traits that cannot be implemented within the current C++ language: to make these traits "just work" with user-defined types, some kind of additional help from the compiler is required. Currently (April 2008) Visual C++ 8 and 9, GNU GCC 4.3 and MWCW 9 provide at least some of the necessary intrinsics, and other compilers will no doubt follow in due course.
The following traits classes always need compiler support to do the right thing for all types (but all have safe fallback positions if this support is unavailable):
is_union
is_pod
has_trivial_constructor
has_trivial_copy
has_trivial_move_constructor
has_trivial_assign
has_trivial_move_assign
has_trivial_destructor
has_nothrow_constructor
has_nothrow_copy
has_nothrow_assign
has_virtual_destructor
The following traits classes can't be portably implemented in the C++ language, although in practice, the implementations do in fact do the right thing on all the compilers we know about:
is_empty
is_polymorphic
The following traits classes are dependent on one or more of the above:
is_class

Practical meaning of std::strong_ordering and std::weak_ordering

I've been reading a bit about C++20's consistent comparison (i.e. operator<=>) but couldn't understand the practical difference between std::strong_ordering and std::weak_ordering (same goes for the _equality versions, for that matter).
Other than being very descriptive about the substitutability of the type, does it actually affect the generated code? Does it add any constraints for how one could use the type?
Would love to see a real-life example that demonstrates this.
Does it add any constraints for how one could use the type?
One very significant constraint (which wasn't intended by the original paper) was the adoption of the significance of strong_ordering by P0732 as an indicator that a class type can be used as a non-type template parameter. weak_ordering isn't sufficient for this case due to how template equivalence has to work. This is no longer the case, as non-type template parameters no longer work this way (see P1907R0 for explanation of issues and P1907R1 for wording of the new rules).
Generally, it's possible that some algorithms simply require weak_ordering but other algorithms require strong_ordering, so being able to annotate that on the type might mean a compile error (insufficiently strong ordering provided) instead of simply failing to meet the algorithm's requirements at runtime and hence just being undefined behavior. But all the algorithms in the standard library and the Ranges TS that I know of simply require weak_ordering. I do not know of one that requires strong_ordering off the top of my head.
Does it actually affect the generated code?
Outside of the cases where strong_ordering is required, or an algorithm explicitly chooses different behavior based on the comparison category, no.
There really isn't any reason to have std::weak_ordering. It's true that the standard describes operations like sorting in terms of a "strict" weak order, but there's an isomorphism between strict weak orderings and a totally ordered partition of the original set into incomparability equivalence classes. It's rare to encounter generic code that is interested both in the order structure (which considers each equivalence class to be one "value") and in some possibly finer notion of equivalence: note that when the standard library uses < (or <=>) it does not use == (which might be finer).
The usual example for std::weak_ordering is a case-insensitive string, since for instance printing two strings that differ only by case certainly produces different behavior despite their equivalence (under any operator). However, lots of types can have different behavior despite being ==: two std::vector<int> objects, for instance, might have the same contents and different capacities, so that appending to them might invalidate iterators differently.
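A hedged sketch of that classic example (CiString is hypothetical): two strings differing only in case are equivalent under <=>, yet printing them produces different output:

#include <algorithm>
#include <cctype>
#include <compare>
#include <cstddef>
#include <string>

struct CiString {
    std::string s;

    friend std::weak_ordering operator<=>(const CiString& a, const CiString& b) {
        std::size_t n = std::min(a.s.size(), b.s.size());
        for (std::size_t i = 0; i != n; ++i) {
            int x = std::tolower((unsigned char)a.s[i]);
            int y = std::tolower((unsigned char)b.s[i]);
            if (x != y) return x < y ? std::weak_ordering::less
                                     : std::weak_ordering::greater;
        }
        if (a.s.size() != b.s.size())
            return a.s.size() < b.s.size() ? std::weak_ordering::less
                                           : std::weak_ordering::greater;
        return std::weak_ordering::equivalent;
    }
};

// CiString{"abc"} and CiString{"ABC"} are equivalent, but observably different.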
The simple fact is that the "equality" implied by std::strong_ordering::equivalent but not by std::weak_ordering::equivalent is irrelevant to the very code that stands to benefit from it, because generic code doesn't depend on the implied behavioral changes, and non-generic code doesn't need to distinguish between the ordering types because it knows the rules for the type on which it operates.
The standard attempts to give the distinction meaning by talking about "substitutability", but that is inevitably circular because it can sensibly refer only to the very state examined by the comparisons. This was discussed prior to publishing C++20, but (perhaps for the obvious reasons) not much of the planned further discussion has taken place.

C++: are "trait" and "meta-function" synonymous?

Or, does trait perhaps refer to a specific way of utilizing meta-functions?
If they are not synonymous, please point me to some examples of traits which are not meta-functions or those of meta-functions which are not traits. An actually working piece of code, perhaps within STL or Boost libraries, would be appreciated rather than a contrived toy example.
I'd like to see how experts in this field of C++ programming use these terminologies. I'm not sure if there are authoritative definitions of them...
Thanks in advance!
Clarification: It's not that I'm looking for just any examples of traits or meta-functions. I've been using tens (if not hundreds) of them at my day job.
"Venn0110". Licensed under Public Domain via Commons.
"Meta" is C++ terminology for template programming. A clear example is the Boost Meta Programming Library (MPL).
In this sense, a meta-function is a function whose domain is not objects but C++ constructs. The common inputs are therefore types, ordinary functions and other templates.
A simple meta-function is for example template<typename T> using Foo = Bar<T, int> which has as its input a type T and as its output a type Foo<T>. Trivial, yes, but ordinary functions can be trivial too.
A trait is a metafunction whose codomain is a constant expression, often boolean. E.g. is_foo<T>::value obviously is boolean. The oldest trait in this sense is sizeof(T) whose codomain is size_t.
The terms are not equivalent.
Meta functions
The term is not defined in the C++ Standard.
There seem to be two main contending definitions:
1) "meta-functions derive/calculate/map-to values or types (based on their arguments/parameters) at compile time", and/or
constexpr functions may or may not be included in any given person's definition/conception; there's functional overlap, but much existing writing mentioning meta functions predates or just doesn't also discuss constexpr functions, so it's hard to know whether any definition or examples given deliberately leaves room for or excludes them
2) "meta-functions are functions and/or classes utilising template meta-programming", which specifically means a core of code (the template) may be reused for different parameters, performing some code generation or transformation based thereon
it's not necessarily the case that a meta-programming's final product is a compile-time type or constant - it may be a function intended for use with runtime values or data
A small survey of top google results for "meta function c++"
"metafunctions accept types and compile-time constants as parameters and return types/constants. A metafunction, contrary to its name, is a class template."
"code...executed and its result used while...compiling. ...meta-functions can compute two things: values or types"
this article uses "meta function" to refer to class templates yielding compile-time values/types
Traits
The C++ Standard doesn't define "traits", but does define "traits class"es:
17.3.25 [defns.traits] traits class
a class that encapsulates a set of types and functions necessary for class templates and function templates to manipulate objects of types for which they are instantiated
[ Note: Traits classes defined in Clauses 21, 22 and 27 are character traits, which provide the character handling support needed by the string and iostream classes. —end note ]
I'd add that traits can have values, as well as types and functions - for example rank exposes ::value. Traits provide types/values offering insight into either the parameter type itself, or the behaviour desired of the system using the trait when that system's working on variables of that type.
The Standard Library's character traits contain example of runtime functionality: for example X::length(p), X::find(p, n, c).
The <type_traits> header in the C++ Standard Library is a good place to get an initial feel for what kind of things traits can be used for.
Traits are traditionally and typically classes (though now that C++11 provides constexpr functions, not necessarily), as distinct from functions, meta- or otherwise.
A trait might tell you if a parameter T is constant or not, whether it's serialisable, or whether it should be compressed when transmitted via TCP: whatever some other general-purpose code might need to customise its behaviour for the range of types it handles.
Sometimes traits will be provided by the system that uses them - other times they can be supplied by client code wishing to customise its behaviour on a per-type basis.
Traits may deduce things using various tests of the parameter types, or they may be hand-crafted specialisations hard-coding values for specific types.
This ACCU introduction to traits (http://accu.org/index.php/journals/442) is worth reading, and includes this quote from the creator of C++:
Think of a trait as a small object whose main purpose is to carry information used by another object or algorithm to determine "policy" or "implementation details". - Bjarne Stroustrup
Contrasting "meta function" with "traits"
Regardless of which definition of meta function you adopt, the criteria relate to implementation details: when the function can be evaluated, and/or whether it involves code generation/transformation. That contrasts with traits, where the key concept/requirement is the purpose to which they're put, not the common - but far from universal - implementation using template specialisations or instantiations, or any capacity for compile-time vs run-time evaluation.
A metafunction (in the context of C++) is a function that produces a result that can be used at compile time. The result can either be types (which can only be obtained at compile time by definition, using techniques such as partial specialisation) or values (which can be computed at compile time, using the fact that template parameters can be integral values, not just types).
A trait is essentially a class (or object) that packages up information (policy contraints, type characteristics, implementation details) for use by another class (or object) at compile time. The information packaged up can consist of type information (e.g. typedefs) or properties of interest. For example, std::numeric_limits<T> (available through <limits>) is a "type trait" which provides information about arithmetic types (e.g. is T an integral type? is the set of values a T can represent bounded or finite? is T signed? etc). The information is packaged in a form so that metaprograms (i.e. template functions or classes) can use the information at compile time. For example, a template metafunction might use information from a type trait to ensure the template is only instantiated for unsigned integral types, and trigger a compilation error if an attempt is made to instantiate the template for other types.
In this sense, a trait is a type of metafunction that provides information at compile time for use by other metafunctions.
As far as I know, there aren't any formal definitions of these terms. So I'll provide the ones that I use.
Metafunctions are templates which return a type. Usually, the returned result is a type which contains a member type named "type" so it can be accessed via the suffix ::type. Example invoking a metafunction:
using new_type = std::common_type<int, char>::type;
(see http://en.cppreference.com/w/cpp/types/common_type)
A trait would be a special type of metafunction which returns a result convertible to bool. Example:
bool ic = std::is_convertible<char, int>::value;
Note that nowadays, the requirement that a metafunction expose its result via a member named "type" (or "value") is relaxed, so some metafunctions might be defined in such a manner that the suffix can be dropped. Example:
bool ic = std::experimental::is_convertible_v<char, int>;

Specializing std::optional

Will it be possible to specialize std::optional for user-defined types? If not, is it too late to propose this to the standard?
My use case for this is an integer-like class that represents a value within a range. For instance, you could have an integer that lies somewhere in the range [0, 10]. Many of my applications are sensitive to even a single byte of overhead, so I would be unable to use a non-specialized std::optional due to the extra bool. However, a specialization for std::optional would be trivial for an integer that has a range smaller than its underlying type. We could simply store the value 11 in my example. This should provide no space or time overhead over a non-optional value.
Am I allowed to create this specialization in namespace std?
The general rule in 17.6.4.2.1 [namespace.std]/1 applies:
A program may add a template specialization for any standard library template to namespace std only if the declaration depends on a user-defined type and the specialization meets the standard library requirements for the original template and is not explicitly
prohibited.
So I would say it's allowed.
N.B. optional will not be part of the C++14 standard, it will be included in a separate Technical Specification on library fundamentals, so there is time to change the rule if my interpretation is wrong.
If you are after a library that efficiently packs the value and the "no-value" flag into one memory location, I recommend looking at compact_optional. It does exactly this.
It does not specialize boost::optional or std::experimental::optional but it can wrap them inside, giving you a uniform interface, with optimizations where possible and a fallback to 'classical' optional where needed.
I've asked about the same thing, regarding specializing optional<bool> and optional<tribool> among other examples, to use only one byte. While the "legality" of doing such things was not under discussion, I do think that one should not, in theory, be allowed to specialize optional<T>, in contrast to e.g. hash (which is explicitly allowed).
I don't have the logs with me, but part of the rationale is that the interface treats access to the data as access to a pointer or reference, meaning that if you use a different data structure in the internals, some of the invariants of access might change; not to mention that providing the interface with access to the data might require something like reinterpret_cast<some_reference_type>. Using a uint8_t to store an optional<bool>, for example, would impose several extra requirements on the interface of optional<bool> that are different from the ones of optional<T>. What should the return type of operator* be, for example?
Basically, I'm guessing the idea is to avoid the whole vector<bool> fiasco again.
In your example, it might not be too bad, as the access type is still your_integer_type& (or pointer). But in that case, simply designing your integer type to allow for a "zombie" or "undetermined" value instead of relying on optional<> to do the job for you, with its extra overhead and requirements, might be the safest choice.
Make it easy to opt-in to space savings
I have decided that this is a useful thing to do, but a full specialization is a little more work than necessary (for instance, getting operator= correct).
I have posted on the Boost mailing list a way to simplify the task of specializing, especially when you only want to specialize some instantiations of a class template.
http://boost.2283326.n4.nabble.com/optional-Specializing-optional-to-save-space-td4680362.html
My current interface involves a special tag type used to 'unlock' access to particular functions. I have creatively named this type optional_tag. Only optional can construct an optional_tag. For a type to opt-in to a space-efficient representation, it needs the following member functions:
T(optional_tag) constructs an uninitialized value
initialize(optional_tag, Args && ...) constructs an object when there may be one in existence already
uninitialize(optional_tag) destroys the contained object
is_initialized(optional_tag) checks whether the object is currently in an initialized state
By always requiring the optional_tag parameter, we do not limit any function signatures. This is why, for instance, we cannot use operator bool() as the test, because the type may want that operator for other reasons.
An advantage of this over some other possible methods of implementing it is that you can make it work with any type that can naturally support such a state. It does not add any requirements such as having a move constructor.
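A hedged sketch of this interface (the small_int type and its sentinel value 11 are hypothetical, echoing the question's [0, 10] range; the real implementation is in the links below):

template<typename T> class optional;  // forward declaration for the friend below

class optional_tag {
    optional_tag() = default;
    template<typename U> friend class optional;  // only optional can construct one
};

class small_int {
public:
    explicit small_int(int v) : value_(v) {}
    small_int(optional_tag) : value_(11) {}                          // uninitialized state
    void initialize(optional_tag, int v) { value_ = v; }             // may overwrite an existing value
    void uninitialize(optional_tag) { value_ = 11; }                 // destroy the contained value
    bool is_initialized(optional_tag) const { return value_ != 11; }
private:
    int value_;  // 11 is one past the valid range [0, 10], so it can mark "empty"
};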
You can see a full code implementation of the idea at
https://bitbucket.org/davidstone/bounded_integer/src/8c5e7567f0d8b3a04cc98142060a020b58b2a00f/bounded_integer/detail/optional/optional.hpp?at=default&fileviewer=file-view-default
and for a class using the specialization:
https://bitbucket.org/davidstone/bounded_integer/src/8c5e7567f0d8b3a04cc98142060a020b58b2a00f/bounded_integer/detail/class.hpp?at=default&fileviewer=file-view-default
(lines 220 through 242)
An alternative approach
This is in contrast to my previous implementation, which required users to specialize a class template. You can see the old version here:
https://bitbucket.org/davidstone/bounded_integer/src/2defec41add2079ba023c2c6d118ed8a274423c8/bounded_integer/detail/optional/optional.hpp
and
https://bitbucket.org/davidstone/bounded_integer/src/2defec41add2079ba023c2c6d118ed8a274423c8/bounded_integer/detail/optional/specialization.hpp
The problem with this approach is that it is simply more work for the user. Rather than adding four member functions, the user must go into a new namespace and specialize a template.
In practice, all specializations would have an in_place_t constructor that forwards all arguments to the underlying type. The optional_tag approach, on the other hand, can just use the underlying type's constructors directly.
In the specialize optional_storage approach, the user also has the responsibility of adding proper reference-qualified overloads of a value function. In the optional_tag approach, we already have the value so we do not have to pull it out.
optional_storage also required standardizing as part of the interface of optional two helper classes, only one of which the user is supposed to specialize (and sometimes delegate their specialization to the other).
The difference between this and compact_optional
compact_optional is a way of saying "Treat this special sentinel value as the type being not present, almost like a NaN". It requires the user to know that the type they are working with has some special sentinel. An easily specializable optional is a way of saying "My type does not need extra space to store the not present state, but that state is not a normal value." It does not require anyone to know about the optimization to take advantage of it; everyone who uses the type gets it for free.
The future
My goal is to get this first into boost::optional, and then part of the std::optional proposal. Until then, you can always use bounded::optional, although it has a few other (intentional) interface differences.
I don't see how allowing or not allowing some particular bit pattern to represent the unengaged state falls under anything the standard covers.
If you were trying to convince a library vendor to do this, it would require an implementation, exhaustive tests to show you haven't inadvertently blown any of the requirements of optional (or accidentally invoked undefined behavior) and extensive benchmarking to show this makes a notable difference in real world (and not just contrived) situations.
Of course, you can do whatever you want to your own code.

What are the differences between concepts and template constraints?

I want to know what are the semantic differences between the C++ full concepts proposal and template constraints (for instance, constraints as appeared in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing than template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about a type:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
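For instance, in the concepts-lite syntax of the time (accepted by the special GCC build mentioned below; C++20 eventually settled on a different spelling), a constraint can be defined directly in terms of a type trait:

#include <type_traits>

template<typename T>
concept bool Floating_point() { return std::is_floating_point<T>::value; }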
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are:
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
You can try all this out with a special concepts-lite build of GCC.
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
Another example is a Greater axiom. It's all well and good to say that two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that whenever x is greater than y, y is less than x. The proposed specification of such an axiom looks like:
template<typename T>
axiom Greater(T x, T y) {
(x>y) == (y<x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, then a new object created by copy construction satisfies T{x} == y, and still x == y (that is, copying is non-destructive).
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
requires constraint Less<T>;
requires axiom Strict_total_order<less<T>, T>;
requires axiom Greater<T>;
requires axiom Less_equal<T>;
requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, the Ordered requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructable, destructable, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructable, destructable, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of constrained type that it shouldn't do, the compiler has no way to diagnose that. A fully featured concepts proposal would be able to do this.
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter .
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementation. I.e., concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, in the concepts-heavy proposal are types that offer no less and no more than what is needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntactic trickery supported by the compiler. No changes to the lookup rules.
Programmers are not required to write concept maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).