Why cannot C type-generic expressions be compatible with C++?

I seem to recall hearing vague comments from a few reliable sources (i.e. committee members speaking in non-official channels) that C type-generic expressions will not be added to C++ because they cannot be.
As far as I can tell, type-generic expressions are very limited compared to C++ templates and overloading, and I don't see any potential interaction with those features that would need to be defined as a special case.
A type-generic expression consists of a controlling expression and a series of "associations" between types and subexpressions. A subexpression is chosen based on the static type of the controlling expression and the types listed in the associations, and it is substituted in place of the TGE. Matching is based on the C notion of type compatibility, which as far as I can tell is equivalent to C++ identity of types with external linkage under the one-definition rule (ODR).
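For concreteness, here is a minimal sketch of the C11 form (Clang also accepts _Generic in C++ as an extension; the macro name is mine):

#include <stdio.h>

// Selects one association based on the static type of the controlling
// expression; the branches that are not chosen are never evaluated.
#define type_name(x) _Generic((x), \
    int: "int",                    \
    double: "double",              \
    default: "something else")

int main(void) {
    printf("%s %s\n", type_name(1), type_name(1.0)); // prints: int double
    return 0;
}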
It would be nice if a derived class controlling expression would select a base class association in C++, but since C doesn't have inheritance, such nicety isn't needed for cross-compatibility. Is this considered to be a stumbling block anyway?
Edit: As for more specific details, C11 already provides for preserving the value category (lvalue-ness) of the chosen subexpression, and seems to require that the TGE is a constant expression (of whatever category) as long as all its operands are, including the controlling expression. This is probably a C language defect. In any case, C++14 defines constant expressions in terms of what may be potentially evaluated, and the TGE spec already says that non-chosen subexpressions are unevaluated.
The point is that the TGE principle of operation seems simple enough to be transplanted without ever causing trouble later.
As for why C++ TGE's would be useful, aside from maximizing the intersection of C and C++, they could be used to essentially implement static_if, sans the highly controversial conditional declaration feature. I'm not a proponent of static_if, but "there is that."
template< typename t >
void f( t q ) {
    // Hypothetical C++ usage: _Generic is not standard C++, so this imagines
    // how it could interact with templates. Note the parentheses around
    // (sizeof q >= 4); without them the first > would close the template
    // argument list.
    auto is_big = _Generic( std::integral_constant< bool, ( sizeof q >= 4 ) >(),
        std::true_type: std::string( "whatta whopper" ),
        std::false_type: "no big deal"
    );
    // Even more speculative: a type as the controlling operand.
    auto message = _Generic( t, double: "double", int: 42, default: t );
    std::cout << message << " " << is_big << '\n';
}

"Cannot be" is perhaps too strong. "Uncertain whether it's possible" is more likely.
If this feature were to be added to C++, it should be fully specified, including all interactions with existing C++ features, many of which were not designed with _Generic in mind. E.g. typeid doesn't exist in C, but should work in C++. Can you use a _Generic expression as a constexpr? As a template parameter? Is it an lvalue? Can you take its address and assign that to a function pointer, and get overload resolution? Can you do template argument deduction? Can you use it with auto? Etc.
Another complication is that the primary use case for _Generic is macros, which don't play nicely with C++ namespaces. The acos example from tgmath.h is a clear example. C++ already banned C's standard macros and required them to be functions, but that won't work with tgmath.h.
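For reference, tgmath.h-style dispatch is built on macros roughly like this sketch (my_acos is a made-up name, not the actual implementation):

#include <math.h>

// Picks the suffixed math function based on the argument's type. A macro
// like this cannot live in a namespace, which is the clash with C++
// described above.
#define my_acos(x) _Generic((x), \
    float: acosf,                \
    long double: acosl,          \
    default: acos)(x)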

Related

In OCaml Menhir, how to write a parser for C++/Rust/Java-style generics

In C++, a famous parsing ambiguity happens with code like
x<T> a;
That is, if T is a type, it is what it looks like (a declaration of a variable a of type x<T>); otherwise it is (x < T) > a (where < and > are comparison operators, not angle brackets).
In fact, we could make a change to render this unambiguous: make < and > non-associative, so that x < T > a, without brackets, would not be a valid sentence even if x, T and a were all variable names.
How could one resolve this conflict in Menhir? At first glance it seems we just can't. Even with the aforementioned modification, we need to lookahead an indeterminate number of tokens before we see another closing >, and conclude that it was a template instantiation, or otherwise, to conclude that it was an expression. Is there any way in Menhir to implement such an arbitrary lookahead?
Different languages (including the ones listed in your title) actually have very different rules for templates/generics (like what type of arguments there can be, where templates/generics can appear, when they are allowed to have an explicit argument list and what the syntax for template/type arguments on generic methods is), which strongly affect the options you have for parsing. In no language that I know is it true that the meaning of x<T> a; depends on whether T is a type.
So let's go through the languages C++, Java, Rust and C#:
In all four of those languages both types and functions/methods can be templates/generic. So we'll not only have to worry about an ambiguity with variable declarations, but also function/method calls: is f<T>(x) a function/method call with an explicit template/type argument or is it two relational operators with the last operand parenthesized? In all four languages template/generic functions/methods can be called without template/type when those can be inferred, but that inference isn't always possible, so just disallowing explicit template/type arguments for function/method calls is not an option.
Even if a language does not allow relational operators to be chained, we could get an ambiguity in expressions like this: f(a<b, c, d>(e)). Is this calling f with the three arguments a<b, c and d>(e), or with the single argument a<b, c, d>(e), calling a function/method named a with the type/template arguments b, c, d?
Now beyond this common foundation, most everything else is different between these languages:
Rust
In Rust the syntax for a variable declaration is let variableName: type = expr;, so x<T> a; couldn't possibly be a variable declaration because that doesn't match the syntax at all. In addition it's also not a valid expression statement (anymore) because comparison operators can't be chained (anymore).
So there's no ambiguity here or even a parsing difficulty. But what about function calls? For function calls, Rust avoided the ambiguity by simply choosing a different syntax to provide type arguments: instead of f<T>(x) the syntax is f::<T>(x). Since type arguments for function calls are optional when they can be inferred, this ugliness is thankfully not necessary very often.
So in summary: let a: x<T> = ...; is a variable declaration, f(a<b, c, d>(e)); calls f with three arguments and f(a::<b, c, d>(e)); calls a with three type arguments. Parsing is easy because all of these are sufficiently different to be distinguished with just one token of lookahead.
Java
In Java x<T> a; is in fact a valid variable declaration, but it is not a valid expression statement. The reason for that is that Java's grammar has a dedicated non-terminal for expressions that can appear as an expression statement and applications of relational operators (or any other non-assignment operators) are not matched by that non-terminal. Assignments are, but the left side of assignment expressions is similarly restricted. In fact, an identifier can only be the start of an expression statement if the next token is either a =, ., [ or (. So an identifier followed by a < can only be the start of a variable declaration, meaning we only need one token of lookahead to parse this.
Note that when accessing static members of a generic class, you can and must refer to the class without type arguments (i.e. FooClass.bar(); instead of FooClass<T>.bar()), so even in that case the class name would be followed by a ., not a <.
But what about generic method calls? Something like y = f<T>(x); could still run into the ambiguity because relational operators are of course allowed on the right side of =. Here Java chooses a similar solution as Rust by simply changing the syntax for generic method calls. Instead of object.f<T>(x) the syntax is object.<T>f(x) where the object. part is non-optional even if the object is this. So to call a generic method with an explicit type argument on the current object, you'd have to write this.<T>f(x);, but like in Rust the type argument can often be inferred, allowing you to just write f(x);.
So in summary x<T> a; is a variable declaration and there can't be expression statements that start with relational operations; in general expressions this.<T>f(x) is a generic method call and f<T>(x); is a comparison (well, a type error, actually). Again, parsing is easy.
C#
C# has the same restrictions on expression statements as Java does, so variable declarations aren't a problem, but unlike the previous two languages, it does allow f<T>(x) as the syntax for function calls. In order to avoid ambiguities, relational operators need to be parenthesized when used in a way that could also be a valid call of a generic function. So the expression f<T>(x) is a method call, and you'd need to add parentheses, as in f<(T>(x)) or (f<T)>(x), to make it a comparison (though actually those would be type errors because you can't compare booleans with < or >, but the parser doesn't care about that). Similarly, f(a<b, c, d>(e)) calls a generic method named a with the type arguments b, c, d, whereas f((a<b), c, (d>(e))) would involve two comparisons (and you can in fact leave out one of the two pairs of parentheses).
This leads to a nicer syntax for method calls with explicit type arguments than in the previous two languages, but parsing becomes kind of tricky. Considering that in the above example f(a<b, c, d>(e)) we can actually place an arbitrary number of arguments before d>(e) and a<b is a perfectly valid comparison if not followed by d>(e), we actually need an arbitrary amount of lookahead, backtracking or non-determinism to parse this.
So in summary x<T> a; is a variable declaration, there is no expression statement that starts with a comparison, f<T>(x) is a method call expression, and (f<T)>(x) or f<(T>(x)) would be (ill-typed) comparisons. Parsing this requires arbitrary lookahead or backtracking, which a plain LR(1) generator like Menhir does not give you directly.
C++
In C++ a < b; is a valid (albeit useless) expression statement, the syntax for template function calls with explicit template arguments is f<T>(x), and a<b>c can be a perfectly valid (even well-typed) comparison. So statements like a<b>c; and expressions like a<b>(c) are actually ambiguous without additional information. Further, template arguments in C++ don't have to be types. That is, Foo<42> x; or even Foo<c> x; where c is defined as const int c = 42;, for example, could be perfectly valid instantiations of the Foo template if Foo is defined to take an integer as a template argument. So that's a bummer.
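To make the two readings concrete (hypothetical declarations, chosen so each line is valid):

template <int N> struct t {};  // 't' names a template taking an int
const int n = 2;
t<n> c1;                       // declaration: c1 has type t<2>

int a = 1, b = 2, c = 0;
bool r = a<b>c;                // expression: parses as (a < b) > c, which is
                               // well-typed because bool converts to int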
To resolve this ambiguity, the C++ grammar refers to the rule template-name instead of identifier in places where the name of a template is expected. So if we treated these as distinct entities, there'd be no ambiguity here. But of course template-name is defined simply as template-name: identifier in the grammar, so that seems pretty useless, ... except that the standard also says that template-name should only be matched when the given identifier names a template in the current scope. Similarly it says that identifiers should only be interpreted as variable names when they don't refer to a template (or type name).
Note that, unlike the previous three languages, C++ requires all types and templates to be declared before they can be used. So when we see the statement a<b>c;, we know that it can only be a template instantiation if we've previously parsed a declaration for a template named a and it is currently in scope.
So, if we keep track of scopes while parsing, we can simply use if-statements to check whether the name a refers to a previously parsed template or not in a hand-written parser. In parser generators that allow semantic predicates, we can do the same thing. Doing this does not even require any lookahead or backtracking.
But what about parser generators like yacc or menhir that don't support semantic predicates? For these we can use something known as the lexer hack, meaning we make the lexer generate different tokens for type names, template names and ordinary identifiers. Then we have a nicely unambiguous grammar that we can feed our parser generator. Of course the trick is getting the lexer to actually do that. In order to accomplish that, we need to keep track of which templates and types are currently in scope using a symbol table and then access that symbol table from the lexer. We'll also need to tell the lexer when we're reading the name of a definition, like the x in int x;, because then we want to generate a regular identifier even if a template named x is currently in scope (the definition int x; would shadow the template until the variable goes out of scope).
This same approach is used to resolve the casting ambiguity (is (T)(x) a cast of x to type T or a function call of a function named T?) in C and C++.
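A minimal sketch of the lexer-hack plumbing (all names here are hypothetical, not from any particular parser):

#include <string>
#include <unordered_map>

enum class NameKind { Identifier, TypeName, TemplateName };

// Filled in by the parser as it processes declarations.
struct Scope {
    std::unordered_map<std::string, NameKind> names;
    NameKind classify(const std::string& s) const {
        auto it = names.find(s);
        return it == names.end() ? NameKind::Identifier : it->second;
    }
};

// Called by the lexer when it has scanned a name: instead of always
// emitting a generic identifier token, it consults the symbol table.
NameKind lex_name(const Scope& scope, const std::string& spelling) {
    return scope.classify(spelling);
}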
So in summary, foo<T> a; and foo<T>(x) are template instantiations if and only if foo is a template. Parsing's a bitch, but possible without arbitrary lookahead or backtracking and even using menhir when applying the lexer hack.
AFAIK C++'s template syntax is a well-known example of a real-world non-LR grammar. Strictly speaking, it is not LR(k) for any finite k... So C++ parsers are usually hand-written with hacks (like Clang's) or generated from a GLR grammar (LR with branching). So in theory it is impossible to implement a complete C++ parser in Menhir, which is LR.
However, even the same generics syntax can behave differently in different contexts. If generic types and expressions involving comparison operators never appear in the same context, the grammar may still be LR-compatible. For example, consider the Rust syntax for variable declarations (for this part only):
let x : Vec<T> = ...
The : token indicates that a type, rather than an expression, follows, so in this case the grammar can be LR, or even LL (not verified).
So the final answer is, it depends. But for the C++ case it should be impossible to implement the syntax in Menhir.

What's the difference between C++ "type deduction" and Haskell "type inference"?

In English semantics, does "type deduction" mean the same thing as "type inference"?
I'm not sure whether:
this is just an idiom preference chosen by different language designers, or
there's computer science that gives "type deduction" a strict definition distinct from "type inference".
Thanks.
The C++ specification and working drafts use "type deduction" extensively in reference to the type of expressions that don't have a type declaration as reference; for example, this working draft on concepts uses it when talking about auto-declared variables, and I remember lots of books using it when talking about templates way back when I had to learn – and then subsequently forget most of – C++. Type inference, however, has its own Wikipedia page and is also the name of a significant field of study in programming-language theory. If you say type inference, people will immediately think of modern typed functional programming languages. You can even use it as a ruler to compare languages; some might say that their language X or their library Y is easier to do type inference on and is therefore better or friendlier.
I would say that type inference is the more specific, more precise, and more widely-used term. Type deduction as a phrase probably only holds cachet in the C++ community. The terms are close cousins, but the contexts they've been used in have given them different shades of meaning.
As mentioned, type inference is studied for years in theory. It is also a common feature provided by several programming languages, not only Haskell.
On the other hand, type deduction is the general name of several processes defined in the C++ specification, including return type deduction, placeholder type deduction, and the origin of them all, template (type) argument deduction. It identifies the type implied by a term (within a C++ simple-type-specifier or template-argument) whose type is as yet unknown and to be deduced. It is similar to type inference, which is about typing -- determining the type of a term -- but more restricted.
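The three deduction processes named above, in minimal form (C++14 for the return type deduction):

template <typename T>
void f(T x) {}          // template argument deduction

auto g() { return 42; } // return type deduction: int

int main() {
    f(1.5);             // T deduced as double
    auto y = g();       // placeholder type deduction: y is int
    (void)y;
    return 0;
}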
The major differences are:
Type deduction is the process of getting the type of terms with some restricted form. Normally a C++ expression is typed without type deduction rules; in most cases the static type of an expression is determined by the specific semantic rules of the construct forming the expression. Type inference, by contrast, can serve as the one truly general method of static typing.
As the term "type" has more restricted meaning in C++ compared to that in type theory, programming language theory and many contemporary programming languages, the domain of the result is also more restricted. Namely, type deduction must deduce a well-formed C++ type, not a higher-order type (which is not expressible as a C++ type).
These restrictions would not likely change without a massive redesigning of the type system of C++, so they can be essential.

Is std::vector<T> a `user-defined type`?

In 17.6.4.2.1/1 and 17.6.4.2.1/2 of the current draft standard restrictions are placed on specializations injected by users into namespace std.
The behavior of a C++ program is undefined if it adds declarations or definitions to namespace std or to a namespace within namespace std unless otherwise specified. A program may add a template specialization for any standard library template to namespace std only if the declaration depends on a user-defined type and the specialization meets the standard library requirements for the original template and is not explicitly prohibited.
I cannot find where in the standard the phrase user-defined type is defined.
One option I have heard claimed is that a type for which std::is_fundamental is false is a user-defined type, in which case std::vector<int> would be a user-defined type.
An alternative answer would be that a user-defined type is a type that a user defines. As users do not define std::vector<int>, and std::vector<int> is not dependent on any type a user defines, std::vector<int> is not a user-defined type.
A practical problem this impacts is "can you inject a specialization for std::hash for std::tuple<Ts...> into namespace std?" Being able to do so is somewhat convenient -- the alternative is to create another namespace where we recursively build our hash for std::tuple (and possibly other types in std that do not have hash support), and if and only if we fail to find a hash in that namespace do we fall back on std.
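The specialization under discussion would look roughly like this (a sketch; whether it is legal is exactly the question, and the hash-combining logic is elided):

#include <cstddef>
#include <functional>
#include <tuple>

namespace std {
    template <typename... Ts>
    struct hash<tuple<Ts...>> {
        size_t operator()(const tuple<Ts...>&) const {
            return 0; // real code would combine the element hashes
        }
    };
}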
However, if this is legal, then if and when the standard adds a hash specialization for std::tuple to namespace std, code that specialized it already would be broken, creating a reason not to add such specializations in the future.
While I am talking about std::vector<int> as a concrete example, I am trying to ask if types defined in std are ever user-defined types. A secondary question is, even if not, maybe std::tuple<int> becomes a user-defined type when used by a user (this gets slippery: what then happens if something inside std defines std::tuple<int>, and you have partially specialized hash for std::tuple<Ts...>?).
There is currently an open defect on this problem.
Prof. Stroustrup is very clear that any type that is not built-in is user-defined. See the second paragraph of section 9.1 in Programming Principles and Practice Using C++.
He even specifically calls out “standard library types” as an example of user-defined types. In other words, a user-defined type is any compound type.
The article explicitly mentions that not everyone seems to agree, but this is IMHO mostly wishful thinking and not what the standard (and Prof. Stroustrup) are actually saying, only what some people want to read into it.
When Clause 17 says "user-defined" it means "a type not defined in the standard" so std::vector<int> is not user-defined, neither is std::string, so you cannot specialize std::vector<int> or std::vector<std::string>. On the other hand, struct MyClass is user-defined, because it's not a type defined in the standard, so you can specialize std::vector<MyClass>.
This is not the same meaning of "user-defined" used in clauses 1-16, and that difference is confusing and silly. There is a defect report for this, with some discussion recorded that basically says "yes, the library uses the wrong term, but we don't have a better one".
So the answer to your question is "it depends". If you're talking to a C++ compiler implementor or a core language expert, std::vector<int> is definitely a user-defined type, but if you're talking to a standard library implementor, it is not. More precisely, it's not user-defined for the purposes of 17.6.4.2.1.
One way to look at it is that the standard library is "user code" as far as the core language is concerned. But the standard library has a different idea of "users" and considers itself to be part of the implementation, and only things that aren't part of the library are "user-defined".
Edit: I have proposed changing the library Clauses to use a new term, "program-defined", which means something defined in your program (as opposed to UDTs defined in the standard, such as std::string).
As users do not define std::vector<int>, and std::vector<int> is not dependent on any type a user defines, std::vector<int> is not a user-defined type.
The logical counter-argument is that users do define std::vector<int>. You see, std::vector is a class template, and as such has no direct representation in binary code.
In a sense it gets its binary representation through the instantiation of a type, so the very action of declaring a std::vector<int> object is what gives "soul" to the template (pardon the phrasing). In a program where no one uses a std::vector<int>, this data type does not exist.
On the other hand, following the same argument, std::vector<T> is not a user-defined type; it is not even a type, it does not exist. Only if we want to instantiate a type will it mandate how a structure is laid out, but until then we can only argue about it in terms of structure, design, properties and so on.
Note
The above argument (about templates being not code but ... well, templates for code) may seem a bit superficial, but it draws its logic from Scott Meyers' introduction to A. Alexandrescu's book Modern C++ Design. The relevant quote goes like this:
Eventually, Andrei turned his attention to the development of template-based implementations of popular language idioms and design patterns, especially the GoF[*] patterns. This led to a brief skirmish with the Patterns community, because one of their fundamental tenets is that patterns cannot be represented in code. Once it became clear that Andrei was automating the generation of pattern implementations rather than trying to encode patterns themselves, that objection was removed, and I was pleased to see Andrei and one of the GoF (John Vlissides) collaborate on two columns in the C++ Report focusing on Andrei's work.
The draft standard contrasts fundamental types with user-defined types in a couple of (non-normative) places.
The draft standard also uses the term "user-defined" in other contexts, referring to entities created by the programmer or defined in the standard library. Examples include user-defined constructor, user-defined operator and user-defined conversion.
These facts allow us, absent other evidence, to tentatively assume that the intent of the standard is that user-defined type should mean compound type, according to historical usage. Only an explicit clarification in a future standard document can definitely resolve the issue.
Note that the historical usage is not clear on types like int* or struct foo* or void(*)(struct foo****). They are compound, but should they (or some of them) be considered user-defined?

What are the differences between concepts and template constraints?

I want to know what are the semantic differences between the C++ full concepts proposal and template constraints (for instance, constraints as appeared in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing than template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about types:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are:
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
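In the early concepts-lite design these predicates were built from constexpr functions; a sketch of how one such constraint and its use might look (syntax approximate, following the early proposals, not standard C++):

#include <type_traits>

template <typename T>
constexpr bool Floating_point() { return std::is_floating_point<T>::value; }

template <typename T>
    requires Floating_point<T>()
T midpoint(T a, T b) { return (a + b) / 2; }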
You can try all this out with a special concepts-lite build of GCC.
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
Another example is a Greater axiom. It's all well and good to say two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that x is greater than y if and only if y is less than x. The proposed specification of such an axiom looks like:
template<typename T>
axiom Greater(T x, T y) {
    (x>y) == (y<x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, a new object of the same type created by copy construction T{x} == y and still x == y (that is, it is non-destructive).
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
    requires constraint Less<T>;
    requires axiom Strict_total_order<less<T>, T>;
    requires axiom Greater<T>;
    requires axiom Less_equal<T>;
    requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, the Ordered requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructible, destructible, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructible, destructible, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
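For contrast, here is roughly what the C++11 equivalent alluded to above looks like (is_sortable is a hypothetical trait standing in for Sortable):

#include <type_traits>

// Hypothetical trait; a real one would inspect the container's interface.
template <typename T> struct is_sortable : std::true_type {};

template <typename Cont,
          typename = typename std::enable_if<is_sortable<Cont>::value>::type>
void sort(Cont& container);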
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of constrained type that it shouldn't do, the compiler has no way to diagnose that. A fully featured concepts proposal would be able to do this.
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter .
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementations. I.e., concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, in the concepts-heavy proposal are types that offer no less and no more than what is needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntax trickery supported by the compiler. No changes in the lookup rules.
Programmers are not required to write concept maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).

Difference between decltype and typeof?

Two questions regarding decltype and typeof:
Is there any difference between the decltype and typeof operators?
Does typeof become obsolete in C++11?
There is no typeof operator in C++. While it is true that such functionality has been offered by most compilers for quite some time, it has always been a compiler-specific language extension. Therefore comparing the behaviour of the two in general doesn't make sense, since the behaviour of typeof (if it even exists) is extremely platform dependent.
Since we now have a standard way of getting the type of a variable/expression, there is really no reason to rely on non portable extensions, so I would say it's pretty much obsolete.
Another thing to consider is that if the behaviour of typeof isn't compatible with decltype for a given compiler, it is possible that the typeof extension won't get much development to encompass new language features in the future (meaning it might simply not work with e.g. lambdas). I don't know whether or not that is currently the case, but it is a distinct possibility.
The difference between the two is that decltype always preserves references as part of the information, whereas typeof may not. So...
int a = 1;
int& ra = a;
typeof(a) b = 1; // int
typeof(ra) b2 = 1; // int
decltype(a) b3; // int
decltype(ra) b4 = a; // reference to int
The name typeof was the preferred choice (consistent with sizeof and alignof, and the name already used in extensions), but as you can see in the proposal N1478, concern around compatibility with existing implementations dropping references led them to giving it a distinct name.
"We use the operator name typeof when referring to the mechanism for querying a type of an expression in general. The decltype operator refers to the proposed variant of typeof. ... Some compiler vendors (EDG, Metrowerks, GCC) provide a typeof operator as an extension with reference-dropping semantics. As described in Section 4, this appears to be ideal for expressing the type of variables. On the other hand, the reference-dropping semantics fails to provide a mechanism for exactly expressing the return types of generic functions ... In this proposal, the semantics of the operator that provides information of the type of expressions reflects the declared type. Therefore, we propose the operator to be named decltype."
J. Jarvi, B. Stroustrup, D. Gregor, J. Siek: Decltype and auto. N1478/03-0061.
So it's incorrect to say that decltype completely obviated typeof (if you want reference dropping semantics, then the typeof extension in those compilers still has use), but rather, typeof was largely obviated by it plus auto, which does drop references and replaces uses where typeof was used for variable inference.
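To illustrate the division of labor (typeof here is the GNU extension, shown commented out):

int x = 0;
int& rx = x;
// typeof(rx) y1 = x;  // GNU extension: int (reference dropped)
auto y2 = x;           // standard C++11: int (auto also drops references)
decltype(rx) y3 = x;   // standard C++11: int& (declared type preserved)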
p.s. 2023-01-03 C23 (not C++) will evidently officially get typeof in N2927.
For legacy code, I have been using successfully the following:
#include <type_traits>
#define typeof(x) std::remove_reference<decltype((x))>::type
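A quick check of what the macro yields (using the definition above):

int v = 0;
int& rv = v;
typeof(rv) w = 1; // w is int: decltype((rv)) is int&, remove_reference strips it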
typeof has not been standardized, although it was implemented by several compiler vendors, e.g., GCC. It became obsolete with decltype.
A good explanation for the shortcomings of typeof can be found in this related answer.
Well, typeof is a non-standard GNU C extension that you can use in GNU C++ because GCC allows you to use features from one of its languages in another (not always, though), so the two really shouldn't be compared.
Of course other non-standard extensions may exist for other compilers, but GCC is definitely the most widely documented implementation.
To answer the question: it can't be obsolete if it was never a feature.
If you want to compare the merits of either method in C++, there is semantically no difference unless you're dealing with references. You should use decltype because it is portable and standards-conforming.