Introduction
The standard specifies that each concept is related to two predicates:
predicate "is statisfied by": a concept is satisfied by a sequence of template argument when it evaluates to true. This is almost a syntactic check.
predicate "is modeled by": A sequence Args of template arguments is said to model a concept C if Args satisfies C ([temp.constr.decl]) and meets all semantic requirements (if any) given in the specification of C. [res.on.requirements]
For some concepts, the requirements that make a satisfied concept modeled are clearly expressed. Example: [concept.assignable]
LHS and RHS model assignable_from<LHS, RHS> only if
addressof(lhs = rhs) == addressof(lcopy)
[...]
But I wonder if the syntactic requirements also implicitly imply semantic requirements.
Question
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
I see two kinds of implicit requirements:
The concept is satisfied only because the syntactically checked expressions are unevaluated; if those expressions were evaluated, the program would be ill-formed.
The concept is satisfied because the syntactically checked expressions are not evaluated, but evaluating those expressions would result in the program having undefined behavior.
Examples
For example, let's consider the default_initializable concept, defined here: [concept.default.init].
default_initializable is satisfied by A<int> but the program is ill-formed if a variable of type A<int> is default-initialized (demo):
#include <concepts>

template <class T>
struct A {
    A() {
        f(T{});
    }
};

static_assert(std::default_initializable<A<int>>); // A<int> satisfies default_initializable
A<int> a{}; // compile-time error: 'f' was not declared in this scope
default_initializable is satisfied by A, but default-initialization of A results in undefined behavior (when the default-initialization is not preceded by a zero-initialization) (demo):
#include <concepts>

struct A {
    int c;
    A() {
        c++;
    }
};

static_assert(std::default_initializable<A>); // A satisfies default_initializable
auto p = new A; // undefined behavior: indeterminate value as operand of ++
a concept is satisfied by a sequence of template arguments when it evaluates to true. This is almost a syntactic check.
No, it is not "almost" anything: it is a syntactic check. The constraints specified by a requires clause (for example) verify that a specific syntax is legal syntax for that type. This is all that "satisfying a concept" means.
Do the syntactic predicates implicitly imply requirements for the concept to be modeled?
... no. If satisfying a concept also implied modeling the concept, then the standard wouldn't need different terms for these.
The point of having such a distinction is the recognition that the concepts language feature can't specify every requirement that a concept, as an abstraction, should encapsulate. So satisfying-a-concept is just the language part, while modelling-a-concept also includes the things that the language can't express.
But that question is kind of separate from what your two examples show. Your examples represent the difference between "valid syntax" and "can be compiled/executed". Satisfying a concept only cares about the former. And modelling a concept only cares about the latter to the extent that said semantic behavior is explicitly specified.
There is nothing in the standard about implicit semantic requirements. There is no statement to the effect of "all expressions/statements in a concept must be able to be compiled and/or executed in order for it to be modeled". Nor is any such statement intended.
However much we try to pretend it's more than this, concepts as it exists in C++20 is nothing more than a more convenient mechanism for performing SFINAE. SFINAE can't test compilable/executable validity of the contents of some expression, so neither can concepts. And neither does concepts attempt to pretend that it can.
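A minimal sketch of that point (the names HasFoo and X are invented for illustration): satisfaction asks only whether the expression would be well-formed in an unevaluated context, exactly like a SFINAE probe, so a declaration with no definition anywhere suffices:

template <class T>
concept HasFoo = requires(T t) { t.foo(); };

struct X { void foo(); }; // declared, never defined

static_assert(HasFoo<X>); // satisfied: foo() is never evaluated, called, or linked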
Related
The response to LWG 484 states that "convertible" is "well-defined in the core". Which clauses does this refer to?
I have this question because the requirements of is_convertible are currently carefully defined in [meta.rel]/5, which is not in the core language clauses. Although the note there explains the technical benefits, the coexistence with other styles of wording seems confusing in the overall library clauses. So there are more questions about the general use.
Is the meaning of the requirements implied by is_convertible exactly the same as the "convertible to" (without the code font) wording in the library clauses?
If true, why both are used?
If not, is there any guideline or rationale to differentiate these subtle use cases?
"Convertible to" refers to something that can be the result of a standard conversion sequence defined in [conv], and since is_convertible is defined in terms of which conversions the core language allows during the copy-initialization of a return value, I don't see any difference.
Edit: Implicit conversion sequence might be another useful term not mentioned in the question. It is defined in [over.best.ics]
The "'convertible' is well-defined in core" statement was added at some point between 2004 and 2009, though that page doesn't say exactly when. So I looked at the C++03 and C++11 standards, and it appears that neither actually has a definition of the term "convertible". However, I think we can infer that "convertible" is generally intended to mean "implicitly convertible", and the meaning of "implicitly convertible" is given by [conv] ([conv.general] in newer editions of the standard).
For example C++11 [conv]/3 says:
An expression e can be implicitly converted to a type T if and only if the declaration T t=e; is well-formed, for some invented temporary variable t (8.5). ...
However, it should be noted that the definition of "implicitly converted" uses an expression, not a type, as the source, whereas "convertible to" in the library sometimes has a type as the source. In that case, "T is convertible to U" should be interpreted as "all expressions of type T can be implicitly converted to type U".
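That reading can be spelled out with the standard trait (a small sketch; the particular type pairs are just illustrative):

#include <type_traits>

// "From is implicitly convertible to To" per [conv]/3: To t = e; is well-formed.
static_assert(std::is_convertible_v<int, double>);  // standard conversion, so convertible
static_assert(!std::is_convertible_v<void*, int*>); // needs a static_cast, so not convertible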
There are other plausible meanings of "convertible", for example:
The first sentence of [expr.static.cast] says "The result of the expression static_cast<T>(v) is the result of converting the expression v to type T". This is more powerful than an implicit conversion. It can, for example, convert void* to int*, or from an enum type to an integral type.
The sections of the C++ standard that specify the behaviour of (T) cast-expression and T ( expression-list(opt) ) are titled "Explicit type conversion (cast notation)" and "Explicit type conversion (functional notation)". These conversions are much more powerful than implicit conversions; they can call explicit constructors and explicit conversion operators, and even perform static_casts, reinterpret_casts, and const_casts (and even bypass access control in some cases).
But if we look at how the term "convertible" is actually used in the standard, it seems that it must be intended to mean "implicitly convertible". For example, C++11 [allocator.uses.trait]/1 says:
Remark: automatically detects whether T has a nested allocator_type that is convertible from Alloc. Meets the BinaryTypeTrait requirements (20.9.1). The implementation shall provide a definition
that is derived from true_type if a type T::allocator_type exists and is_convertible<Alloc, T::allocator_type>::value != false, otherwise it shall be derived from false_type. [...]
Evidently the intent here was that "convertible" means the same thing as what is_convertible tests for, that is, implicit convertibility (with a caveat that will be discussed below).
Another example is that in C++11 [container.requirements.general], Table 96, it is stated that for a container X, the type X::iterator shall be "convertible to X::const_iterator". The general understanding of this is that it means implicitly convertible, i.e., this is well-formed:
#include <vector>

void foo(std::vector<int>& v) {
    std::vector<int>::const_iterator it = v.begin();
}
Imagine if you had to use a cast, or even just direct-initialization notation, to perform this conversion. Could LWG have meant to use the word "convertible" with a meaning that would make such an implementation conforming? It seems very doubtful.
Wait, does "convertible to" have the same meaning as what the standard type trait std::is_convertible tests for? Assuming that "convertible to" means "implicitly convertible to" with the caveat above, strictly speaking, no, because there is the edge case where std::is_convertible<void, void>::value is true but void is not implicitly convertible to itself (since the definition of a void variable is ill-formed). I think this is the only edge case (also including the other 15 variations involving cv-qualified void), although I'm not 100% sure about that.
However, this doesn't explain why std::is_convertible is sometimes used in the library specification. Actually, it is only used once in C++11 (other than in its own definition), namely in [allocator.uses.trait]/1, and there, it doesn't seem that there is any intent for it to mean anything other than "implicitly convertible", since it is irrelevant what happens when Alloc is void (as it is mentioned elsewhere that non-class types are not permitted as allocator types). The fact that std::is_convertible is even mentioned here seems like it's just a mild case of an implementation detail leaking into the specification. This is probably also the case for most other mentions that appear in newer editions of the standard.
C++20 introduced a concept called std::convertible_to, where std::convertible_to<From, To> is only modelled if
From is implicitly convertible to To, and
static_cast<To>(e) is well-formed, where e has type std::add_rvalue_reference_t<From> (i.e., From is also explicitly convertible to To via static_cast), and
some other requirements hold, which I won't get into here.
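For reference, the definition in [concept.convertible] amounts to roughly the following (a paraphrase, not the verbatim standard wording; the name convertible_to_sketch is mine):

#include <type_traits>
#include <utility>

template <class From, class To>
concept convertible_to_sketch =
    std::is_convertible_v<From, To> &&                    // implicitly convertible
    requires { static_cast<To>(std::declval<From>()); };  // explicitly convertible via static_cast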
It's reasonable to ask whether some uses of "convertible to" actually mean std::convertible_to, given the similar names, and other considerations. In particular, the one mentioned by LWG 484 itself. C++20 [input.iterators] Table 85 states that if X meets the Cpp17InputIterator requirements for the value type T, then *a must be "convertible to T", where a has type X or const X.
This is almost certainly another case where "convertible to", as used in the library specification, implies "implicitly convertible to", but I think that it also should imply "explicitly convertible to", where the implicit and explicit conversions give the same results. We can construct pathological iterator types for which they don't. See this Godbolt link for an example.
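One way to build such a pathological type is to make copy-initialization and direct-initialization pick different conversion paths (this is my own sketch of the idea, not necessarily the code behind the linked example):

struct Value;

struct Proxy {
    operator Value() const;                // chosen by implicit (copy-)initialization
};

struct Value {
    int v = 0;
    Value() = default;
    explicit Value(const Proxy&) : v(2) {} // chosen by static_cast (direct-initialization)
};

inline Proxy::operator Value() const { Value r; r.v = 1; return r; }

void demo(const Proxy& p) {
    Value a = p;                     // implicit conversion: a.v == 1
    Value b = static_cast<Value>(p); // explicit conversion: b.v == 2, a different result
}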
I think both users and standard library implementers should feel free to assume that if a function accepts input iterators, it can assume that explicit conversions via static_cast or direct-initialization notation from *a to T can be used with the same result as implicit conversions. (Such explicit conversions may, for example, be used when calling function templates, if we want the deduced type of the function template specialization's argument to be T& or const T& and not some other U& or const U& where U is the actual type of *a.)

In other words, I think "convertible to" in the input iterator requirements really means something like std::convertible_to, not std::is_convertible_v. I would be surprised if standard library implementations truly only relied on implicit convertibility in this case, and not both implicit and explicit convertibility (with the two being equivalent). (They can implement the library in an extremely careful way that only uses implicit conversions in cases where one would normally use an explicit conversion, but I'm not sure if they actually do. And even if they are indeed that clever, it is not reasonable to expect the same from anyone who is not a standard library implementer.)
But that's an issue for LWG to resolve. Notice that LWG 484 was reopened in 2009 and remains open to this day. It's probably a lot of work to go through all the uses of "convertible to" and "convertible from" and determine which ones should mean "implicitly convertible to" and "implicitly convertible from" and which ones should mean std::convertible_to (as well as if any uses of "implicitly convertible to" need to be changed to std::convertible_to).
According to cppreference, the trait std::is_literal_type is deprecated in C++17. The question is why it was deprecated, and what the preferred replacement is for checking whether a type is a literal type.
As stated in P0174:
The is_literal type trait offers negligible value to generic code, as what is really needed is the ability to know that a specific construction would produce constant initialization. The core term of a literal type having at least one constexpr constructor is too weak to be used meaningfully.
Basically, what it's saying is that there's no code you can guard with is_literal_type_v and have that be sufficient to ensure that your code actually is constexpr. This isn't good enough:
#include <type_traits>

template<typename T>
std::enable_if_t<std::is_literal_type_v<T>, void> SomeFunc()
{
    constexpr T t{};
}
There's no guarantee that this is legal. Even if you also guard it with is_default_constructible<T>, that doesn't mean that it's constexpr default-constructible.
What you would need is an is_constexpr_constructible trait, which does not as of yet exist.
However, the (already implemented) trait does no harm, and allows compile-time introspection for which core-language type-categories a given template parameter might satisfy. Until the Core Working Group retire the notion of a literal type, the corresponding library trait should be preserved.
The next step towards removal (after deprecation) would be to write a paper proposing to remove the term from the core language while deprecating/removing the type trait.
So the plan is to eventually get rid of the whole definition of "literal types", replacing it with something more fine-grained.
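As an aside, there is a well-known workaround that approximates the missing trait for default construction, by SFINAE-ing on whether T{} can appear in a constant expression. A sketch (the names are mine, and its strict conformance has been debated, though major compilers accept the technique):

template <class T, int = (T{}, 0)> // substitution fails unless T{} is a constant expression
constexpr bool is_constexpr_default_constructible_impl(int) { return true; }

template <class T>
constexpr bool is_constexpr_default_constructible_impl(...) { return false; }

template <class T>
constexpr bool is_constexpr_default_constructible_v =
    is_constexpr_default_constructible_impl<T>(0);

struct Lit    { constexpr Lit() {} };
struct NonLit { NonLit(); };

static_assert(is_constexpr_default_constructible_v<Lit>);
static_assert(!is_constexpr_default_constructible_v<NonLit>);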
I seem to recall hearing vague comments from a few reliable sources (i.e. committee members speaking in non-official channels) that C type-generic expressions will not be added to C++ because they cannot be.
As far as I can tell, type-generic expressions are very limited compared to C++ templates and overloading, but there is no potential for interaction that would need to be defined as a special case.
A type-generic expression consists of a controlling expression, and a series of "associations" between types and subexpressions. A subexpression is chosen based on the static type of the controlling expression and the types listed for the subexpressions, and it is substituted in place of the TGE. Matching is based on the C concept of type compatibility, which as far as I can tell is equivalent to C++ identity of types with extern linkage under the one-definition rule (ODR).
It would be nice if a derived class controlling expression would select a base class association in C++, but since C doesn't have inheritance, such nicety isn't needed for cross-compatibility. Is this considered to be a stumbling block anyway?
Edit: As for more specific details, C11 already provides for preserving the value category (lvalue-ness) of the chosen subexpression, and seems to require that the TGE is a constant expression (of whatever category) as long as all its operands are, including the controlling expression. This is probably a C language defect. In any case, C++14 defines constant expressions in terms of what may be potentially evaluated, and the TGE spec already says that non-chosen subexpressions are unevaluated.
The point is that the TGE principle of operation seems simple enough to be transplanted without ever causing trouble later.
As for why C++ TGE's would be useful, aside from maximizing the intersection of C and C++, they could be used to essentially implement static_if, sans the highly controversial conditional declaration feature. I'm not a proponent of static_if, but "there is that."
template< typename t >
void f( t q ) {
    // hypothetical C++ syntax: _Generic does not exist in C++
    auto is_big = _Generic( std::integral_constant< bool, ( sizeof q >= 4 ) >(),
        std::true_type: std::string( "whatta whopper" ),
        std::false_type: "no big deal"
    );
    auto message = _Generic( t, double: "double", int: 42, default: t );
    std::cout << message << " " << is_big << '\n';
}
"Cannot be" is perhaps too strong. "Uncertain whether it's possible" is more likely.
If this feature were to be added to C++, it should be fully specified, including all interactions with existing C++ features, many of which were not designed with _Generic in mind. E.g. typeid doesn't exist in C, but should work in C++. Can you use a _Generic expression as a constexpr? As a template parameter? Is it an lvalue? Can you take its address and assign that to a function pointer, and get overload resolution? Can you do template argument deduction? Can you use it with auto? Etc.
Another complication is that the primary use case for _Generic is macros, which don't play nice with C++ namespaces. The acos example from tgmath.h is a clear example. C++ already banned C's standard macros and required them to be functions, but that won't work with tgmath.h.
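For contrast, the way C++ already covers the tgmath.h use case is plain overloading (a minimal sketch):

#include <cmath>
#include <iostream>

int main() {
    std::cout << std::acos(0.5f) << '\n'  // float overload
              << std::acos(0.5)  << '\n'  // double overload
              << std::acos(0.5L) << '\n'; // long double overload
}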
I want to know what are the semantic differences between the C++ full concepts proposal and template constraints (for instance, constraints as appeared in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing than template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about a type:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
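For example, a constraint defined in terms of an existing type trait might have looked like this in the concepts-lite syntax of the time (a sketch using the "function concept" form, matching the Sortable<Cont>() style above):

#include <type_traits>

template <typename T>
concept bool Arithmetic() { return std::is_arithmetic<T>::value; }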
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are:
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
You can try all this out with a special concepts-lite build of GCC.
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
Another example is a Greater axiom. It's all well and good to say two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that if x is greater than y, then y is less than x. The proposed specification of such an axiom looks like:
template<typename T>
axiom Greater(T x, T y) {
    (x>y) == (y<x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, then a new object created by copy construction, T{x}, also compares equal to y, and x == y still holds (that is, copying is non-destructive).
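Written in the same hypothetical syntax as the Greater axiom above, Copy_equality might look like this (the '=>' implication notation follows the style used in the axiom papers; this is a sketch, not official wording):

template<typename T>
axiom Copy_equality(T x, T y) {
    x == y => T{x} == y; // copies of equal objects compare equal
}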
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
    requires constraint Less<T>;
    requires axiom Strict_total_order<less<T>, T>;
    requires axiom Greater<T>;
    requires axiom Less_equal<T>;
    requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, the Ordered requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructible, destructible, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructible, destructible, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category, which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of constrained type that it shouldn't do, the compiler has no way to diagnose that. A fully featured concepts proposal would be able to do this.
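For instance (the Sortable constraint is the one from the sort example above, and frobnicate is a made-up member): the following template is accepted as written under concepts-lite, even though nothing guarantees that a Sortable type has such a member; the error surfaces only at instantiation:

template <Sortable Cont>
void sort_and_frobnicate(Cont& c) {
    c.frobnicate(); // not required by Sortable, yet not diagnosed until instantiated
}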
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter .
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementations. I.e., concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, in the concepts-heavy proposal are types that offer no less and no more than needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntax trickery supported by the compiler. No changes in the lookup rules.
Programmers are not required to write concepts maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).
(Preamble: I am a late follower to the C++0x game and the recent controversy regarding the removal of concepts from the C++0x standard has motivated me to learn more about them. While I understand that all of my questions are completely hypothetical -- insofar as concepts won't be valid C++ code for some time to come, if at all -- I am still interested in learning more about concepts, especially given how it would help me understand more fully the merits behind the recent decision and the controversy that has followed)
After having read some introductory material on concepts as C++0x (until recently) proposed them, I am having trouble wrapping my mind around some syntactical issues. Without further ado, here are my questions:
1) Would a type that supports a particular derived concept (either implicitly, via the auto keyword, or explicitly via concept_maps) also need to support the base concept independently? In other words, does the act of deriving a concept from another (e.g. concept B<typename T> : A<T>) implicitly include an 'invisible' requires statement (within B, requires A<T>;)? The confusion arises from the Wikipedia page on concepts, which states:
Like in class inheritance, types that meet the requirements of the derived concept also meet the requirements of the base concept.
That seems to say that a type only needs to satisfy the derived concept's requirements and not necessarily the base concept's requirements, which makes no sense to me. I understand that Wikipedia is far from a definitive source; is the above description just a poor choice of words?
2) Can a concept which lists typenames be 'auto'? If so, how would the compiler map these typenames automatically? If not, are there any other occasions where it would be invalid to use 'auto' on a concept?
To clarify, consider the following hypothetical code:
template<typename Type>
class Dummy {};

class Dummy2 { public: typedef int Type; };

auto concept SomeType<typename T>
{
    typename Type;
}

template<typename T> requires SomeType<T>
void function(T t)
{}

int main()
{
    function(Dummy<int>()); // would this match SomeType?
    function(Dummy2());     // how about this?
    return 0;
}
Would either of those classes match SomeType? Or is a concept_map necessary for concepts involving typenames?
3) Finally, I'm having a hard time understanding what axioms would be allowed to define. For example, could I have a concept define an axiom which is logically inconsistent, such as
concept SomeConcept<typename T>
{
    T operator*(T&, int);

    axiom Inconsistency(T a)
    {
        a * 1 == a * 2;
    }
}
What would that do? Is that even valid?
I appreciate that this is a very long set of questions and so I thank you in advance.
I've used the most recent C++0x draft, N2914 (which still has concepts wording in it) as a reference for the following answer.
1) Concepts are like interfaces in that, if your type supports a concept, it should also support all "base" concepts. The Wikipedia statement you quote makes sense from the point of view of a type's client - if they know that T satisfies concept Derived<T>, then they also know that it satisfies concept Base<T>. From the type author's perspective, this naturally means that both have to be implemented. See 14.10.3/2.
2) Yes, a concept with typename members can be auto. Such members can be automatically deduced if they are used in the definitions of function members in the same concept. For example, value_type for an iterator can be deduced as the return type of its operator*. However, if a type member is not used anywhere, it will not be deduced, and thus will not be implicitly defined. In your example, there's no way to deduce SomeType<T>::Type for either Dummy or Dummy2, as Type isn't used by other members of the concept, so neither class will map to the concept (and, in fact, no class could possibly auto-map to it). See 14.10.1.2/11 and 14.10.2.2/4.
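To illustrate the deduction point with a sketch in the N2914-era syntax (Dereferenceable is a made-up concept name): here Type can be deduced, because it appears as the return type of a required function:

auto concept Dereferenceable<typename T>
{
    typename Type;
    Type operator*(T&); // Type deduced from the type's operator*
}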
3) Axioms were a weak point of the spec, and they were being constantly updated to make some (more) sense. Just before concepts were pulled from the draft, there was a paper that changed quite a bit - read it and see if it makes more sense to you, or you still have questions regarding it.
For your specific example (accounting for syntactic differences), it would mean that the compiler would be permitted to consider the expression (a*1) to be the same as (a*2), for the purpose of the "as-if" rule of the language (i.e. the compiler is permitted to do any optimizations it wants, so long as the result behaves as if there were none). However, the compiler is not in any way required to validate the correctness of axioms (hence why they're called axioms!) - it just takes them for what they are.