C++20 introduces a comparison concept boolean-testable. The italics and the hyphens in its name indicate that it is exposition-only, and since there is no std::boolean_testable in <concepts>, we cannot use it in our own code.
What is the purpose of this exposition-only concept? And why is this concept so mysterious?
boolean-testable came out of LWG's repeated attempts to specify exactly when a type is sufficiently "boolean-ish" to be suitable for use as the result of predicates and comparisons.
At first, the formulations were simply "convertible to bool" and, in C++11, "contextually convertible to bool", but LWG2114 pointed out that this is insufficient: if the only requirement on something is that it is convertible to bool, then the only thing you can do with it is convert it to bool. You can't write !pred(x), or i != last && pred(*i), because ! and && might be overloaded to do whatever. That would require littering code with explicit bool casts everywhere.
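To see the problem concretely, consider a hypothetical "boolean-ish" result type along these lines (the type and its members are made up for illustration): it converts to bool just fine, yet generic code applying ! or && to it does not get the built-in semantics.
struct Chatty {
    bool value;
    operator bool() const { return value; }                   // "convertible to bool" holds
    friend Chatty operator!(Chatty a) { return {!a.value}; }  // hijacks !
    friend Chatty operator&&(Chatty a, Chatty b) {            // hijacks && - and an overloaded
        return {a.value && b.value};                          // && never short-circuits
    }
};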
What the library really meant was "it converts to bool when we want it to", but that turns out to be really hard to express: we want b1 && b2 to engage the short-circuiting magic of built-in operator &&, even when b1 and b2 are different "boolean-ish" types. But when looking at the type of b1 in isolation, we have no idea what other "boolean-ish" types may be out there. It's basically impossible to analyze the type in isolation.
Then the Ranges TS came along, and with it an attempt to specify a Boolean concept. It's an enormously complicated concept - with more than a dozen expression requirements - that still fails to solve the mixed-type comparison problem, and adds its own issues. For instance, it requires bool(b1 == b2) to be equal to bool(b1) == bool(b2) and bool(b1 == bool(b2)), which means that int doesn't model Boolean unless it is restricted to the domain {0, 1}.
With C++20 about to ship, those problems led P1934R0 to propose throwing in the towel: just require the type to model convertible_to<bool>, and require users to cast it to bool when needed. That came with its own problems, as P1964R0 pointed out, especially now that we are shipping concepts for public consumption: do we really want to force users to litter code that is constrained with standard library concepts with casts to bool? Especially when only a minuscule fraction of users use pathological types that overload && and || anyway, and no standard library implementation supports such types?
The final result is boolean-testable, which is designed to ensure that you can use ! (just once - !!x is not required to work), && and || on the result of the predicate/comparison and get the normal semantics (including short-circuiting for && and ||). To solve the mixed-type problem, its specification contains a complex blob of standardese talking about name lookup and template argument deduction and implicit conversion sequences, but it really boils down to "don't be dumb". P1964R2 includes a detailed wording rationale.
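The syntactic part of the concept is small. A rough, compilable sketch of [concept.booleantestable], with ordinary identifiers substituted for the exposition-only dashed names and the semantic "don't be dumb" requirements omitted (they cannot be expressed in code), looks like this:
#include <concepts>
#include <utility>

template <class T>
concept boolean_testable_impl = std::convertible_to<T, bool>;

template <class T>
concept boolean_testable =
    boolean_testable_impl<T> &&
    requires(T&& t) {
        { !std::forward<T>(t) } -> boolean_testable_impl;  // ! must also yield something boolean-ish
    };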
Why is it exposition-only? boolean-testable came really late in the C++20 cycle: LEWG approved P1964's direction on Friday afternoon in Belfast (the November 2019 meeting, one meeting before C++20 shipped), and it is much lower risk to have an exposition-only concept than a named one, especially as there wasn't a lot of motivation for making it public either. Certainly nobody in the LEWG room asked for it to be named.
Its purpose, like all exposition-only concepts, is to simplify the specification in the standard. It's simply a building block for specifying other (potentially user-facing) concepts without having to repeat what it models. Of note, it appears in the specification of another exposition-only concept:
template<class T, class U>
concept weakly-equality-comparable-with =   // exposition only
    requires(const remove_reference_t<T>& t,
             const remove_reference_t<U>& u) {
        { t == u } -> boolean-testable;
        { t != u } -> boolean-testable;
        { u == t } -> boolean-testable;
        { u != t } -> boolean-testable;
    };
weakly-equality-comparable-with is satisfied by types whose comparison operators don't necessarily return bool verbatim. We can still use these expressions to compare objects, so the standard needs to be able to reason about them. And this isn't hypothetical; such types appear in the wild. An example from the Palo Alto report:
... One such example is an early version of the QChar class (1.5 and earlier, at least) (Nokia Corporation, 2011).
class QChar
{
    friend int operator==(QChar c1, QChar c2);
    friend int operator!=(QChar c1, QChar c2);
};
We should be able to use this class in our standard algorithms, despite the fact that the operator does not return a bool.
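For instance, a QChar-like stand-in (made up here to mirror the report's example) already works with the constrained C++20 algorithms, precisely because its int comparison results satisfy boolean-testable:
#include <algorithm>
#include <vector>

struct LegacyChar {
    char c;
    friend int operator==(LegacyChar a, LegacyChar b) { return a.c == b.c; }  // int, not bool
    friend int operator!=(LegacyChar a, LegacyChar b) { return a.c != b.c; }
};

int main() {
    std::vector<LegacyChar> v{{'a'}, {'b'}, {'c'}};
    auto it = std::ranges::find(v, LegacyChar{'b'});  // accepted by the constrained algorithm
    return it != v.end() ? 0 : 1;
}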
As for your other question
And why is this concept so mysterious?
It isn't. But if one examines it on cppreference alone, one may miss out on context since it may not be easy to cross-reference it there.
Related
The concept equality_comparable_with<T, U> is intended to declare that objects of type T and U can be compared equal to each other, and if they are, then this has the expected meaning. That's fine.
However, this concept also requires common_reference_t<T&, U&> to exist. The primary impetus for common_reference and its attendant functionality seems to be to enable proxy iterators, to have a place to represent the relationship between reference and value_type for such iterators.
That's great, but... what does that have to do with testing if a T and a U can be compared equal to one another? Why does the standard require that T and U have a common reference relationship just to allow you to compare them equal?
This creates oddball situations where it's very difficult to have two types which don't reasonably have a common-reference relationship that are logically comparable. For example, vector<int> and pmr::vector<int> logically ought to be comparable. But they can't be because there's no reasonable common-reference between the two otherwise unrelated types.
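For concreteness, a minimal sketch of the failure described above (checking the library concept directly):
#include <concepts>
#include <memory_resource>
#include <vector>

// Not satisfied: no common reference type exists between the two otherwise
// unrelated specializations, even though element-wise comparison would make sense.
static_assert(!std::equality_comparable_with<std::vector<int>,
                                             std::pmr::vector<int>>);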
This goes all the way back to the Palo Alto report, §3.3 and D.2.
For the cross-type concepts to be mathematically sound, you need to define what the cross-type comparison means. For equality_comparable_with, t == u generally means that t and u are equal, but what does it even mean for two values of different types to be equal? The design says that cross-type equality is defined by mapping them to the common (reference) type (this conversion is required to preserve the value).
Where the strong axioms of equality_comparable_with are not desirable, the standard uses the exposition-only concept weakly-equality-comparable-with, and that concept doesn't require a common reference. It is, however, a "semantic abomination" (in the words of Casey Carter) and is exposition-only for that reason: it allows having t == u and t2 == u but t != t2 (this is actually necessary for sentinels).
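A minimal sketch of the sentinel situation (the type here is made up): an iterator into any string compares equal to the sentinel exactly when it reaches the terminator, so two iterators into different strings can each equal the sentinel without being meaningfully comparable to each other, and the common-reference requirement is deliberately absent.
#include <concepts>

struct null_sentinel {
    // Equal to any char pointer positioned at the terminating '\0'.
    friend bool operator==(const char* p, null_sentinel) { return *p == '\0'; }
};

// Only the weak, exposition-only relationship holds; the full concept is not
// satisfied (among other things, there is no common reference type).
static_assert(!std::equality_comparable_with<const char*, null_sentinel>);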
Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
I know there are many subtleties involved here: the operators == and != may be overloaded. They may be overloaded to have different return types (which only have to be implicitly convertible to bool). Even the ! operator might be overloaded for such a return type. That's why I handwavingly referred to the "truth value" above. Trying to pin it down further, exploiting the implicit conversion to bool and trying to eliminate possible ambiguities:
bool ne = (x!=y);
bool e = (x==y);
bool result = (ne == (!e));
Is result guaranteed to be true here?
The C++ standard specifies the equality operators in section 5.10, but mainly seems to define them syntactically (and some semantics regarding pointer comparisons). The concept of being EqualityComparable exists, but there is no dedicated statement about the relationship of its operator == to the != operator.
There exist related documents from C++ working groups, saying that...
It is vital that equal/unequal [...] behave as boolean negations of each other. After all, the world would make no sense if both operator==() and operator!=() returned false! As such, it is common to implement these operators in terms of each other
However, this only reflects the Common Sense™, and does not specify that they have to be implemented like this.
Some background: I'm just trying to write a function that checks whether two values (of unknown type) are equal, and prints an error message if this is not the case. I'd like to say that the required concept here is that the types are EqualityComparable. But for this, one would still have to write if (!(x==y)) {…} and could not write if (x!=y) {…}, because the latter would use a different operator, which is not covered by the concept of EqualityComparable at all, and which might even be overloaded differently...
I know that the programmer basically can do whatever he wants in his custom overloads. I just wondered whether he is really allowed to do everything, or whether there are rules imposed by the standard. Maybe one of these subtle statements that suggest that deviating from the usual implementation causes undefined behavior, like the one that NathanOliver mentioned in a comment, but which seemed to only refer to certain types. For example, the standard explicitly states that for container types, a!=b is equivalent to !(a==b) (section 23.2.1, table 95, "Container requirements").
But for general, user-defined types, it currently seems that there are no such requirements. The question is tagged language-lawyer, because I hoped for a definite statement/reference, but I know that this may nearly be impossible: While one could point out the section where it said that the operators have to be negations of each other, one can hardly prove that none of the ~1500 pages of the standard says something like this...
In doubt, and unless there are further hints, I'll upvote/accept the corresponding answers later, and for now assume that testing inequality of EqualityComparable types should be done with if (!(x==y)) to be on the safe side.
Does the C++ standard guarantee that (x!=y) always has the same truth value as !(x==y)?
No it doesn't. Absolutely nothing stops me from writing:
struct Broken {
    bool operator==(const Broken&) const { return true; }
    bool operator!=(const Broken&) const { return true; }
};
Broken x, y;
That is perfectly well-formed code. Semantically, it's broken (as the name might suggest), but there's certainly nothing wrong with it from a pure C++ code functionality perspective.
The standard also clearly indicates this is okay in [over.oper]/7:
The identities among certain predefined operators applied to basic types (for example, ++a ≡ a+=1) need not hold for operator functions. Some predefined operators, such as +=, require an operand to be an lvalue when applied to basic types; this is not required by operator functions.
In the same vein, nothing in the C++ standard guarantees that operator< actually implements a valid ordering (or that x<y <==> !(x>=y), etc.). Some standard library implementations will actually add instrumentation to attempt to debug this for you in the ordered containers, but that is just a quality-of-implementation issue, not a conformance requirement.
Library solutions like Boost.Operators exist to at least make this a little easier on the programmer's side:
struct Fixed : equality_comparable<Fixed> {
    bool operator==(const Fixed&) const;
    // a consistent operator!= is provided for you
};
In C++14, Fixed is no longer an aggregate with the base class. However, in C++17 it's an aggregate again (by way of P0017).
With the adoption of P1185 for C++20, the library solution has effectively become a language solution; you just have to write this:
struct Fixed {
    bool operator==(Fixed const&) const;
};

bool ne(Fixed const& x, Fixed const& y) {
    return x != y;
}
The body of ne() becomes a valid expression that evaluates as !x.operator==(y) -- so you don't have to worry about keeping the two comparisons in line, nor rely on a library solution to help out.
In general, I don't think you can rely on it, because it doesn't always make sense for operator == and operator!= to always correspond, so I don't see how the standard could ever require it.
For example, consider the built-in floating point types, like doubles, for which NaNs always compare false, so operator== and operator!= can both return false at the same time. (Edit: Oops, this is wrong; see hvd's comment.)
As a result, if I'm writing a new class with floating point semantics (maybe a really_long_double), I have to implement the same behaviour to be consistent with the primitive types, so my operator== would have to behave the same and compare two NaNs as false, even though operator!= also compares them as false.
This might crop up in other circumstances, too. For example, if I was writing a class to represent a database nullable value I might run into the same issue, because all comparisons to database NULL are false. I might choose to implement that logic in my C++ code to have the same semantics as the database.
In practice, though, for your use case, it might not be worth worrying about these edge cases. Just document that your function compares the objects using operator== (or operator !=) and leave it at that.
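For the use case in the question, a minimal sketch (the helper's name is made up) that commits only to operator==, as suggested:
#include <iostream>

// Only operator== is required of T; inequality is expressed as !(x == y),
// so nothing is assumed about operator!=.
template <typename T>
void expect_equal(const T& x, const T& y, const char* label) {
    if (!(x == y)) {
        std::cerr << "mismatch: " << label << '\n';
    }
}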
No. You can write operator overloads for == and != that do whatever you wish. It probably would be a bad idea to do so, but the definition of C++ does not constrain those operators to be each other's logical opposites.
In C++, only one user-defined conversion is allowed in implicit conversion sequence. Are there any practical reasons (from language user point of view) for that limit?
Allowing only one user defined conversion limits the search scope when attempting to match the source and destination types. Only those two types need to be checked to determine whether they are convertible (non-user defined conversions aside).
Not having that limit might cause ambiguity in some cases, and it might even require testing infinite conversion paths in others. Since the standard cannot require the compiler to do something impossible, the simple rule is: only one conversion.
Consider as a convoluted example:
template <typename T>
struct A {
    operator A<A<T>>() const;   // A<int> --> A<A<int>>
};

int x = A<int>();
Now, there can potentially be a specialization for A<A<A<....A<int>...>>> that might have a conversion to int, so the compiler would have to recursively instantiate infinite versions of A and check whether each one of them is convertible to int.
Conversely, with a pair of types that are convertible from and to any other type, removing the limit would cause other issues:
struct A {
    template <typename T>
    A(T);
};

struct B {
    template <typename T>
    operator T() const;
};

struct C {
    C(A);
    operator B() const;
};
If multiple user-defined conversions were allowed, any type T could be converted to any type U by means of the conversion path T -> A -> C -> B -> U. Just managing that search space would be a hard task for the compiler, and it would most probably confuse users.
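For reference, a minimal non-template illustration of the rule itself (the types are made up): one user-defined conversion is applied implicitly, two are not.
struct X { };
struct Y { Y(const X&) { } };   // X -> Y is one user-defined conversion
struct Z { Z(const Y&) { } };   // Y -> Z is another

void take(Z) { }

int main() {
    X x;
    // take(x);     // error: would need X -> Y -> Z, i.e. two user-defined conversions
    take(Z(Y(x)));  // fine: each step is requested explicitly
}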
IMHO it is a design choice based on the fact that constructors are not explicit by default, which is a practical reason for setting a limit: to keep the following expression from being valid.
objectn o = 5;
5-> object1(5)->object2(object1)->object3(object2)->...->objectn(objectn-1)
Each arrow above is an implicit conversion. One seems to be a reasonable choice. If more were allowed, you would have several implicit conversion paths between the source value and objectn, each one leading to a possibly different objectn o. Which one should the compiler choose?
If the allowed number were infinite, you could quite quickly end up with an attempted circular conversion sequence.
It's much easier to say "if you need more than one conversion, you have a design flaw" and use that rationale to emplace a bug-saving limit within the language.
In any case, implicit conversions are generally considered to be best when used sporadically. One conversion is very useful for the sort of heavy operator overloading used by the standard library, and similar code; beyond that, there's not much excuse to ever use implicit conversions. Certainly a language would go in the direction of seeking fewer, not more (e.g. C++11 adding explicit in this context, to help you enforce no implicit conversions at all).
If it allowed more than one, then how many in the sequence? If infinite, wouldn't that become too complicated for a large number of types in a large project? For example, every time you added an implicit conversion, you would have to think really hard about what else it converts into, and then what else that converts into, and so the chain goes on. A very dangerous situation, IMHO.
Implicit conversions (what is currently allowed by the language) are considered problematic, which is why C++11 added explicit conversion operators and the notion of contextual conversion (e.g. to bool in conditions).
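As a quick illustration of that last point (the type here is made up): an explicit conversion operator still participates in contextual conversion to bool, but not in ordinary implicit conversions.
struct Handle {
    explicit operator bool() const { return valid; }  // opts out of implicit conversion
    bool valid = false;
};

int main() {
    Handle h;
    if (h) { }                       // OK: contextual conversion to bool
    // bool b1 = h;                  // error: the conversion operator is explicit
    bool b2 = static_cast<bool>(h);  // OK: conversion requested explicitly
    return b2 ? 1 : 0;
}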
I want to know what are the semantic differences between the C++ full concepts proposal and template constraints (for instance, constraints as appeared in Dlang or the new concepts-lite proposal for C++1y).
What are full-fledged concepts capable of doing than template constraints cannot do?
The following information is out of date. It needs to be updated according to the latest Concepts Lite draft.
Section 3 of the constraints proposal covers this in reasonable depth.
The concepts proposal has been put on the back burner for a short while in the hope that constraints (i.e. concepts-lite) can be fleshed out and implemented on a shorter time scale, currently aiming for at least something in C++14. The constraints proposal is designed to act as a smooth transition to a later definition of concepts. Constraints are part of the concepts proposal and are a necessary building block in its definition.
In Design of Concept Libraries for C++, Sutton and Stroustrup consider the following relationship:
Concepts = Constraints + Axioms
To quickly summarise their meanings:
Constraint - A predicate over statically evaluable properties of a type. Purely syntactic requirements. Not a domain abstraction.
Axioms - Semantic requirements of types that are assumed to be true. Not statically checked.
Concepts - General, abstract requirements of algorithms on their arguments. Defined in terms of constraints and axioms.
So if you add axioms (semantic properties) to constraints (syntactic properties), you get concepts.
Concepts-Lite
The concepts-lite proposal brings us only the first part, constraints, but this is an important and necessary step towards fully-fledged concepts.
Constraints
Constraints are all about syntax. They give us a way of statically discerning properties of a type at compile-time, so that we can restrict the types used as template arguments based on their syntactic properties. In the current proposal for constraints, they are expressed with a subset of propositional calculus using logical connectives like && and ||.
Let's take a look at a constraint in action:
template <typename Cont>
requires Sortable<Cont>()
void sort(Cont& container);
Here we are defining a function template called sort. The new addition is the requires clause. The requires clause gives some constraints over the template arguments for this function. In particular, this constraint says that the type Cont must be a Sortable type. A neat thing is that it can be written in a more concise form as:
template <Sortable Cont>
void sort(Cont& container);
Now if you attempt to pass anything that is not considered Sortable to this function, you'll get a nice error that immediately tells you that the type deduced for Cont is not a Sortable type. If you had done this in C++11, you'd have had some horrible error thrown from inside the sort function that makes no sense to anybody.
Constraint predicates are very similar to type traits. They take some template argument type and give you some information about it. Constraints attempt to answer the following kinds of questions about a type:
Does this type have such-and-such operator overloaded?
Can these types be used as operands to this operator?
Does this type have such-and-such trait?
Is this constant expression equal to that? (for non-type template arguments)
Does this type have a function called yada-yada that returns that type?
Does this type meet all the syntactic requirements to be used as that?
However, constraints are not meant to replace type traits. Instead, they will work hand in hand. Some type traits can now be defined in terms of concepts and some concepts in terms of type traits.
Examples
So the important thing about constraints is that they do not care about semantics one iota. Some good examples of constraints are:
Equality_comparable<T>: Checks whether the type has == with both operands of that same type.
Equality_comparable<T,U>: Checks whether there is a == with left and right operands of the given types
Arithmetic<T>: Checks whether the type is an arithmetic type.
Floating_point<T>: Checks whether the type is a floating point type.
Input_iterator<T>: Checks whether the type supports the syntactic operations that an input iterator must support.
Same<T,U>: Checks whether the given types are the same.
You can try all this out with a special concepts-lite build of GCC.
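For a feel of the notation, an Equality_comparable-style constraint might be sketched like this in the concepts-lite style (historic proposal syntax, not valid C++20; the wording in the actual papers differs in detail):
// A constraint is a compile-time predicate over purely syntactic properties of T.
template <typename T>
concept bool Equality_comparable() {
    return requires(T a, T b) {
        { a == b } -> bool;   // the expression must be valid and convertible to bool
        { a != b } -> bool;
    };
}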
Beyond Concepts-Lite
Now we get into everything beyond the concepts-lite proposal. This is even more futuristic than the future itself. Everything from here on out is likely to change quite a bit.
Axioms
Axioms are all about semantics. They specify relationships, invariants, complexity guarantees, and other such things. Let's look at an example.
While the Equality_comparable<T,U> constraint will tell you that there is an operator== that takes types T and U, it doesn't tell you what that operation means. For that, we will have the axiom Equivalence_relation. This axiom says that when objects of these two types are compared with operator== giving true, these objects are equivalent. This might seem redundant, but it's certainly not. You could easily define an operator== that instead behaved like an operator<. You'd be evil to do that, but you could.
Another example is a Greater axiom. It's all well and good to say two objects of type T can be compared with > and < operators, but what do they mean? The Greater axiom says that whenever x is greater than y, then y is less than x. The proposed specification of such an axiom looks like this:
template <typename T>
axiom Greater(T x, T y) {
    (x > y) == (y < x);
}
So axioms answer the following types of questions:
Do these two operators have this relationship with each other?
Does this operator for such-and-such type mean this?
Does this operation on that type have this complexity?
Does this result of that operator imply that this is true?
That is, they are concerned entirely with the semantics of types and operations on those types. These things cannot be statically checked. If this needs to be checked, a type must in some way proclaim that it adheres to these semantics.
Examples
Here are some common examples of axioms:
Equivalence_relation: If two objects compare ==, they are equivalent.
Greater: Whenever x > y, then y < x.
Less_equal: Whenever x <= y, then !(y < x).
Copy_equality: For x and y of type T: if x == y, then a new object created by copy construction satisfies T{x} == y, and still x == y (that is, copying is non-destructive).
Concepts
Now concepts are very easy to define; they are simply the combination of constraints and axioms. They provide an abstract requirement over the syntax and semantics of a type.
As an example, consider the following Ordered concept:
concept Ordered<Regular T> {
    requires constraint Less<T>;
    requires axiom Strict_total_order<less<T>, T>;
    requires axiom Greater<T>;
    requires axiom Less_equal<T>;
    requires axiom Greater_equal<T>;
}
First note that for the template type T to be Ordered, it must also meet the requirements of the Regular concept. The Regular concept is a very basic requirement that the type is well-behaved - it can be constructed, destroyed, copied and compared.
In addition to those requirements, Ordered requires that T meet one constraint and four axioms:
Constraint: An Ordered type must have an operator<. This is statically checked so it must exist.
Axioms: For x and y of type T:
x < y gives a strict total ordering.
When x is greater than y, y is less than x, and vice versa.
When x is less than or equal to y, y is not less than x, and vice versa.
When x is greater than or equal to y, y is not greater than x, and vice versa.
Combining constraints and axioms like this gives you concepts. They define the syntactic and semantic requirements for abstract types for use with algorithms. Algorithms currently have to assume that the types used will support certain operations and express certain semantics. With concepts, we'll be able to ensure that requirements are met.
In the latest concepts design, the compiler will only check that the syntactic requirements of a concept are fulfilled by the template argument. The axioms are left unchecked. Since axioms denote semantics that are not statically evaluable (or often impossible to check entirely), the author of a type would have to explicitly state that their type meets all the requirements of a concept. This was known as concept mapping in previous designs but has since been removed.
Examples
Here are some examples of concepts:
Regular types are constructible, destructible, copyable, and can be compared.
Ordered types support operator<, and have a strict total ordering and other ordering semantics.
Copyable types are copy constructible, destructible, and if x is equal to y and x is copied, the copy will also compare equal to y.
Iterator types must have associated types value_type, reference, difference_type, and iterator_category which themselves must meet certain concepts. They must also support operator++ and be dereferenceable.
The Road to Concepts
Constraints are the first step towards a full concepts feature of C++. They are a very important step, because they provide the statically enforceable requirements of types so that we can write much cleaner template functions and classes. Now we can avoid some of the difficulties and ugliness of std::enable_if and its metaprogramming friends.
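For comparison, here is a sketch of the enable_if formulation that a requires clause replaces; is_sortable is a hypothetical trait standing in for the Sortable constraint above.
#include <type_traits>

// Hypothetical trait standing in for Sortable; a real one would inspect the
// container's iterator and comparison requirements.
template <typename T>
struct is_sortable : std::true_type { };

// C++11-style constraint via enable_if: the intent is buried in the signature,
// and failures surface as long substitution-failure diagnostics.
template <typename Cont,
          typename std::enable_if<is_sortable<Cont>::value, int>::type = 0>
void sort(Cont& container);

// The concepts-lite spelling of the same restriction (proposal syntax):
//
//   template <Sortable Cont>
//   void sort(Cont& container);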
However, there are a number of things that the constraints proposal does not do:
It does not provide a concept definition language.
Constraints are not concept maps. The user does not need to specifically annotate their types as meeting certain constraints. They are statically checked using simple compile-time language features.
The implementations of templates are not constrained by the constraints on their template arguments. That is, if your function template does anything with an object of constrained type that it shouldn't do, the compiler has no way to diagnose that. A fully featured concepts proposal would be able to do this.
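To illustrate that last point, a sketch in the proposal's shorthand syntax (the body is hypothetical): nothing in Sortable promises a size() member, yet the mismatch is not diagnosed until some instantiation actually fails.
template <Sortable Cont>
void sort(Cont& container) {
    auto n = container.size();  // not guaranteed by Sortable, but not diagnosed here
    // ...
    (void)n;
}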
The constraints proposal has been designed specifically so that a full concepts proposal can be introduced on top of it. With any luck, that transition should be a fairly smooth ride. The concepts group are looking to introduce constraints for C++14 (or in a technical report soon after), while full concepts might start to emerge sometime around C++17.
See also "what's 'lite' about concepts lite" in section 2.3 of the recent (March 12) Concepts telecon minutes and record of discussion, which were posted the same day here: http://isocpp.org/blog/2013/03/new-paper-n3576-sg8-concepts-teleconference-minutes-2013-03-12-herb-sutter .
My 2 cents:
The concepts-lite proposal is not meant to do "type checking" of template implementations. I.e., concepts-lite will ensure (notionally) interface compatibility at the template instantiation site. Quoting from the paper: "concepts lite is an extension of C++ that allows the use of predicates to constrain template arguments". And that's it. It does not say that the template body will be checked (in isolation) against the predicates. That probably means there is no first-class notion of archetypes when you are talking about concepts-lite. Archetypes, if I remember correctly, in the concepts-heavy proposal are types that offer no less and no more than what is needed to satisfy the implementation of the template.
Concepts-lite uses glorified constexpr functions with a bit of syntactic trickery supported by the compiler. There are no changes to the lookup rules.
Programmers are not required to write concept maps.
Finally, quoting again "The constraints proposal does not directly address the specification or use of semantics; it is targeted only at checking syntax." That would mean axioms are not within the scope (so far).
(Preamble: I am a late follower to the C++0x game and the recent controversy regarding the removal of concepts from the C++0x standard has motivated me to learn more about them. While I understand that all of my questions are completely hypothetical -- insofar as concepts won't be valid C++ code for some time to come, if at all -- I am still interested in learning more about concepts, especially given how it would help me understand more fully the merits behind the recent decision and the controversy that has followed)
After having read some introductory material on concepts as C++0x (until recently) proposed them, I am having trouble wrapping my mind around some syntactical issues. Without further ado, here are my questions:
1) Would a type that supports a particular derived concept (either implicitly, via the auto keyword, or explicitly via concept_maps) also need to support the base concept independently? In other words, does the act of deriving a concept from another (e.g. concept B<typename T> : A<T>) implicitly include an 'invisible' requires statement (within B, requires A<T>;)? The confusion arises from the Wikipedia page on concepts, which states:
Like in class inheritance, types that meet the requirements of the derived concept also meet the requirements of the base concept.
That seems to say that a type only needs to satisfy the derived concept's requirements and not necessarily the base concept's requirements, which makes no sense to me. I understand that Wikipedia is far from a definitive source; is the above description just a poor choice of words?
2) Can a concept which lists typenames be 'auto'? If so, how would the compiler map these typenames automatically? If not, are there any other occasions where it would be invalid to use 'auto' on a concept?
To clarify, consider the following hypothetical code:
template<typename Type>
class Dummy {};
class Dummy2 { public: typedef int Type; };
auto concept SomeType<typename T>
{
    typename Type;
}
template<typename T> requires SomeType<T>
void function(T t)
{}
int main()
{
    function(Dummy<int>());  // would this match SomeType?
    function(Dummy2());      // how about this?
    return 0;
}
Would either of those classes match SomeType? Or is a concept_map necessary for concepts involving typenames?
3) Finally, I'm having a hard time understanding what axioms would be allowed to define. For example, could I have a concept define an axiom which is logically inconsistent, such as
concept SomeConcept<typename T>
{
    T operator*(T&, int);
    axiom Inconsistency(T a)
    {
        a * 1 == a * 2;
    }
}
What would that do? Is that even valid?
I appreciate that this is a very long set of questions and so I thank you in advance.
I've used the most recent C++0x draft, N2914 (which still has concepts wording in it) as a reference for the following answer.
1) Concepts are like interfaces in that respect. If your type supports a concept, it should also support all "base" concepts. The Wikipedia statement you quote makes sense from the point of view of a type's client - if they know that T satisfies concept Derived<T>, then they also know that it satisfies concept Base<T>. From the type author's perspective, this naturally means that both have to be implemented. See 14.10.3/2.
2) Yes, a concept with typename members can be auto. Such members can be automatically deduced if they are used in definitions of function members in the same concept. For example, value_type for an iterator can be deduced as the return type of its operator*. However, if a type member is not used anywhere, it will not be deduced, and thus will not be implicitly defined. In your example, there's no way to deduce SomeType<T>::Type for either Dummy or Dummy2, as Type isn't used by other members of the concept, so neither class will map to the concept (and, in fact, no class could possibly auto-map to it). See 14.10.1.2/11 and 14.10.2.2/4.
3) Axioms were a weak point of the spec, and they were being constantly updated to make some (more) sense. Just before concepts were pulled from the draft, there was a paper that changed quite a bit - read it and see if it makes more sense to you, or you still have questions regarding it.
For your specific example (accounting for syntactic differences), it would mean that the compiler would be permitted to consider the expression (a*1) to be the same as (a*2), for the purpose of the "as-if" rule of the language (i.e. the compiler is permitted to do any optimizations it wants, so long as the result behaves as if there were none). However, the compiler is not in any way required to validate the correctness of axioms (hence why they're called axioms!) - it just takes them for what they are.