Dependent signature specialization - sml

Can I specialize a type in a signature using types declared earlier in the same signature? Here is an example:
signature A = sig
type t
type s
end
Can I specialize A by the following?
signature B = A where type s = t list
Both SML/NJ and MLton complain that t is unbound.

No, that indeed cannot be done directly. The reasons are rather technical: it is not easy to give such an operation a well-behaved semantics in the general case.
The closest you can get is by introducing another auxiliary type:
signature B =
sig
type t'
include A where type t = t' where type s = t' list
end
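With that, a structure can match B; here is a minimal sketch (the structure name and the choice of int are only for illustration):
structure IntB : B =
struct
type t' = int
type t = int
type s = int list
end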

What's the original type for a member function template during partial ordering?

Consider this example:
struct A { };
template<class T> struct B {
template<class R> int operator*(R&); // #1
};
template<class T, class R> int operator*(T&, R&); // #2
The partial ordering will apply to #1 and #2 to select the best viable function template.
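To make the comparison concrete, here is a call (not part of the original question; main and the variable names are made up) in which both #1 and #2 are exact matches, so overload resolution has to fall back to partial ordering:
int main() {
A a;
B<A> b;
b * a; // #1 and #2 both match exactly; partial ordering between the two templates decides which is called
}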
Two sets of types are used to determine the partial ordering. For each of the templates involved there is the original function type and the transformed function type. The deduction process uses the transformed type as the argument template and the original type of the other template as the parameter template. This process is done twice for each type involved in the partial ordering comparison: once using the transformed template-1 as the argument template and template-2 as the parameter template and again using the transformed template-2 as the argument template and template-1 as the parameter template.
Partial ordering selects which of two function templates is more specialized than the other by transforming each template in turn (see next paragraph)
To produce the transformed template, for each type, non-type, or template template parameter (including template parameter packs thereof) synthesize a unique type, value, or class template respectively and substitute it for each occurrence of that parameter in the function type of the template. [ Note: The type replacing the placeholder in the type of the value synthesized for a non-type template parameter is also a unique synthesized type.  — end note ] If only one of the function templates M is a non-static member of some class A, M is considered to have a new first parameter inserted in its function parameter list. Given cv as the cv-qualifiers of M (if any), the new parameter is of type “rvalue reference to cv A” if the optional ref-qualifier of M is && or if M has no ref-qualifier and the first parameter of the other template has rvalue reference type. Otherwise, the new parameter is of type “lvalue reference to cv A”.
So, the original type for #2 is int operator*(T&, R&) and its transformed type is int operator*(UniqueA&, UniqueB&); there is no doubt about the original type of #2. However, I don't know what the original type for #1 (the member function template) is.
The structure of that rule suggests that the emphasized part should be considered a step in producing the transformed template.
So, is the original type of #1 int operator*(B<T>&, R&) or int operator*(R&)? If it's the latter, that wouldn't be consistent with common sense: since int operator*(R&) and int operator*(T&, R&) don't have the same number of parameters, how could they be compared (A against P)?
How should the rule for producing the transformed template be read? If the emphasized part is not a step of the transformation but rather a general rule for member functions during partial ordering, isn't it misleading to place it after the description of the transformation?
Yes, it's a bit of a mess. As you've observed, it doesn't make sense for the "original type" of a non-static class member function to lack the inserted this parameter.
The only way to make it work is for the subsequent paragraphs in subclause 3 to apply to the "original" function type on the other side of [temp.deduct.partial] as well as the "transformed" function type. You could just about read it that way for what is presently the first sentence of the second paragraph, reading "each" as applying to both the transformed and original function type:
Each function template M that is a member function is considered to have a new first parameter of type X(M) [...]
However, since the resolution to CWG 2445 in P2108 we have another sentence:
If exactly one of the function templates was considered [...] via a rewritten candidate with a reversed order of parameters, then the order of the function parameters in its transformed template is reversed.
So we quite clearly have this reversal applying asymmetrically, giving an absurd result. On the upside, it's fairly clear how it should read: the adjustments to the function type (the this-parameter insertion and parameter reversal) should apply prior to unique type synthesis/substitution, and should apply to the "original" and transformed function types equally.
As far as I can tell, this defect does not appear to have been reported to the Core Language Working Group; it does not appear in the C++ Standard Core Language Active Issues list.

Meaning of "deduced A" in the context of type deduction from a call

If P is a class and P has the form simple-template-id, then the transformed A can be a derived class D of the deduced A.
from [temp.deduct.call]/4.3
This sentence describes how a function call argument can still be valid if its type is derived from the "deduced A". However, there is no solid definition of what "deduced A" actually is.
My theory is that deduced A is original P with template arguments from A substituted in, but this would break the rules of type deduction trying to find template arguments to make A and deduced A identical, as there would be cases with A being a non-reference and deduced A being a reference.
The goal of function template argument deduction is to figure out which particular specialization of a function template should be used in places where the template name is used like a function name. For example, given a function template
template <typename T>
void f(T* value) {}
when you then have a function call like
int x = 0;
int* a = &x;
f(a);
the name f here is not actually the name of a function but the name of a function template. The compiler has to figure out which concrete specialization of the function template this call should actually be calling based on the types of the arguments given in the function call. In other words, it has to figure out which template argument X should be used for the template parameter T to get to an actual function f<X> that could be called here like that.

This is a bit of an inverse problem compared to a normal function call. Rather than having to make a list of arguments fit a given signature (by applying conversions), we're now having to make a signature fit a given list of arguments. Another way of looking at it is as trying to deduce template arguments that will make the type of each function parameter match the type of each function call argument. This is what [temp.deduct.call]/4 is talking about here:
In general, the deduction process attempts to find template argument values that will make the deduced A identical to A
Taking our example above, given some deduced template argument X, the deduced argument type is what we get by substituting our deduced X for T into our function parameter type T* (i.e., the type of argument this function parameter takes). If we deduce X to be int, substituting int for T into T* makes our deduced argument type come out to be int*. Since the deduced argument type int* is identical to the type of the actual argument, we've found that the function f<int> is what we were looking for.
To make all of this consistent with how normal function calls behave, there are a few corner cases to take care of. In particular with function call arguments of array and function types, where we normally have array-to-pointer and function-to-pointer decay, as well as top-level const. To deal with this, the standard specifies that the argument type A we're trying to match is not simply taken to be the type of the corresponding function call argument directly but is first transformed by applying the array-to-pointer, function-to-pointer, etc. conversions. This transformed A is the A we're actually trying to make our deduced argument type match. This is just to explain why the standard talks about a "transformed A" there. It's not really that important to the question at hand. The transformed A is just the function argument type we're actually trying to match.
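As a small illustration (not from the original answer) of this transformed A, consider array-to-pointer decay:
template <typename T>
void g(T* p) {}

void caller() {
int arr[3];
// The declared type of the argument is int[3]; after array-to-pointer
// conversion the transformed A is int*. Deduction then makes the deduced A
// (T* with T deduced as int) identical to that transformed A, so g<int> is called.
g(arr);
}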
Now, let's say we have some
template <typename T> class B {};
and some derived class
class D : public B<int> {};
When you then have a function template like
template <typename T>
void f(const B<T>*) {}
and a function call like this
D d;
f(&d);
there is no template argument X you could pick for T that would make the deduced argument type const B<X>* equal to D*. But since D is derived from B<int>, deducing the template argument to be int would nevertheless lead to a function specialization f<int> that could take the call. The whole paragraph [temp.deduct.call]/4.3 and especially the sentence from your question
If P is a class and P has the form simple-template-id, then the transformed A can be a derived class D of the deduced A.
is there to allow exactly this to work…

expression 'T' is of type 'type int' and has to be discarded

Say I just want a template to "generate" a type from a generic argument, and use the template's invocation in places where a type is expected:
template p[T] = T
var a: p[int]()
(3, 14) Error: expression 'T' is of type 'type int' and has to be discarded
lol, really?
I hope I'm just being a newbie and there is indeed a way (hopefully uncontrived) to do that.
Note that it's the same output with a non-generic attempt:
template p(t: typedesc) = t
var a: p(int)
EDIT: Reading this insightful answer, I realized the system might feel more patted on the back if we specified the template's return type; adding : untyped before = t got the previous snippets to build. Any explanation?
template p[T] = T
var a: p[int]()
This is the same as:
template p[T]: void = T
var a: p[int]()
You're telling the compiler that your template returns nothing; this is why it complains.
So you need to specify the return type...
template p[T]: typedesc = T
var a: p[int]()
Then it works fine. This behaviour extends to procedures and methods in Nim: not specifying a return type means that there is no return value.
Compile-time functions mapping from types to types in Nim are usually implemented with typedesc parameters. Compared to generic params, this has the additional benefit of allowing you to provide multiple overloads that handle different types in different ways:
type
  Foo = object
# handle all integer types:
template myTypeMapping(T: typedesc[SomeInteger]): typedesc = string
# specific handling of the Foo type:
template myTypeMapping(T: typedesc[Foo]): typedesc = seq[string]
# all sequence types are not re-mapped:
template myTypeMapping(T: typedesc[seq]): typedesc = T
Please note that you always need to specify that your template has a typedesc return type.
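A minimal usage sketch (the variable names are made up), assuming the overloads above:
var a: myTypeMapping(int)        # a is a string
var b: myTypeMapping(Foo)        # b is a seq[string]
var c: myTypeMapping(seq[float]) # c stays a seq[float]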

OCaml syntax: what does type 'a t mean?

This is about type definition in OCaml, I find the following syntax puzzling:
type 'a t
What does it mean in plain English?
Since the OP has experience with the C++ language, I think the following explanation might be useful. A type declaration of the form:
type 'a t
is close to the C++
template <typename a> class t;
For example, 'a list is a generic list, and 'a is the type of an element. For concision, we use a single ' instead of the template <typename _> construct. In OCaml parlance, we use the term "parametric polymorphism" instead of "generic programming", and instead of the word template, we say type constructor. The latter has an interesting consequence. Just as in C++ a template instantiation creates new instances of types, in OCaml concretizing a type variable of a polymorphic type creates a new type, e.g., int list, float list (cf. list<int>, list<float>). So one can view the type constructor 'a list as a unary function at the type level: it accepts a type and creates a type. It is possible to have n-ary type constructors, e.g., type ('key, 'value) hashtbl is a binary type constructor that creates a type for a given pair of key and value types. Moreover, we can see non-parametric types as nullary type constructors, so int constructs the type int.
P.S. The F# language, a descendant of OCaml, allows writing this in both forms: int t and t<int>.
P.P.S. To prevent possible confusion, I would like to state that although templates and parametric types are trying to solve the same problems, they still have a few differences. Templates are typed after instantiation, parametric types before; so a parametric type 'a t is defined for all 'a. If you want to create a type where a type variable is not universally quantified, you can use another mechanism: functors. They are also very close to templates, but they accept a type plus type requirements, a concept in C++ parlance. Concepts are reified as module types in OCaml, and thus a functor is actually a function at the module level: it accepts a module and produces a module.
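To illustrate that P.P.S. (all names below are made up, and the Int module assumes OCaml >= 4.08), a functor takes a module matching a signature, which plays the role of a C++ concept:
(* the "concept": requirements on the argument module *)
module type Comparable = sig
type t
val compare : t -> t -> int
end

(* the functor: a function at the module level *)
module MakeSet (C : Comparable) = struct
type elt = C.t
type t = elt list
let empty : t = []
let mem x s = List.exists (fun y -> C.compare x y = 0) s
end

(* "instantiation": applying the functor to a module *)
module IntSet = MakeSet (Int)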
This is a parametric type declaration.
A type declaration allows you to declare a new datatype:
type my_type = int * string
let x : my_type = (42,"Sorry for the inconvenience")
Sometimes however, you want a type to be parametric, meaning it takes another type as argument:
type 'a container = 'a * string * 'a
let x : int container = (0, "hello", 1)
let y : string container = ("stack", "over", "flow")
Now in that case, there is no equals sign after your type declaration. The meaning of that depends on whether it is in a module's structure (e.g., at the top of a .ml file) or in a signature (e.g., in a .mli).
If it is in a structure, it declares a type with no values inside, which is about as useful as an empty set (sometimes it is useful, but not often). However, if it is in a signature, it means "there exists a parametric definition somewhere, but it is not visible from here".
Suppose there are those two files a.ml and a.mli:
(* a.ml *)
type 'a t = Nil | Cons of 'a * 'a t
let empty = Nil
let add x l = Cons (x,l)
(* and so on... *)
(* a.mli *)
type 'a t
val empty : 'a t
val add : 'a -> 'a t -> 'a t
(* and so on... *)
If in the rest of your program you want to manipulate the A.t type, you'll be able to do so only through empty, add, and the other functions defined in the interface, but not by using Nil and Cons directly.
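For example, a client (a hypothetical b.ml) can only build values of A.t through that interface:
(* b.ml *)
let l : int A.t = A.add 1 (A.add 2 A.empty)
(* A.Cons (1, A.Nil) would be rejected here: the constructors are not visible *)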

Functor signature in OCaml

I'm a bit confused by the fact that (apparently) a functor's signature in OCaml can be defined in two (seemingly) completely different ways. E.g., in the .mli file I can write:
module type A = sig
type a
end
module type SigA = sig
type t
end
module Make (MA : SigA) :
A with type a := MA.t
module type Make2 = functor (MA : SigA) -> A with type a := MA.t
As far as I understand, in the example above Make and Make2 have entirely identical signatures, yet the syntax looks quite radically different.
Am I missing something? Is there any difference?
So is there a reason for two separate syntactic constructs? Historical reasons? IMO it is a bad practice to have two separate syntactic constructs serving the same purpose.
This is syntactic sugar, similar to that for functions (i.e., let f x y = ... is shorthand for let f = fun x -> fun y -> ...). The motivation is presumably that in the long form, multiple-argument functors become quite hard to read:
module type A = sig end
module type B = sig end
module type C = sig end
module Foo (A:A) (B:B) (C:C) = struct
end
module Foo2 = functor (A:A) -> functor (B:B) -> functor (C:C) -> struct
end
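The same sugar exists for signatures, so the two declarations from the question are two spellings of one thing; a sketch (Make' is just a second name for illustration):
(* long form, spelled with functor *)
module Make : functor (MA : SigA) -> A with type a := MA.t
(* sugared form, equivalent *)
module Make' (MA : SigA) : A with type a := MA.t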