Simple question re: Passing a parameterized datatype in SML

I just spent an embarrassing amount of time figuring out that if you're passing a parameterized datatype into a higher-order function in SML, it needs to be in parentheses (); so, for example:
fun f1 p = f2 p will work when called like this (for example): f1(Datatype(parameter)) but will not work if called like f1 Datatype(parameter). I'm sure there's a very simple reason why, but I'm not quite clear on it. Is it something like the datatype and parameter being "seen" as 2 things by the function if not in parentheses? Thanks!

It's important to realize how functions work in SML. Functions take a single argument and return a single value. This is easy to understand, but in practice a function very often needs more than one value as input. There are two ways of achieving this:
Tuples
A function can take one value that contains multiple values in the form of a tuple. This is very common in SML. Consider for instance:
fun add (x, y) = x + y
Here (x, y) is a tuple inferred to be composed of two ints.
Currying
A function takes one argument and returns one value. But functions are values in SML, so a function can return a function.
fun add x = fn y => x + y
Or just:
fun add x y = x + y
This is common in OCaml, but less common in SML.
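Partial application is the payoff of the curried style. A minimal sketch in OCaml (which the answer mentions; the SML version is written the same way apart from `fun`/`val` keywords):

```ocaml
(* Curried form: add takes one argument and returns a function of the next *)
let add x y = x + y

(* Partial application: supplying only the first argument yields a function *)
let increment = add 1

let three = increment 2
```

`increment` is an ordinary function value of type `int -> int`, obtained without defining it from scratch.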
Function Application
Function application in SML takes the form of functionName argument. When a tuple is involved, it looks like: functionName (arg1, arg2). But the space can be elided: functionName(arg1, arg2).
Even when tuples are not involved, we can put parentheses around any value. So calling a function with a single argument can look like: functionName argument, functionName (argument), or functionName(argument).
Your Question
f1(Datatype(parameter))
This parses the way you expect.
f1 Datatype(parameter)
This parses as (f1 Datatype) parameter: function application is left-associative, so the curried function f1 is applied to the argument Datatype, and the result is then applied to parameter.
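The rule behind this is that function application associates to the left. A small sketch in OCaml syntax (SML applications parse the same way; the names here are made up for illustration):

```ocaml
let add x y = x + y

(* add 1 2 parses as (add 1) 2 *)
let a = add 1 2

(* To pass the result of one application as a single argument,
   parenthesize it: succ (add 1 2).
   Writing succ add 1 2 would parse as ((succ add) 1) 2, a type error. *)
let b = succ (add 1 2)
```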

Related

Currying: practical implications

My comprehension of the problem comes from Hailperin et al.'s "Concrete Abstractions". I gather that currying is the translation of evaluating a function that takes several arguments into evaluating a sequence of functions, each with a single argument. I am clear on the semantic differences between the two approaches (can I call them that?), but I am sure I did not grasp the practical implications behind the two approaches.
Please consider, in Ocaml:
# let foo x y = x * y;;
val foo : int -> int -> int = <fun>
and
# let foo2 (x, y) = x * y;;
val foo2 : int * int -> int = <fun>
The results will be the same for the two functions.
But, practically, what does make the two functions different? Readability? Computational efficiency? My lack of experience fails to give to this problem an adequate reading.
First of all, I would like to stress that, due to compiler optimizations, the two functions above will be compiled into the same assembly code. Without the optimizations, the cost of currying would be too high, i.e., an application of a curried function would require allocating one closure per argument.
In practice, curried functions are useful for partial application. For example:
let double = foo 2
let double2 x = foo2 (2,x)
Another implication is that in curried form you do not need to allocate temporary tuples for the arguments. In the example above, the function double2 creates an unnecessary tuple (2,x) every time it is called.
Finally, the curried form actually simplifies reasoning about functions: instead of having N families of N-ary functions, we have only unary functions. This lets all functions be typed uniformly; for example, the type 'a -> 'b is applicable to any function, e.g., int -> int, int -> (int -> int), etc. Without currying, we would be required to encode the number of arguments in the type of a function, with all the negative consequences that brings.
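That last point can be made concrete: since every curried function is unary, one generic combinator covers all arities. A sketch in OCaml (compose is defined here for illustration):

```ocaml
(* compose : ('b -> 'c) -> ('a -> 'b) -> 'a -> 'c works on any functions,
   because in the curried view every function is unary *)
let compose g f = fun x -> g (f x)

let foo x y = x * y          (* int -> int -> int, i.e. int -> (int -> int) *)
let double = foo 2           (* partial application: int -> int *)
let double_then_succ = compose succ double

let n = double_then_succ 5   (* succ (double 5) *)
```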
With the first implementation you can define, for example
let double = foo 2
the second implementation cannot be partially applied.

How to pass list as variable arguments function in Maxima?

In Maxima there are functions that accept a variable number of arguments, like diag_matrix(a1, a2, ..., an), which creates a diagonal matrix whose diagonal elements are a1, ..., an.
However, I currently have a list [a1, a2, ..., an] and want to create a diagonal matrix from it.
diag_matrix cannot accept the list directly; is there any way to use diag_matrix to create the matrix?
In general, the expression apply(foo, [a1, ..., an]) applies the function foo to the list of arguments [a1, ..., an].
In particular, apply(diag_matrix, [a1, ..., an]) applies diag_matrix to [a1, ..., an]. I think that's what you want here.
Note that apply evaluates all of its arguments, even if foo quotes its arguments, and foo itself is evaluated if it evaluates to something other than itself. Because of this, there is an "apply defeats quoting" idiom in Maxima, which is often useful.

No type constructor for record types?

I was translating the following Haskell code to OCaml:
data NFA q s = NFA
  { initialState :: q
  , isAccepting :: q -> Bool
  , transition :: q -> s -> [q]
  }
Initially I tried a very literal translation:
type ('q,'s) nfa = NFA of { initialState: 'q;
                            isAccepting: 'q -> bool;
                            transition: 'q -> 's -> 'q list }
...and of course this gives a syntax error because the type constructor part, "NFA of" isn't allowed. It has to be:
type ('q,'s) nfa = { initialState: 'q;
                     isAccepting: 'q -> bool;
                     transition: 'q -> 's -> 'q list }
That got me to wondering why this is so. Why can't you have the type constructor for a record type just as you could for a tuple type (as below)?
type ('q, 's) dfa = NFA of ('q * ('q -> bool) * ('q -> 's -> 'q list))
Why would you want a type constructor for record types, except because that's your habit in Haskell?
In Haskell, records are not exactly first-class constructs: they are more like syntactic sugar on top of tuples. You can define record field names, use them as accessors, and do partial record updates, but that desugars into access by position in plain tuples. The constructor name is therefore necessary to tell one record from another after desugaring: if you had no constructor name, two records with different field names but the same field types would desugar into equivalent types, which would be a bad thing.
In OCaml, records are a primitive notion and they have their own identity. Therefore, they don't need a head constructor to distinguish them from tuples or records of the same field types. I don't see why you would like to add a head constructor, as this is more verbose without giving more information or helping expressivity.
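For illustration, the record type from the question can be defined and used directly, with no constructor involved; the automaton below is a toy invented for this sketch:

```ocaml
type ('q, 's) nfa = {
  initialState : 'q;
  isAccepting  : 'q -> bool;
  transition   : 'q -> 's -> 'q list;
}

(* A toy NFA over int states and char symbols (made up for illustration):
   on 'a' it may stay put or jump to state 1; on anything else it stays *)
let m = {
  initialState = 0;
  isAccepting  = (fun q -> q = 1);
  transition   = (fun q s -> if s = 'a' then [q; 1] else [q]);
}

let reachable = m.transition m.initialState 'a'
```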
Why can't you have the type constructor for a record type just as you could for a tuple type (as below)?
Be careful! There is no tuple in the example you show, only a sum type with multiple parameters. Foo of bar * baz is not the same thing as Foo of (bar * baz): the former constructor has two parameters, while the latter has only one parameter, which is a tuple. This differentiation is done for performance reasons (in memory, the two parameters are packed together with the constructor tag, while the tuple adds an indirection pointer). Using tuples instead of multiple parameters is slightly more flexible: you can match both as Foo (x, y) -> ... and as Foo p -> ..., the latter not being available to multi-parameter constructors.
There is no asymmetry between tuples and records here: neither has a special status in the sum type construction, which is only a sum of constructors of arbitrary arity. That said, it is easier to use tuples as parameter types for constructors, as tuple types don't have to be declared before use. E.g., some people have asked for the ability to write
type foo =
| Foo of { x : int; y : int }
| Bar of { z : foo list }
instead of the current
type foo = Foo of foo_t | Bar of bar_t
and foo_t = { x : int; y : int }
and bar_t = { z : foo list }
Your request is a particular (and not very interesting) case of this question. However, even with such shorthand syntax, there would still be one indirection pointer between the constructor and the data, making this style unattractive for performance-conscious programs -- what could be useful is the ability to have named constructor parameters.
PS: I'm not saying that Haskell's choice of desugaring records into tuples is a bad thing. By translating one feature into another, you reduce redundancy/overlapping of concepts. That said, I personally think it would be more natural to desugar tuples into records (with numerical field names, as done in eg. Oz). In programming language design, there are often no "good" and "bad" choices, only different compromises.
You can't have it because the language doesn't support it. It would actually be an easy fit into the type system or the data representation, but it would be a small additional complication in the compiler, and it hasn't been done. So yes, you have to choose between naming the constructor and naming the arguments.
Note that record label names are tied to a particular type, so e.g. {initialState=q} is a pattern of type ('q, 's) nfa; you can't usefully reuse the label name in a different type. So naming the constructor is only really useful when the type has multiple constructors; then, if you have a lot of arguments to a constructor, you may prefer to define a separate type for the arguments.
I believe there's a patch floating around for this feature, but I don't know if it's up-to-date for the latest OCaml version or anything, and it would require anyone using your code to have that patch.
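For readers finding this later: OCaml 4.03 added exactly the shorthand discussed above, under the name "inline records"; the fields live directly in the constructor's block, so there is no extra indirection. A brief sketch, reusing the question's type:

```ocaml
(* Inline records (OCaml >= 4.03): named constructor arguments *)
type ('q, 's) nfa =
  | NFA of { initialState : 'q;
             isAccepting  : 'q -> bool;
             transition   : 'q -> 's -> 'q list }

(* Inline record fields can be matched like regular record fields *)
let start (NFA { initialState; _ }) = initialState

let m = NFA { initialState = 0;
              isAccepting  = (fun q -> q = 0);
              transition   = (fun q _ -> [q]) }
```

One restriction to be aware of: an inline record is not a first-class value, so it cannot escape its constructor (you cannot bind the whole record to a variable and pass it around).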

Why define the unit natural transformation for a monad - isn't this implied by the definition of monad being an endofunctor?

A monad is defined as an endofunctor on a category C. Let's say C has the types int and bool and other constructed types as objects. Now let's think about the list monad defined over this category.
By its very definition, List is then an endofunctor: it maps (can this be interpreted as a function?) the type int to List[int] and bool to List[bool], and it maps (again a function?) a morphism int -> bool to a morphism List[int] -> List[bool].
So, far, it kind of makes sense. But what throws me into deep confusion is the additional definitions of natural transformations that need to accompany it:
a. Unit... that transforms int into List[int] (doesn't the definition of the List functor already imply this?). This is one major confusion I have.
b. Does the List functor always have to be understood as mapping int to List[int], never int to List[bool]?
c. Is the unit natural transformation from int to List[int] different from the map from int to List[int] implied by defining List as a functor? I guess this is just a re-statement of my earlier question.
Unit is a natural transformation from the identity functor on C to List; in general, a natural transformation a : F => G between two parallel functors F, G : X -> Y consists of
for each object x: X of the domain, a morphism a_x : Fx -> Gx
plus a naturality condition relating the action of F and G on morphisms
You should think of a natural transformation as above as a way of "going" from F to G. Applying this to your unit-for-List situation: Unit specifies, for each type X, a function Unit_X : X -> List[X], and this is just viewing instances of your type as List[X] instances with one element.
I don't understand what you're asking exactly in b., but with respect to c., they're completely different things. There is no map from int to List[int] implied by the definition; what the definition gives you is, for each map f : X -> Y, a map List(f) : List[X] -> List[Y]. What Unit gives you is a way of viewing any type X as a particular kind of list of X's: those with one element.
Hope it helps. From the List[] notation you use, maybe you come from a Scala/Java background; if so, you may find this intro to category theory in Scala interesting: http://www.weiglewilczek.com/blog/?p=2760
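The distinction in c. can be made concrete in OCaml: the functor's action on morphisms is List.map, of type ('a -> 'b) -> 'a list -> 'b list, while unit is the singleton function of type 'a -> 'a list; they are different things with different types, and the naturality square for unit checks out by computation. A small sketch:

```ocaml
(* unit: view a value as a one-element list *)
let unit x = [x]

(* the functor's action on the morphism string_of_int is List.map string_of_int *)
let lift = List.map string_of_int

(* naturality of unit at f = string_of_int:
   mapping after unit equals unit after applying f *)
let lhs = lift (unit 42)
let rhs = unit (string_of_int 42)
```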
Well, what is really confusing is that a functor F between Cat A and Cat B is defined as:
a mapping:
F maps A to F(A) --- does it mean new List()? Or why not?
and F maps f : A -> B to F(f) : F(A) -> F(B)
This is how I see these being defined in the books. Point #1 above (F maps A to F(A)) reads to me like a morphism to convert A into F(A). If that is the case, why do we need the unit natural transformation to go from A to F(A)?
What is very curious is that the functor definition uses the word map (but does not use the word morphism). I see that A to F(A) is not called a morphism but a map.

Boost.Bind - understanding placeholders

I am trying to understand the following example, which is similar (but not identical) to the one posted earlier in the SO question Help understanding boost::bind placeholder arguments:
#include <boost/bind.hpp>
#include <functional>

struct X {
    int value;
};

int main() {
    X a = { 1 };
    X b = { 2 };
    boost::bind(std::less<int>(),
                boost::bind(&X::value, _1),
                boost::bind(&X::value, _2))
        (a, b);
}
How is it possible that the outermost bind function knows that it has to pass the first argument to the second bind (which expects _1), and the second argument to the third bind (which expects _2)? The way I see it, the inner binders are evaluated first, so they become two unary function objects that are later passed to the binder of the less<int> object. And when the newly created function object is invoked with two objects, a goes to the first inner bind, and b goes to the second. If I were right, we would use _1 twice. I must be wrong. I will repeat my question once again to make my problem clear: how does the outer binder know which placeholder was used in which inner binder?
The arguments are packed into the tuple (a, b) and passed to the inner functors; each inner functor then decides which tuple element it needs. E.g., try:
boost::bind(&X::value, _1)(a,b)
boost::bind(&X::value, _2)(a,b)
More generally, every value, whether it is a constant, reference, or placeholder, is represented as a functor that takes the argument tuple and returns a value.
bind(f, 10)(a) // still a functor, which discards its arguments
Now, I am not a hundred percent sure this is how bind does it.
However, this is how Phoenix implements its functionality. If you are trying to understand the mechanism behind bind/lambda implementations, look at Phoenix; it is very extensible and has excellent documentation.
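To make that model concrete (a sketch of the idea only, not Boost's actual implementation), here is the "every value is a functor over the argument tuple" scheme in OCaml, with placeholders as selector functions; all names are invented for illustration:

```ocaml
(* Placeholders are selectors on the full argument tuple *)
let _1 (a, _) = a
let _2 (_, b) = b

(* bind2 models binding a binary function to two inner functors:
   the resulting functor passes the *whole* argument tuple to both,
   and each inner functor picks out what it needs *)
let bind2 f g h = fun args -> f (g args) (h args)

(* analogue of bind(less<int>(), bind(&X::value,_1), bind(&X::value,_2)) *)
let less_by_value = bind2 (<) _1 _2

let r = less_by_value (1, 2)   (* both selectors received (1, 2) *)
```

This shows why _1 is not "used twice": the outer call never routes arguments to specific inner binders; it hands the full tuple to every one of them.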