My comprehension of the problem comes from Hailperin et al.'s Concrete Abstractions. I gather that currying translates the evaluation of a function that takes several arguments into the evaluation of a sequence of functions, each taking a single argument. I understand the semantic differences between the two approaches (can I call them that?), but I am sure I have not grasped their practical implications.
Please consider, in OCaml:
# let foo x y = x * y;;
val foo : int -> int -> int = <fun>
and
# let foo2 (x, y) = x * y;;
val foo2 : int * int -> int = <fun>
The results will be the same for the two functions.
But, practically, what makes the two functions different? Readability? Computational efficiency? My lack of experience keeps me from giving this problem an adequate reading.
First of all, I would like to stress that, thanks to compiler optimizations, the two functions above will be compiled to the same assembly code. Without those optimizations, the cost of currying would be too high: an application of a curried function would require allocating a number of closures equal to the number of arguments.
In practice, the curried form is useful for defining partial applications. For example:
let double = foo 2
let double2 x = foo2 (2,x)
Another implication is that the curried form does not need to allocate temporary tuples for the arguments. In the example above, the function double2 creates an unnecessary tuple (2, x) every time it is called.
Finally, the curried form simplifies reasoning about functions: instead of having N families of N-ary functions, we have only unary functions. That lets us type all functions uniformly; for example, the type 'a -> 'b applies to any function, e.g., int -> int, int -> int -> int, etc. Without currying, we would have to encode the number of arguments in the type of a function, with all the negative consequences that entails.
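To make the partial-application point concrete, here is a small sketch putting both styles side by side (the names mirror the definitions above):

```ocaml
(* Curried definition: foo has type int -> int -> int. *)
let foo x y = x * y

(* Partial application: supplying only the first argument
   yields a new function of type int -> int, at the cost of
   one closure allocation. *)
let double = foo 2

(* Uncurried definition: foo2 takes a single int * int tuple. *)
let foo2 (x, y) = x * y

(* To "partially apply" foo2 we must wrap it by hand, and each
   call builds a fresh tuple (2, x). *)
let double2 x = foo2 (2, x)
```

Both double and double2 compute the same results; the difference is in how the missing argument is supplied and what gets allocated per call.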
With the first implementation you can define, for example,
let double = foo 2
The second implementation cannot be partially applied in this way.
I just spent an embarrassing amount of time figuring out that if you're passing a parameterized datatype into a higher-order function in SML, it needs to be in parentheses (); so, for example:
fun f1 p = f2 p will work when called like this (for example): f1(Datatype(parameter)), but will not work if called like f1 Datatype(parameter). I'm sure there's a very simple reason why, but I'm not quite clear on it. Is it something like the datatype and the parameter being "seen" as two things by the function if they are not in parentheses? Thanks!
It's important to realize how functions work in SML. Functions take a single argument and return a single value. This is very easy to understand, but it is very often practically necessary for a function to take more than one value as input. There are two ways of achieving this:
Tuples
A function can take one value that contains multiple values in the form of a tuple. This is very common in SML. Consider for instance:
fun add (x, y) = x + y
Here (x, y) is a tuple inferred to be composed of two ints.
Currying
A function takes one argument and returns one value. But functions are values in SML, so a function can return a function.
fun add x = fn y => x + y
Or just:
fun add x y = x + y
This is common in OCaml, but less common in SML.
Function Application
Function application in SML takes the form of functionName argument. When a tuple is involved, it looks like: functionName (arg1, arg2). But the space can be elided: functionName(arg1, arg2).
Even when tuples are not involved, we can put parentheses around any value. So calling a function with a single argument can look like: functionName argument, functionName (argument), or functionName(argument).
Your Question
f1(Datatype(parameter))
This parses the way you expect.
f1 Datatype(parameter)
This parses as f1 Datatype parameter, which is a curried function f1 applied to the arguments Datatype and parameter.
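The left-associativity of application is easy to check with ordinary functions. Here is a sketch in OCaml, whose application syntax follows the same rule (OCaml is used rather than SML because constructor application behaves slightly differently between the two languages; the point holds for plain functions in both):

```ocaml
(* Application is left-associative: f a b parses as (f a) b. *)
let pair x y = (x, y)

(* pair 1 2 and (pair 1) 2 are literally the same expression
   after parsing, so both produce the tuple (1, 2). *)
let p1 = pair 1 2
let p2 = (pair 1) 2
```

This is exactly why f1 Datatype(parameter) hands the bare constructor Datatype to f1 as its first argument.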
I'm currently learning SML and I have a question about something I have no name for. Let's call it "type alias" for the moment. Suppose I have the following datatype definition:
datatype 'a stack = Stack of 'a list;
I now want to add an explicit "empty stack" case. I can do this by adding it to the datatype:
datatype 'a stack = emptystack | Stack of 'a list;
Now I can pattern match a function like "push":
fun push (emptystack) (e:'a) = Stack([e])
| push (Stack(list):'a stack) (e:'a) = Stack(e::list);
The problem here is that Stack([]) and emptystack are different, but I want them to be the same. Every time SML encounters a Stack([]), it should "know" that this is emptystack (in the case of push, it should then use the emptystack match).
Is there a way to achieve this?
The short answer is: No, it is not possible.
You can create type aliases with the code
type number = int
val foo : number -> int -> number =
fn a => fn b => a+b
val x : int = foo 1 3;
val y : number = foo 1 3;
However, as the name says, this only works for types. Your question is about value constructors, for which there is no such syntax.
Such an aliasing is not possible in SML.
Instead, you should design your datatypes to be unambiguous in their representation, if that is what you desire.
You'd probably be better suited with something that resembles the definition of 'a list more:
datatype 'a stack = EmptyStack | Stack of 'a * 'a stack;
This has the downside of not letting you use the list functions on it, but you do get an explicit empty stack constructor.
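For comparison, the same unambiguous design transliterated into OCaml (a sketch; the helper function names are my own choice):

```ocaml
(* Mirrors the list-like SML definition: the only empty stack
   is EmptyStack, so there is no second representation to alias. *)
type 'a stack = EmptyStack | Stack of 'a * 'a stack

(* push can never recreate an ambiguous "empty" value. *)
let push e s = Stack (e, s)

(* pop returns None on the empty stack, Some (top, rest) otherwise. *)
let pop = function
  | EmptyStack -> None
  | Stack (x, rest) -> Some (x, rest)
```

Because emptiness has exactly one representation, built-in structural equality and pattern matching behave the way the question wants without any aliasing.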
Since what you want is for one value emptystack to be synonymous with another value Stack [], you could call what you are looking for "value aliases". Values that are compared with the built-in operator = or pattern matching will not allow for aliases.
You can achieve this by creating your own equality operator, but you will lose the ability to use the built-in = (since Standard ML does not support custom operator overloading) as well as the ability to pattern match on the value constructors of your type.
Alternatively, you can construct a normal form for your type and always compare the normal form. Whenever practically feasible, follow Sebastian's suggestion of no ambiguity. There might be situations in which an unambiguous algebraic type will be much more complex than a simpler one that allows the same value to be represented in different ways.
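One way to realise the normal-form idea is a "smart constructor" used instead of applying Stack directly; a sketch in OCaml (mk_stack is a hypothetical helper name, not standard):

```ocaml
type 'a stack = EmptyStack | Stack of 'a list

(* Hypothetical smart constructor: collapses Stack [] into
   EmptyStack, so every value built through it is in normal
   form and structural equality behaves as desired. *)
let mk_stack = function
  | [] -> EmptyStack
  | xs -> Stack xs
```

The caveat is that this only helps if all code builds stacks through mk_stack; a Stack [] constructed directly still compares unequal to EmptyStack.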
I was translating the following Haskell code to OCaml:
data NFA q s = NFA
{ initialState :: q
, isAccepting :: q -> Bool
, transition :: q -> s -> [q]
}
Initially I tried a very literal translation:
type ('q,'s) nfa = NFA of { initialState: 'q;
isAccepting: 'q -> bool;
transition: 'q -> 's -> 'q list }
...and of course this gives a syntax error because the type constructor part, "NFA of" isn't allowed. It has to be:
type ('q,'s) nfa = { initialState: 'q;
isAccepting: 'q -> bool;
transition: 'q -> 's -> 'q list }
That got me to wondering why this is so. Why can't you have the type constructor for a record type just as you could for a tuple type (as below)?
type ('q, 's) dfa = NFA of ('q * ('q -> bool) * ('q -> 's -> 'q list))
Why would you want a type constructor for record types, except because that's your habit in Haskell?
In Haskell, records are not exactly first-class constructs: they are more like syntactic sugar on top of tuples. You can define record field names, use them as accessors, and do partial record updates, but all of that desugars into access by position in plain tuples. The constructor name is therefore necessary to tell one record from another after desugaring: if you had no constructor name, two records with different field names but the same field types would desugar into equivalent types, which would be a bad thing.
In OCaml, records are a primitive notion and they have their own identity. Therefore, they don't need a head constructor to distinguish them from tuples or records of the same field types. I don't see why you would like to add a head constructor, as this is more verbose without giving more information or helping expressivity.
Why can't you have the type constructor for a record type just as you could for a tuple type (as below)?
Be careful! There is no tuple in the example you show, only a sum type with multiple parameters. Foo of bar * baz is not the same thing as Foo of (bar * baz): the former constructor has two parameters, while the latter has only one parameter, which is a tuple. This distinction is made for performance reasons (in memory, the two parameters are packed together with the constructor tag, while the tuple adds an indirection pointer). Using a tuple instead of multiple parameters is slightly more flexible: you can match it both as Foo (x, y) -> ... and as Foo p -> ..., the latter not being available for multi-parameter constructors.
There is no asymmetry between tuples and records here: neither has a special status in the sum type construction, which is only a sum of constructors of arbitrary arity. That said, it is easier to use tuples as parameter types for constructors, as tuple types don't have to be declared before use. E.g., some people have asked for the ability to write
type foo =
| Foo of { x : int; y : int }
| Bar of { z : foo list }
instead of the current
type foo = Foo of foo_t | Bar of bar_t
and foo_t = { x : int; y : int }
and bar_t = { z : foo list }
Your request is a particular (and not very interesting) case of this question. However, even with such shorthand syntax, there would still be one indirection pointer between the constructor and the data, making this style unattractive for performance-conscious programs. What could be useful is the ability to have named constructor parameters.
PS: I'm not saying that Haskell's choice of desugaring records into tuples is a bad thing. By translating one feature into another, you reduce the redundancy and overlap of concepts. That said, I personally think it would be more natural to desugar tuples into records (with numerical field names, as is done in, e.g., Oz). In programming language design there are often no "good" and "bad" choices, only different compromises.
You can't have it because the language doesn't support it. It would actually be an easy fit into the type system or the data representation, but it would be a small additional complication in the compiler, and it hasn't been done. So yes, you have to choose between naming the constructor and naming the arguments.
Note that record label names are tied to a particular type, so e.g. {initialState=q} is a pattern of type ('q, 's) nfa; you can't usefully reuse the label name in a different type. So naming the constructor is only really useful when the type has multiple constructors; then, if you have a lot of arguments to a constructor, you may prefer to define a separate type for the arguments.
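The workaround described above, applied to the nfa type from the question, looks like this (a sketch; nfa_rec is an invented name for the auxiliary record type):

```ocaml
(* Declare the record type separately... *)
type ('q, 's) nfa_rec =
  { initialState : 'q;
    isAccepting  : 'q -> bool;
    transition   : 'q -> 's -> 'q list }

(* ...then wrap it in the named constructor. *)
type ('q, 's) nfa = NFA of ('q, 's) nfa_rec
```

You pay one extra indirection and must unwrap the NFA constructor before accessing fields, but you get both the constructor name and the field names.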
I believe there's a patch floating around for this feature, but I don't know if it's up-to-date for the latest OCaml version or anything, and it would require anyone using your code to have that patch.
Here's one thing I haven't seen explicitly addressed in C++ expression template programming (which avoids building unnecessary temporaries by creating trees of inlinable templated objects that only get collapsed at the assignment operator). Suppose for illustration we're modeling 1-D sequences of values, with elementwise application of arithmetic operators like +, *, etc. Call the basic class for fully-evaluated sequences Seq (which, for concreteness, holds a fixed-length list of doubles) and consider the following illustrative pseudo-C++ code.
void f(Seq &a,Seq &b,Seq &c,Seq &d,Seq &e){
AType t=(a+2*b)/(a+b+c); // question is about what AType can be
Seq f=d*t;
Seq g=e*e*t;
//do something with f and g
}
where there are expression templated overloads for +, etc, elsewhere. For the line defining t:
I can implement this code if I make AType be Seq, but then I've created this full intermediate variable when I don't need it (except in how it enables computation of f and g). But at least it's only calculated once.
I can also implement this making AType be the appropriate templated expression type, so that a full Seq isn't created at the commented line, but consumed chunk-by-chunk in f and g. But then the same computation involved in creating every particular chunk will be repeated in both f and g. (I suppose in theory an incredibly smart compiler might realise the same computation is being done twice and CSE-it, but I don't think any do and I wouldn't want to rely on an optimiser always being able to spot the opportunities.)
My understanding is that there is no clever code rewriting and/or usage of templates that allows each chunk of t to be calculated only once while t is still consumed chunkwise rather than all at once?
(I can vaguely imagine AType could be some kind of object that contains both an expression template type and a cached value that gets written after it's evaluated the first time, but that doesn't seem to help with the need to synchronise the two implicit loops in the assignments to f and g.)
In googling, I have come across one Masters thesis on another subject that mentions in passing that manual "common subexpression elimination" should be avoided with expression templates, but I'd like to find a more authoritative "it's not possible" or a "here's how to do it".
The closest Stack Overflow question is Intermediate results using expression templates, which seems to be about the type-naming issue rather than the efficiency issue of creating a full intermediate.
Since you obviously don't want to do the entire calculation twice, you have to cache it somehow. The easiest way to cache it is for AType to be Seq. You say this has the downside of creating a full intermediate variable, but that's exactly what you want in this case: that full intermediate is your cache, and it cannot be trivially avoided.
If you profile the code and this is a bottleneck, then the only faster way I can think of is to write a special function that calculates f and g in parallel, but that would be super-confusing and is very much not recommended:
void calc_fg(Seq &d, Seq &e, Expr &t, Seq &f, Seq &g)
{
    for (int i = 0; i < d.size(); ++i) {
        auto ti = t[i];          // each chunk of t is evaluated exactly once
        f[i] = d[i] * ti;
        g[i] = e[i] * e[i] * ti;
    }
}

void f(Seq &a, Seq &b, Seq &c, Seq &d, Seq &e)
{
    Expr t = (a + 2*b)/(a + b + c);
    Seq fr, gr;                  // named fr/gr so they don't shadow f and calc_fg
    calc_fg(d, e, t, fr, gr);
    // do something with fr and gr
}
A monad is defined as an endofunctor on a category C. Let's say C has the types int and bool and other constructed types as objects. Now let's think about the list monad defined over this category.
By its very definition, List is then an endofunctor: it maps (can this be interpreted as a function?) the type int to List[int] and bool to List[bool], and it maps (again a function?) a morphism int -> bool to
List[int] -> List[bool]
So far, it kind of makes sense. But what throws me into deep confusion is the additional definitions of natural transformations that need to accompany it:
a. Unit... that transforms int into List[int] (doesn't the definition of the List functor already imply this?). This is one major confusion I have.
b. Does the List functor always have to be understood as mapping from int to List[int], never from int to List[bool]?
c. Is the unit natural transformation from int to List[int] different from the map from int to List[int] implied by defining List as a functor? I guess this is just a restatement of my earlier question.
Unit is a natural transformation from the identity functor on C to List. In general, a natural transformation a : F => G between two parallel functors F, G : X -> Y consists of
for each object x of the domain X, a morphism a_x : Fx -> Gx
plus a naturality condition relating the actions of F and G on morphisms
You should think of a natural transformation as above as a way of "going" from F to G. Applying this to your unit-for-List situation: Unit specifies, for each type X, a function Unit_X : X -> List[X], and this is just viewing each instance of your type as a List[X] instance with one element.
I don't understand exactly what you're asking in b., but with respect to c., they are completely different things. There is no map from int to List[int] implied by the definition; what the definition gives you is, for each map f : X -> Y, a map List(f) : List[X] -> List[Y]. What Unit gives you is a way of viewing any type X as one particular kind of list of X's: those with one element.
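The distinction can be seen concretely in OCaml, where the functor's action on morphisms is List.map and Unit is the one-element-list embedding (a sketch; fmap and unit are my own names for the two maps):

```ocaml
(* The functor's action on morphisms: lifts f : 'a -> 'b to
   List.map f : 'a list -> 'b list. It never takes you from a
   plain value into a list. *)
let fmap = List.map

(* The Unit natural transformation: for each type 'a, the
   component 'a -> 'a list embedding a value as a one-element
   list. This is the extra piece the functor alone doesn't give. *)
let unit x = [x]
```

Naturality here says fmap f (unit x) = unit (f x): wrapping and then mapping is the same as applying and then wrapping.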
Hope it helps. From the List[] notation you use, you may come from a Scala/Java background; if so, you may find this intro to category theory in Scala interesting: http://www.weiglewilczek.com/blog/?p=2760
Well, what is really confusing is that a functor F between categories A and B is defined as:
a mapping:
F maps A to F(A) --- does it mean new List()? or why not?
and F maps f : A -> B to F(f) : F(A) -> F(B)
This is how I see functors being defined in the books. Point #1 above (F maps A to F(A)) reads to me like a morphism that converts A into F(A). If that is the case, why do we need the unit natural transformation to go from A to F(A)?
What is very curious is that the functor definition uses the word "map" (but not the word "morphism"). I see that A to F(A) is not called a morphism but a map.