Why the boilerplate when writing new monad transformers?

This section http://book.realworldhaskell.org/read/monad-transformers.html#id659032 from the book Real World Haskell suggests that when writing a new Monad Transformer, we have to derive instances for MonadState, MonadIO, etc. manually.
But I tried the following and it compiled. Why is it not done in the library?
Say I have the MaybeT monad transformers:
newtype MaybeT m a = MaybeT {
  runMaybeT :: m (Maybe a)
}
instance Monad m => Monad (MaybeT m) where -- blah blah
instance MonadTrans MaybeT where
  lift = MaybeT . liftM Just
Then once we know that t is a MonadTrans and m is a Monad, why can't everything else be automatically derived like this?
instance (MonadTrans t, Monad (t m), MonadIO m) => MonadIO (t m) where
  liftIO = lift . liftIO

instance (MonadTrans t, Monad (t m), MonadState s m) => MonadState s (t m) where
  get = lift get
  put = lift . put
Does the author mean we have to do this manually for each new MonadTrans instance, or am I misreading him?
Thank you very much :)

The reason why they don't do this is very simple:
First, it would break a lot of old code if this were added, because such a catch-all instance needs extensions like UndecidableInstances, and GHC would then have to decide between the automatic instance and any manually defined one. That would be very cumbersome.
Second, what if you want to define an instance different from the above, perhaps for performance reasons or for some special-purpose hack? I think this little bit of boilerplate is preferable to the inability, or higher cost (because of the trickery needed to tell GHC which instance you want), of defining customized instances that a built-in instance would bring.
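To make the first point concrete, here is a minimal sketch (my own illustration, not code from any library) of what accepting such a catch-all instance involves:
{-# LANGUAGE FlexibleContexts, FlexibleInstances, UndecidableInstances #-}

import Control.Monad.Trans.Class (MonadTrans (lift))
import Control.Monad.IO.Class (MonadIO (liftIO))

-- The instance head (t m) says nothing concrete about t or m, so GHC's
-- termination checks fail and UndecidableInstances is required.
instance (MonadTrans t, Monad (t m), MonadIO m) => MonadIO (t m) where
  liftIO = lift . liftIO

-- Any hand-tuned instance a library might want, e.g.
--   instance MonadIO m => MonadIO (MyT m) where liftIO = ...
-- now overlaps with the catch-all one, and resolving that overlap at use
-- sites needs yet more trickery.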

An alternative Alternative for lists

A few times now, I've found myself defining:
(<?>) :: [a] -> [a] -> [a]
[] <?> ys = ys
xs <?> _ = xs
This is an associative operation, of course, and the empty list [] is both left- and right-identity. It functions like Python's or.
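For instance, with the definition above loaded into GHCi:
GHCi> [1,2] <?> [3,4]
[1,2]
GHCi> [] <?> [3,4]
[3,4]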
It seems to me that this would make a nice (<|>), better than (++) would. Choosing the first nonempty list feels more like what I would expect from a typeclass named Alternative than concatenating lists. Admittedly, it doesn't fit MonadPlus as well, but I think that's a small price to pay for salvation. We already have (++) and (<>) in the standard library; do we need another synonym, or would a new function (as far as I can tell) be more helpful?
I was at first thinking this might be a good Alternative instance for ZipList, but the discussion following this answer on the relevant question has convinced me otherwise. Other than backwards-compatibility and keeping MonadPlus sensible, what arguments are there for the current instance rather than this new one?
It is tricky to give your question a straight answer. Considered in isolation, there is nothing fundamentally wrong with your proposed instance. Still, there are quite a few things that can be said in support of the existing Alternative instance for lists.
Admittedly, it doesn't fit MonadPlus as well, but I think that's a small price to pay for salvation.
One problem with going down that route is that Alternative is meant to capture the same general concept that MonadPlus does, but in terms of Applicative rather than Monad. To quote a relevant answer by Edward Kmett:
Effectively, Alternative is to Applicative what MonadPlus is to Monad.
From that point of view, having mismatching Alternative and MonadPlus instances is confusing and misleading, much like the analogous situation with Applicative and Monad instances would be.
(A possible counter to this argument would be wondering why we need to care about MonadPlus anyway, given that it expresses the same concepts and offers essentially the same methods as Alternative. It should be noted, though, that the MonadPlus laws are stronger than the Alternative ones, as the relevant interactions of its methods with the Monad ones aren't expressible in terms of Alternative. That being so, MonadPlus still has meaning of its own, and a conceivable outcome of a hypothetical reform of the classes would be retaining it as a laws-only class, as discussed for instance in the final section of this answer by Antal Spector-Zabusky.)
Given such considerations, in what follows I will assume the continued relevance of MonadPlus. That makes writing the rest of this answer much easier, as MonadPlus is the original expression of the general concept in Haskell, and so it is pretty much necessary to refer to it while tracing the origin of the list instance of Alternative.
It seems to me that this would make a nice (<|>), better than (++) would. Choosing the first nonempty list feels more like what I would expect from a typeclass named Alternative than concatenating lists.
Tracing the roots of MonadPlus and Alternative, though, shows that the concatenating list instance is not just well-established, but even paradigmatic. For instance, quoting the classic paper by Hutton and Meijer, Monadic parsing in Haskell (1998), p. 4:
That is, a type constructor m is a member of the class MonadZero if it is a member of the class Monad, and if it is also equipped with a value zero of the specified type. In a similar way, the class MonadPlus builds upon the class MonadZero by adding a (++) operation of the specified type.
(Note that the authors do use (++) as their name for mplus.)
The notion mplus captures here is that of non-deterministic choice: if computations u and v each have some possible results, the possible results of u `mplus` v will be all of the possible results of u and v. The most elementary realisation of that is through MonadPlus for lists, though the idea extends to cover other non-determinism monads, such as Hutton and Meijer's Parser:
newtype Parser a = Parser (String -> [(a,String)])
To spin it another way, we might describe non-deterministic choice as inclusive disjunction, while the operation you propose is a form of (left-biased) exclusive choice. (It is worth noting that Hutton and Meijer also define (+++), a deterministic choice operator for their Parser which is rather like your operator except that it only picks the first result of the first successful computation.)
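To make that contrast concrete, here is a rough sketch of the two choice operators for such a parser type; the names orBoth and orFirst are mine, where the paper writes (++) and (+++):
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

-- Non-deterministic choice: all results of both parsers.
orBoth :: Parser a -> Parser a -> Parser a
orBoth p q = Parser (\cs -> parse p cs ++ parse q cs)

-- Deterministic choice, in the spirit of (+++): keep at most the first result.
orFirst :: Parser a -> Parser a -> Parser a
orFirst p q = Parser (\cs -> case parse (orBoth p q) cs of
                               []      -> []
                               (x : _) -> [x])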
A further relevant observation: one of the monad transformers from transformers that doesn't have an mtl class counterpart is ListT. That is so because the class which generalises the ListT functionality is precisely MonadPlus. Quoting a Gabriella Gonzalez comment:
MonadPlus is basically the "list monad" type class. For example: cons a as = return a `mplus` as and nil = mzero.
Note that the brokenness of transformers' ListT is not an issue. In general, the various formulations of ListT-done-right are equipped with a concatenating MonadPlus instance (examples: one, two, three).
So much for reasons why we might want to leave the Alternative [] and MonadPlus [] instances as they are. Still, this answer would be lacking if it didn't recognise that, as Will Ness reminds us, there are multiple reasonable notions of choice, and your operator embodies one of them.
The "official" laws (that is, the ones actually mentioned by the documentation) of Alternative and MonadPlus don't specify a single notion of choice. That being so, we end up with both non-deterministic (e.g. mplus @[]) and deterministic (e.g. mplus @Maybe) choice instances under the same Alternative/MonadPlus umbrella. Furthermore, if one chose to disregard my argument above and replace mplus @[] with your operator, nothing in the "official" laws would stop them. Over the years, there has been some talk of reforming MonadPlus by splitting it into classes with extra laws, in order to separate the different notions of choice. The odds of such a reform actually happening, though, don't seem high (lots of churn for relatively little practical benefit).
For the sake of contrast, it is interesting to consider the near-semiring interpretation, which is one of the reimaginings of MonadPlus and Alternative that might be invoked in a hypothetical class hierarchy reform. For a fully fleshed-out account of it, see Rivas, Jaskelioff and Schrijvers, A Unified View of Monadic and Applicative Non-determinism (2018). For our current purposes, it suffices to note that the interpretation tailors the classes to non-deterministic choice by adding, to the monoid laws, "left zero" and "left distribution" laws for Alternative...
empty <*> x = empty
(f <|> g) <*> x = (f <*> x) <|> (g <*> x)
... as well as for MonadPlus:
mzero >>= k = mzero
(m1 `mplus` m2) >>= k = (m1 >>= k) `mplus` (m2 >>= k)
(Those MonadPlus laws are strictly stronger than their Alternative counterparts.)
In particular, your choice operator follows the purported Alternative left distribution law, but not the MonadPlus one. In that respect, it is similar to mplus @Maybe. MonadPlus left distribution makes it difficult (probably impossible, though I don't have a proof at hand right now) to drop any results in mplus, as we can't tell, on the right hand side of the law, whether m1 >>= k or m2 >>= k will fail without inspecting the results of m1 and m2. To conclude this answer with something tangible, here is a demonstration of this point:
-- Your operator.
(<?>) :: [a] -> [a] -> [a]
[] <?> ys = ys
xs <?> _ = xs
filter' :: (a -> Bool) -> [a] -> [a]
filter' p xs = xs >>= \x -> if p x then [x] else []
-- If MonadPlus left distribution holds, then:
-- filter' p (xs `mplus` ys) = filter' p xs `mplus` filter' p ys
GHCi> filter' even ([1,3,5] <|> [0,2,4])
[0,2,4]
GHCi> filter' even [1,3,5] <|> filter' even [0,2,4]
[0,2,4]
GHCi> filter' even ([1,3,5] <?> [0,2,4])
[]
GHCi> filter' even [1,3,5] <?> filter' even [0,2,4]
[0,2,4]
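For contrast, a quick check (in the same hypothetical GHCi session) that the Alternative left distribution law is respected by the proposed operator:
GHCi> ([(+1)] <?> [(*2)]) <*> [10,20]
[11,21]
GHCi> ([(+1)] <*> [10,20]) <?> ([(*2)] <*> [10,20])
[11,21]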

Flat lists and free monads

I am trying to convince myself that the List monad (the one with flat lists, concatenation of lists and map element-wise) is not a free monad (to be precise, not the free monad associated to any functor T). As far as I understand, I should be able to achieve that by
first finding a relation in the monad List between the usual operators fmap, join etc,
then showing that this relation does not hold in any free monad over a functor T, for all T.
What is a peculiar relation that holds in the List monad, that sets it apart from the free monads? How can I handle step 2 if I don't know what T is? Is there some other strategy to show that flat lists are not free?
As a side note, to dispel any terminology clash, let me remark that the free monad associated to the pair functor is a tree monad (or a nested list monad); it is not the flat List monad.
Edit: for people acquainted with the Haskell programming language, the question can be formulated as follows: how to show that there is no functor T such that List a = Free T a (for all T and up to monad isomorphism)?
(Adapted from my post in a different thread.)
Here is a full proof why the list monad is not free, including a bit of context.
Recall that we can construct, for any functor f, the free monad over f:
data Free f a = Pure a | Roll (f (Free f a))
Intuitively, Free f a is the type of f-shaped trees with leaves of type a.
The join operation merely grafts trees together and doesn't perform any
further computations. Values of the form (Roll _) shall be called
"nontrivial" in this posting. The task is to show that for no functor f, the monad Free f is isomorphic to the list monad.
The intuitive reason why this is true is because the
join operation of the list monad (concatenation) doesn't merely graft
expressions together, but flattens them.
More specifically, in the free monad over any functor, the result of binding a nontrivial
action with any function is always nontrivial, i.e.
(Roll _ >>= _) = Roll _
This can be checked directly from the definition of (>>=) for the free
monad.
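For reference, here is a minimal sketch of the standard Functor/Applicative/Monad instances for the Free defined above; the claim can be read off from the last two lines:
instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fa) = Roll (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll fg <*> x = Roll (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Roll fa >>= k = Roll (fmap (>>= k) fa)
  -- A Roll always binds to a Roll: the outermost constructor is preserved.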
If the list monad were isomorphic-as-a-monad to the free monad over some
functor, the isomorphism would map only singleton lists [x] to values of
the form (Pure _) and all other lists to nontrivial values. This is
because monad isomorphisms have to commute with "return" and return x
is [x] in the list monad and Pure x in the free monad.
These two facts contradict each other, as can be seen with the following
example:
do b <- [False,True]   -- not of the form (return _)
   if b
     then return 47
     else []
-- The result is the singleton list [47], so of the form (return _).
After applying a hypothetical isomorphism to the free monad over some
functor, we'd have that the binding of a nontrivial value (the image
of [False,True] under the isomorphism) with some function results in a
trivial value (the image of [47], i.e. return 47).
If you're okay with the free monad being applied to one type in particular, which seems to be the case given the way you consider the Nat example in the comments, then List can indeed be described using Free:
type List a = Free ((,) a) ()
The underlying idea here is that a List a is a Nat where each Suc node has been labelled with an a (hence the use of the (a,) functor rather than the Identity one).
Here is a small module witnessing the isomorphism together with an example:
module FreeList where

import Data.Function
import Control.Monad.Free

type List a = Free ((,) a) ()

toList :: [a] -> List a
toList = foldr (curry Free) (Pure ())

fromList :: List a -> [a]
fromList = iter (uncurry (:)) . fmap (const [])

append :: List a -> List a -> List a
append xs ys = xs >>= const ys

example :: [Integer]
example = fromList $ (append `on` toList) [1..5] [6..10]
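For reference, evaluating example in GHCi should give the concatenation of the two lists:
GHCi> example
[1,2,3,4,5,6,7,8,9,10]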

Function- and Type substitutions or Views in Coq

I proved some theorems about lists, and extracted algorithms from them. Now I want to use heaps instead, because lookup and concatenation are faster. What I currently do to achieve this is to just use custom definitions for the extracted list type. I would like to do this in a more formal way, but ideally without having to redo all of my proofs. Let's say I have a type
Heap : Set -> Set
and an isomorphism
f : forall A, Heap A -> List A.
Furthermore, I have functions H_app and H_nth, such that
H_app (f a) (f b) = f (a ++ b)
and
H_nth (f a) = nth a
On the one hand, I would have to replace every list-recursion by a specialized function that mimics list recursion. On the other hand, beforehand I would want to replace ++ and nth by H_app and H_nth, so the extracted algorithms would be faster. The problem is that I use tactics like simpl and compute in some places, which will probably fail if I just replace everything in the proof code. It would be good to have a way to "overload" the functions afterwards.
Is something like this possible?
Edit: To clarify, a similar problem arises with numbers: I have some old proofs that use nat, but the numbers are getting too large. Using BinNat would be better, but is it possible to use BinNat instead of nat also in the old proofs without too much modification? (And especially, replace inefficient usages of + by the more efficient definition for BinNat?)
Just for the sake of clarity, I take it that Heap must look like
this:
Inductive Heap A : Type :=
| Node : Heap A -> A -> Heap A -> Heap A
| Leaf : Heap A.
with f being defined as
Fixpoint f A (h : Heap A) : list A :=
  match h with
  | Node h1 a h2 => f h1 ++ a :: f h2
  | Leaf => []
  end.
If this is the case, then f does not define an isomorphism between
Heap A and list A for all A. Instead, we can find a function
g : forall A, list A -> Heap A such that
forall A (l : list A), f (g l) = l
Nevertheless, we would like to say that both Heap and list are
equivalent in some sense when they are used to implement the same
abstraction, namely sets of elements of some type.
There is a precise and formal way in which we can validate this idea
in languages that have parametric polymorphism, such as Coq. This
principle, known as parametricity, roughly says that
parametrically polymorphic functions respect relations that we impose
on types we instantiate them with.
This is a little bit abstract, so let's try to make it more
concrete. Suppose that you have a function over lists (say, foo)
that uses only ++ and nth. To be able to replace foo by an
equivalent version on Heap using parametricity, we need to make
foo's definition polymorphic, abstracting over the functions over
lists:
Definition foo (T : Set -> Set)
               (app : forall A, T A -> T A -> T A)
               (nth : forall A, T A -> nat -> option A)
               A (l : T A) : T A :=
  (* ... *)
You would first prove properties of foo by instantiating it over
lists:
Definition list_foo := foo list @app @nth.
Lemma list_foo_lemma : (* Some statement *).
Now, because we know that H_app and H_nth are compatible with their
list counterparts, and because foo is polymorphic, the theory of
parametricity says that we can prove
Definition H_foo := foo Heap @H_app @H_nth.
Lemma foo_param : forall A (h : Heap A),
  f (H_foo h) = list_foo (f h).
With this lemma in hand, it should be possible to transport properties
of list_foo to similar properties of H_foo. For instance, as a
trivial example, we can show that H_app is associative, up to
conversion to a list:
forall A (h1 h2 h3 : Heap A),
  f (H_app h1 (H_app h2 h3)) =
  f (H_app (H_app h1 h2) h3).
What's nice about parametricity is that it applies to any
parametrically polymorphic function: as long as appropriate
compatibility conditions hold of your types, it should be possible to
relate two instantiations of a given function in a similar fashion to
foo_param.
There are two problems, however. The first one is having to change
your base definitions to polymorphic ones, which is probably not so
bad. What's worse, though, is that even though parametricity ensures
that it is always possible to prove lemmas such as foo_param under
certain conditions, Coq does not give you that for free, and you still
need to show these lemmas by hand. There are two things that could help
alleviate your pain:
There's a parametricity plugin for Coq (CoqParam), which should
help derive the boilerplate proofs for you automatically. I have
never used it, though, so I can't really say how easy it is to use.
The Coq Effective Algebra Library (or CoqEAL, for short) uses
parametricity to prove things about efficient algorithms while
reasoning over more convenient ones. In particular, they define
refinements that allow one to switch between nat and BinNat, as
you suggested. Internally, they use an infrastructure based on
type-class inference, which you could adapt to your original
example, but I heard that they are currently migrating their
implementation to use CoqParam instead.

Why do we have map, fmap and liftM?

map :: (a -> b) -> [a] -> [b]
fmap :: Functor f => (a -> b) -> f a -> f b
liftM :: Monad m => (a -> b) -> m a -> m b
Why do we have three different functions that do essentially the same thing?
map exists to simplify operations on lists and for historical reasons (see What's the point of map in Haskell, when there is fmap?).
You might ask why we need a separate map function. Why not just do away with the current
list-only map function, and rename fmap to map instead? Well, that’s a good question. The
usual argument is that someone just learning Haskell, when using map incorrectly, would much
rather see an error about lists than about Functors.
-- Typeclassopedia, page 20
fmap and liftM exist because monads were not automatically functors in Haskell:
The fact that we have both fmap and liftM is an
unfortunate consequence of the fact that the Monad type class does not require
a Functor instance, even though mathematically speaking, every monad is a
functor. However, fmap and liftM are essentially interchangeable, since it is
a bug (in a social rather than technical sense) for any type to be an instance
of Monad without also being an instance of Functor.
-- Typeclassopedia, page 33
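As a small, self-contained illustration of that interchangeability (liftM lives in Control.Monad):
import Control.Monad (liftM)

-- On lists, all three agree:
listDemo :: ([Int], [Int], [Int])
listDemo = (map (+1) [1,2,3], fmap (+1) [1,2,3], liftM (+1) [1,2,3])
-- ([2,3,4],[2,3,4],[2,3,4])

-- On any other monad, fmap and liftM agree as well, e.g. Maybe:
maybeDemo :: (Maybe Int, Maybe Int)
maybeDemo = (fmap (+1) (Just 41), liftM (+1) (Just 41))
-- (Just 42,Just 42)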
Edit: augustss's history of map and fmap:
That's not actually how it happens. What happened was that the type of map was generalized to cover Functor in Haskell 1.3. I.e., in Haskell 1.3 fmap was called map. This change was then reverted in Haskell 1.4 and fmap was introduced. The reason for this change was pedagogical; when teaching Haskell to beginners the very general type of map made error messages more difficult to understand. In my opinion this wasn't the right way to solve the problem.
-- What's the point of map in Haskell, when there is fmap?

`Ord a =>` or `Num a =>`

I have the following functions:
which (x:xs) = worker x xs
worker x [] = x
worker x (y:ys)
  | x > y     = worker y ys
  | otherwise = worker x ys
and am wondering how I should define the type signatures of the above functions which and worker.
For example, which of the following would be best as a type signature for worker?
worker :: Num a => a -> [a] -> a,
or
worker :: Ord a => a -> [a] -> a?
I'm just really confused and don't get which of these I should choose. I'd appreciate your thoughts. Thanks.
If you define the function without an explicit type signature, Haskell will infer the most general one. If you’re unsure, this is the easiest way to figure out how your definition will be read; you can then copy it into your source code. A common mistake is incorrectly typing a function and then getting a confusing type error somewhere else.
Anyway, you can get info on the Num class by typing :i Num into ghci, or by reading the documentation. The Num class gives you +, *, -, negate, abs, signum and fromInteger (and, in older versions of the standard library, the functions of its then-superclasses Eq and Show). Notice that < and > aren't there! Requiring values of Num and attempting to compare them will in fact produce a type error: not every kind of number can be compared.
So it should be Ord a => ..., as Num a => ... would produce a type error if you tried it.
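Here is a minimal sketch of that failure; the exact wording of GHC's message varies between versions:
-- This signature is rejected: (>) needs Ord a, which Num a does not provide.
worker :: Num a => a -> [a] -> a
worker x [] = x
worker x (y:ys)
  | x > y     = worker y ys
  | otherwise = worker x ys

-- GHC reports something like:
--   Could not deduce (Ord a) arising from a use of '>'
--   from the context: Num a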
If you think about what your functions do, you'll see that which xs returns the minimum value in xs. What can have a minimum value? A list of something Orderable!
Ask ghci and see what it says. I just copy-pasted your code as is into a file and loaded it into ghci. Then I used :t, which is a special ghci command to determine the type of something.
ghci> :t which
which :: (Ord t) => [t] -> t
ghci> :t worker
worker :: (Ord a) => a -> [a] -> a
Haskell's type inference is pretty smart in most cases; learn to trust it. Other answers sufficiently cover why Ord should be used in this case; I just wanted to make sure ghci was clearly mentioned as a technique for determining the type of something.
I would always go with the Ord type constraint. It is the most general, so it can be reused more often.
There is no advantage to using Num over Ord.
Int may have a small advantage as it is not polymorphic and would not require a dictionary lookup. I would still use Ord, and use the SPECIALIZE pragma if I needed to for performance.
Edit: Altered my answer after comments.
It depends on what you want to be able to compare. If you want to be able to compare Double, Float, Int, Integer and Char, then use Ord. If you only want to be able to compare Int, then just use Int.
If you have another problem like this, just look at the instances of the type class to tell which types you want to be able to use in the function.
Ord documentation