You don't often see Maybe List except, for example, in error handling, because lists are a bit like Maybe themselves: they have their own "Nothing" ([]) and their own "Just" ((:)).
I wrote a list type using Maybe, along with functions to convert between standard and "experimental" lists, such that toStd . toExp == id.
data List a = List a (Maybe (List a))
deriving (Eq, Show, Read)
toExp [] = Nothing
toExp (x:xs) = Just (List x (toExp xs))
toStd Nothing = []
toStd (Just (List x xs)) = x : (toStd xs)
What do you think of it as an attempt to reduce repetition and to generalize?
Trees too could be defined using these lists:
type Tree a = List (Tree a, Tree a)
I haven't tested this last piece of code, though.
All ADTs are isomorphic (almost; see the end) to some combination of (,), Either, (), (->), Void and Mu, where
data Void --using empty data decls or
newtype Void = Void Void
and Mu computes the fixpoint of a functor
newtype Mu f = Mu (f (Mu f))
so for example
data [a] = [] | (a:[a])
is the same as
data [a] = Mu (ListF a)
data ListF a f = End | Pair a f
which itself is isomorphic to
newtype ListF a f = ListF (Either () (a,f))
since
data Maybe a = Nothing | Just a
is isomorphic to
newtype Maybe a = Maybe (Either () a)
you have
newtype ListF a f = ListF (Maybe (a,f))
which can be inlined in the mu to
data List a = List (Maybe (a,List a))
and your definition
data List a = List a (Maybe (List a))
is just the unfolding of the Mu and elimination of the outer Maybe (corresponding to non-empty lists)
and you are done...
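To make the isomorphism concrete, here is a sketch (my own, not from the answer) of the conversion functions between the inlined form and ordinary lists:
-- Witnesses for the isomorphism above (modulo the caveat about extra
-- bottoms at the end). Assumes data List a = List (Maybe (a, List a)).
toList :: [a] -> List a
toList []     = List Nothing
toList (x:xs) = List (Just (x, toList xs))

fromList :: List a -> [a]
fromList (List Nothing)        = []
fromList (List (Just (x, xs))) = x : fromList xs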
A couple of things:
Using custom ADTs increases clarity and type safety.
This universality is useful: see GHC.Generics.
Okay, I said almost isomorphic. It is not exactly isomorphic; namely,
hmm = List (Just undefined)
has no equivalent value in the [a] = [] | (a:[a]) definition of lists. This is because Haskell data types are coinductive, which has been a point of criticism of the lazy evaluation model. You can get around these problems by using only strict sums and products (and call-by-value functions), and adding a special "Lazy" data constructor
data SPair a b = SPair !a !b
data SEither a b = SLeft !a | SRight !b
data Lazy a = Lazy a --Note, this has no obvious encoding in Pure CBV languages,
--although Lazy a = (() -> a) is semantically correct,
--it is strictly less efficient than Haskell's CB-Need
and then all the isomorphisms can be faithfully encoded.
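As a rough sketch of what such an encoding might look like (my own guess; the answer doesn't spell it out), mark the positions where native lists are lazy, namely the head and the tail of (:), with Lazy, and keep everything else strict:
-- Hypothetical strict encoding of the list functor: SEither and SPair are
-- strict, and Lazy marks the two positions where native lists are lazy.
newtype SListF a f = SListF (SEither () (SPair (Lazy a) (Lazy f)))
newtype SList a = SList (SListF a (SList a))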
You can define lists in a bunch of ways in Haskell. For example, as functions:
{-# LANGUAGE RankNTypes #-}
newtype List a = List { runList :: forall b. (a -> b -> b) -> b -> b }
nil :: List a
nil = List (\_ z -> z )
cons :: a -> List a -> List a
cons x xs = List (\f z -> f x (runList xs f z))
isNil :: List a -> Bool
isNil xs = runList xs (\x xs -> False) True
head :: List a -> a
head xs = runList xs (\x xs -> x) (error "empty list")
tail :: List a -> List a
tail xs | isNil xs = error "empty list"
tail xs = fst (runList xs go (nil, nil))
where go x (_, xs') = (xs', cons x xs')
foldr :: (a -> b -> b) -> b -> List a -> b
foldr f z xs = runList xs f z
The trick to this implementation is that lists are being represented as functions that execute a fold over the elements of the list:
fromNative :: [a] -> List a
fromNative xs = List (\f z -> foldr f z xs)
toNative :: List a -> [a]
toNative xs = runList xs (:) []
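A couple of quick sanity checks (mine, not from the answer), using only names that don't clash with the Prelude:
-- Round-tripping through the function representation, and building a
-- list with cons/nil and converting it back.
roundTrip :: [Int]
roundTrip = toNative (fromNative [1, 2, 3])       -- [1,2,3]

built :: [Int]
built = toNative (cons 1 (cons 2 (cons 3 nil)))   -- [1,2,3]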
In any case, what really matters is the contract (or laws) that the type and its operations follow, and the performance of the implementation. Basically, any implementation that fulfills the contract will give you correct programs, and faster implementations will give you faster programs.
What is the contract of lists? Well, I'm not going to express it in complete detail, but lists obey statements like these:
head (x:xs) == x
tail (x:xs) == xs
[] == []
[] /= x:xs
If xs == ys and x == y, then x:xs == y:ys
foldr f z [] == z
foldr f z (x:xs) == f x (foldr f z xs)
EDIT: And to tie this to augustss' answer:
newtype ExpList a = ExpList (Maybe (a, ExpList a))
toExpList :: List a -> ExpList a
toExpList xs = runList xs (\x xs -> ExpList (Just (x, xs))) (ExpList Nothing)
foldExpList f z (ExpList Nothing) = z
foldExpList f z (ExpList (Just (x, xs))) = f x (foldExpList f z xs)
fromExpList :: ExpList a -> List a
fromExpList xs = List (\f z -> foldExpList f z xs)
You could define lists in terms of Maybe, but not that way. Your List type cannot be empty. Or did you intend Maybe (List a) to be the replacement for [a]? That seems bad, since it doesn't distinguish the list and maybe types.
This would work
newtype List a = List (Maybe (a, List a))
This has some problems. First, using it would be more verbose than ordinary lists, and second, the domain is not isomorphic to lists, since we have a pair in there (which can be undefined, adding an extra level to the domain).
If it's a list, it should be an instance of Functor, right?
instance Functor List
where fmap f (List a as) = List (f a) (mapMaybeList f as)
mapMaybeList :: (a -> b) -> Maybe (List a) -> Maybe (List b)
mapMaybeList f as = fmap (fmap f) as
Here's a problem: you can make List an instance of Functor, but your Maybe List is not: even if Maybe were not already an instance of Functor in its own right, you can't directly make a construction like Maybe . List into an instance of anything (you'd need a wrapper type).
Similarly for other typeclasses.
Having said that, with your formulation you can do this, which you can't do with standard Haskell lists:
instance Comonad List
where extract (List a _) = a
duplicate x@(List _ y) = List x (fmap duplicate y)
A Maybe List still wouldn't be comonadic though.
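For a flavour of what the Comonad instance gives you, here is a small sketch (mine), assuming Control.Comonad from the comonad package and the instances above; it replaces each element with the sum of its suffix:
import Control.Comonad (extend)   -- from the comonad package (an assumption on my part)

-- Replace every element by the sum of the suffix starting at it.
suffixSums :: List Int -> List Int
suffixSums = extend listSum
  where
    listSum (List x rest) = x + maybe 0 listSum rest

-- For example:
-- suffixSums (List 1 (Just (List 2 (Just (List 3 Nothing)))))
--   == List 6 (Just (List 5 (Just (List 3 Nothing))))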
When I first started using Haskell, I too tried to represent things in existing types as much as I could, on the grounds that it's good to avoid redundancy. My current understanding (a moving target!) leans more towards the idea of a multidimensional web of trade-offs. I won't be giving any "answer" here so much as pasting examples and asking "do you see what I mean?" I hope it helps anyway.
Let's have a look at a bit of Darcs code:
data UseCache = YesUseCache | NoUseCache
deriving ( Eq )
data DryRun = YesDryRun | NoDryRun
deriving ( Eq )
data Compression = NoCompression
| GzipCompression
deriving ( Eq )
Did you notice that these three types could all have been Bools? Why do you think the Darcs hackers decided that they should introduce this sort of redundancy in their code? As another example, here is a piece of code we changed a few years back:
type Slot = Maybe Bool -- OLD code
data Slot = InFirst | InMiddle | InLast -- newer code
Why do you think we decided that the second code was an improvement over the first?
Finally, here is a bit of code from some of my day job stuff. It uses the newtype syntax that augustss mentioned,
newtype Role = Role { fromRole :: Text }
deriving (Eq, Ord)
newtype KmClass = KmClass { fromKmClass :: Text }
deriving (Eq, Ord)
newtype Lemma = Lemma { fromLemma :: Text }
deriving (Eq, Ord)
Here you'll notice that I've done the curious thing of taking a perfectly good Text type and then wrapping it up into three different things. The three things don't have any new features compared to plain old Text. They're just there to be different. To be honest, I'm not entirely sure if it was a good idea for me to do this. I provisionally think it was, because I manipulate lots of different bits and pieces of text for lots of reasons, but time will tell.
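As a minimal sketch (my own, assuming the newtypes above and their Text import are in scope) of what the wrappers buy: mixing them up becomes a compile-time error, even though all three wrap Text.
describeRole :: Role -> Text
describeRole = fromRole

-- describeRole (Role "subject")   -- compiles (with OverloadedStrings)
-- describeRole (Lemma "walk")     -- rejected by the type checker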
Can you see what I'm trying to get at?
Related
I want to concat a list of tuples of elements and Strings, pairing each char with its element.
For example: [(True, "xy"), (False, "abc")] -> [(True,'x'),(True,'y'),(False,'a'),(False,'b'),(False,'c')]
I do have a solution, but I'm wondering if there is a better one:
concatsplit :: [(a,[b])] -> [(a,b)]
concatsplit a = concatMap (\(x,y)-> concatsplit' (x,y)) a
concatsplit' :: (a,[b]) -> [(a,b)]
concatsplit' y = map (\x -> ((fst y),x)) (snd y)
Why not a simple list comprehension?
List comprehensions can often do as much as higher order functions, and I think that if you don't need to change the data but only "unpack" it, they are pretty clear. Here's a working example:
concS :: [(a,[b])] -> [(a,b)]
concS ls = [(a,b) | (a,x) <- ls, b <- x]
The other answers show how to do this idiomatically from scratch, which I like a lot. It might also be interesting to show how you might polish what you've already got. Here it is again as a reminder:
concatsplit a = concatMap (\(x,y)-> concatsplit' (x,y)) a
concatsplit' y = map (\x -> ((fst y),x)) (snd y)
The first thing I'd consistently change is called "eta reduction", and it is when you turn something of the shape \x -> foo x into just foo. We can do this in the argument to concatMap to get
concatsplit a = concatMap concatsplit' a
and then again in the argument to concatsplit to get:
concatsplit = concatMap concatsplit'
Looking at concatsplit', the thing I like least is the use of fst and snd instead of pattern matching. With pattern matching, it looks like this:
concatsplit' (a,bs) = map (\x -> (a,x)) bs
If you really wanted to practice your eta reduction, you might notice that (,) can be applied prefix and change this to
concatsplit' (a,bs) = map (\x -> (,) a x) bs
= map ((,) a) bs
but I think I'm just as happy one way or the other. At this point, this definition is small enough that I'd be tempted to inline it into concatsplit itself, getting:
concatsplit = concatMap (\(a,bs) -> map ((,) a) bs)
This looks like a pretty good definition to me, and I'd stop there.
You might be bugged by the almost-eta-reduction here: it's almost the right shape to drop the bs. An advanced user might go on, noticing that:
uncurry (\a bs -> map ((,) a) bs) = \(a,bs) -> map ((,) a) bs
Hence with some eta reduction and other point-free techniques, we could transition this way:
concatsplit = concatMap (uncurry (\a bs -> map ((,) a) bs))
= concatMap (uncurry (\a -> map ((,) a)))
= concatMap (uncurry (map . (,)))
But, personally, I find this less readable than where I declared I would stop above, namely:
concatsplit = concatMap (\(a,bs) -> map ((,) a) bs)
sequenceA makes it really easy:
concatsplit = concatMap sequenceA
Or generalize it even further:
concatsplit = (>>= sequenceA)
Details:
sequenceA has type (Applicative f, Traversable t) => t (f a) -> f (t a). This means that if you have a type with a Traversable on the "outside" and an Applicative on the "inside", you can call sequenceA on it to turn it inside-out, so that the Applicative is on the "outside" and the Traversable is on the "inside".
In this case, (True, "xy") has type (Bool, [Char]), which desugars to (,) Bool ([] Char). There's an instance of Traversable for ((,) a), and there's an instance of Applicative for []. Thus, you can call sequenceA on it, and the result will be of type [] ((,) Bool Char), or [(Bool, Char)] with sugar.
As it turns out, not only does sequenceA have a useful type, but it does exactly the thing you need here (and turns out to be exactly equivalent to concatsplit').
concatMap has type Foldable t => (a -> [b]) -> t a -> [b]. (>>=) has type Monad m => m a -> (a -> m b) -> m b. When specialized to lists, these become the same, except with their arguments in the opposite order.
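A quick GHCi check (the output shown is what I would expect, not a transcript from the answer):
Prelude> sequenceA (True, "xy")
[(True,'x'),(True,'y')]
Prelude> concatMap sequenceA [(True, "xy"), (False, "abc")]
[(True,'x'),(True,'y'),(False,'a'),(False,'b'),(False,'c')]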
You can also use the monad instance for lists:
concatSplit l = l >>= \(a,x) -> x >>= \b -> return (a,b)
Which can be simplified to:
concatSplit l = l >>= \(a,x) -> map ((,) a) x
And reformatted to do notation
concatSplit l = do
(a,x) <- l
map ((,) a) x
The standard libraries include a function
unzip :: [(a, b)] -> ([a], [b])
The obvious way to define this is
unzip xs = (map fst xs, map snd xs)
However, this means traversing the list twice to construct the result. What I'm wondering is, is there some way to do this with only one traversal?
Appending to a list is expensive - O(n) in fact. But, as any newbie knows, we can make clever use of laziness and recursion to "append" to a list with a recursive call. Thus, zip may easily be implemented as
zip :: [a] -> [b] -> [(a, b)]
zip (a:as) (b:bs) = (a,b) : zip as bs
This trick appears to only work if you're returning one list, however. I can't see how to extend it to allow constructing the tails of multiple lists simultaneously without ending up duplicating the source traversal.
I always presumed that the unzip from the standard library manages to do this in a single traversal (that's kind of the whole point of implementing this otherwise trivial function in a library), but I don't actually know how it works.
Yes, it is possible:
unzip = foldr (\(a,b) ~(as,bs) -> (a:as,b:bs)) ([],[])
With explicit recursion, this would look thus:
unzip [] = ([], [])
unzip ((a,b):xs) = (a:as, b:bs)
where ( as, bs) = unzip xs
The reason that the standard library has the irrefutable pattern match ~(as, bs) is to make it actually work lazily:
Prelude> let unzip' = foldr (\(a,b) ~(as,bs) -> (a:as,b:bs)) ([],[])
Prelude> let unzip'' = foldr (\(a,b) (as,bs) -> (a:as,b:bs)) ([],[])
Prelude> head . fst $ unzip' [(n,n) | n<-[1..]]
1
Prelude> head . fst $ unzip'' [(n,n) | n<-[1..]]
*** Exception: stack overflow
The following ideas stem from The Beautiful Folding.
When you have two folding operations over a list, you can always perform them at once by folding once while keeping both of their states. Let's express this in Haskell. First we need to capture what a folding operation is:
{-# LANGUAGE ExistentialQuantification #-}
import Control.Applicative
data Foldr a b = forall r . Foldr (a -> r -> r) r (r -> b)
A folding operation has a folding function, a start value, and a function that produces a result from a final state. By using existential quantification we can hide the type of the state, which is necessary to combine folds with different states.
Applying a Foldr to a list is just the matter of calling foldr with the appropriate arguments:
fold :: Foldr a b -> [a] -> b
fold (Foldr f s g) = g . foldr f s
Naturally, Foldr is a functor: we can always compose a further function onto the finalizing one:
instance Functor (Foldr a) where
fmap f (Foldr k s r) = Foldr k s (f . r)
More interestingly, it's also an Applicative functor. Implementing pure is easy: we just return a given value and don't fold anything. The most interesting part is <*>. It creates a new fold that keeps the states of both given folds and, at the end, combines the results.
instance Applicative (Foldr a) where
pure x = Foldr (\_ _ -> ()) () (\_ -> x)
(Foldr f1 s1 r1) <*> (Foldr f2 s2 r2)
= Foldr foldPair (s1, s2) finishPair
where
foldPair a ~(x1, x2) = (f1 a x1, f2 a x2)
finishPair ~(x1, x2) = r1 x1 (r2 x2)
f *> g = g
f <* g = f
Notice (as in leftaroundabout's answer) that we have lazy pattern matches ~ on tuples. This ensures that <*> is sufficiently lazy.
Now we can express map as a Foldr:
fromMap :: (a -> b) -> Foldr a [b]
fromMap f = Foldr (\x xs -> f x : xs) [] id
With that, defining unzip becomes easy. We just combine two maps, one using fst and another using snd:
unzip' :: Foldr (a, b) ([a], [b])
unzip' = (,) <$> fromMap fst <*> fromMap snd
unzip :: [(a, b)] -> ([a], [b])
unzip = fold unzip'
We can verify that it processes an input only once (and lazily): Both
head . snd $ unzip (repeat (3,'a'))
head . fst $ unzip (repeat (3,'a'))
yield the correct result.
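As one more illustration in the same style (my own example, not from the original post), we can combine a sum fold and a length fold to get a one-pass average:
-- Two more folds expressed as Foldr values, combined applicatively so the
-- list is traversed only once.
sumF :: Num a => Foldr a a
sumF = Foldr (+) 0 id

lengthF :: Foldr a Int
lengthF = Foldr (\_ n -> n + 1) 0 id

average :: [Double] -> Double
average = fold ((/) <$> sumF <*> (fromIntegral <$> lengthF))

-- average [1, 2, 3, 4] == 2.5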
Say I have any list like this:
[4,5,6,7,1,2,3,4,5,6,1,2]
I need a Haskell function that will transform this list into a list of lists which are composed of the segments of the original list which form a series in ascending order. So the result should look like this:
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
Any suggestions?
You can do this by resorting to manual recursion, but I like to believe Haskell is a more evolved language. Let's see if we can develop a solution that uses existing recursion strategies. First some preliminaries.
{-# LANGUAGE NoMonomorphismRestriction #-}
-- because who wants to write type signatures, amirite?
import Data.List.Split -- from package split on Hackage
Step one is to observe that we want to split the list based on a criterion that looks at two elements of the list at once. So we'll need a new list with elements representing a "previous" and "next" value. There's a very standard trick for this:
previousAndNext xs = zip xs (drop 1 xs)
However, for our purposes, this won't quite work: this function always outputs a list that's shorter than the input, and we will always want a list of the same length as the input (and in particular we want some output even when the input is a list of length one). So we'll modify the standard trick just a bit with a "null terminator".
pan xs = zip xs (map Just (drop 1 xs) ++ [Nothing])
Now we're going to look through this list for places where the previous element is bigger than the next element (or the next element doesn't exist). Let's write a predicate that does that check.
bigger (x, y) = maybe False (x >) y
Now let's write the function that actually does the split. Our "delimiters" will be values that satisfy bigger; and we never want to throw them away, so let's keep them.
ascendingTuples = split . keepDelimsR $ whenElt bigger
The final step is just to throw together the bit that constructs the tuples, the bit that splits the tuples, and a last bit of munging to throw away the bits of the tuples we don't care about:
ascending = map (map fst) . ascendingTuples . pan
Let's try it out in ghci:
*Main> ascending [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
*Main> ascending [7,6..1]
[[7],[6],[5],[4],[3],[2],[1]]
*Main> ascending []
[[]]
*Main> ascending [1]
[[1]]
P.S. In the current release of split, keepDelimsR is slightly stricter than it needs to be, and as a result ascending currently doesn't work with infinite lists. I've submitted a patch that makes it lazier, though.
ascend :: Ord a => [a] -> [[a]]
ascend xs = foldr f [] xs
where
f a [] = [[a]]
f a xs'@(y:ys) | a < head y = (a:y):ys
| otherwise = [a]:xs'
In ghci
*Main> ascend [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
This problem is a natural fit for a paramorphism-based solution. Having (as defined in that post)
para :: (a -> [a] -> b -> b) -> b -> [a] -> b
foldr :: (a -> b -> b) -> b -> [a] -> b
para c n (x : xs) = c x xs (para c n xs)
foldr c n (x : xs) = c x (foldr c n xs)
para c n [] = n
foldr c n [] = n
we can write
partition_asc xs = para c [] xs where
c x (y:_) ~(a:b) | x<y = (x:a):b
c x _ r = [x]:r
Trivial, since the abstraction fits.
BTW, Common Lisp has two kinds of map: mapcar (processing elements of an input list one by one) and maplist (processing the "tails" of a list). With this idea we get
import Data.List (tails)
partition_asc2 xs = foldr c [] . init . tails $ xs where
c (x:y:_) ~(a:b) | x<y = (x:a):b
c (x:_) r = [x]:r
Lazy patterns in both versions make it work with infinite input lists
in a productive manner (as first shown in Daniel Fischer's answer).
Update 2020-05-08: not so trivial after all. Both head . head . partition_asc $ [4] ++ undefined and the same for partition_asc2 fail with *** Exception: Prelude.undefined. The combining function c forces the next element y prematurely. It needs to be written more carefully, so that it is productive right away, before ever looking at the next element, as e.g. for the second version:
partition_asc2' xs = foldr c [] . init . tails $ xs where
c (x:ys) r@(~(a:b)) = (x:g):gs
where
(g,gs) | not (null ys)
&& x < head ys = (a,b)
| otherwise = ([],r)
(again, as first shown in Daniel's answer).
You can use a right fold to break up the list at down-steps:
foldr foo [] xs
where
foo x yss = (x:zs) : ws
where
(zs, ws) = case yss of
(ys@(y:_)) : rest
| x < y -> (ys,rest)
| otherwise -> ([],yss)
_ -> ([],[])
(It's a bit complicated in order to have the combining function lazy in the second argument, so that it works well for infinite lists too.)
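To make that claim easy to check, here is the same fold bound to a name (splitAsc is my own name for it), tried on an infinite input:
-- Binding the fold above to a name so the infinite-list claim can be tested.
splitAsc :: Ord a => [a] -> [[a]]
splitAsc xs = foldr foo [] xs
  where
    foo x yss = (x:zs) : ws
      where
        (zs, ws) = case yss of
                     (ys@(y:_)) : rest
                       | x < y     -> (ys, rest)
                       | otherwise -> ([], yss)
                     _             -> ([], [])

-- take 2 (splitAsc (cycle [1,2,3])) == [[1,2,3],[1,2,3]]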
One other way of approaching this task (which, in fact, lays the foundation of a very efficient sorting algorithm) is to use continuation-passing style, a.k.a. CPS, here applied to folding from the right, i.e. foldr.
As it is, this answer only chunks up the ascending runs; however, it would be nice to chunk up the descending ones at the same time, preferably in reverse order, all in O(n), which would leave us with only a binary merging of the obtained chunks for a perfectly sorted output. But that's another answer for another question.
chunks :: Ord a => [a] -> [[a]]
chunks xs = foldr go return xs $ []
where
go :: Ord a => a -> ([a] -> [[a]]) -> ([a] -> [[a]])
go c f = \ps -> let (r:rs) = f [c]
in case ps of
[] -> r:rs
[p] -> if c > p then (p:r):rs else [p]:(r:rs)
*Main> chunks [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
*Main> chunks [4,5,6,7,1,2,3,4,5,4,3,2,6,1,2]
[[4,5,6,7],[1,2,3,4,5],[4],[3],[2,6],[1,2]]
In the above code, c stands for "current" and p for "previous"; remember that we are folding from the right, so "previous" is actually the next item to be processed.
I have to write a function that flattens a list of lists.
For example, flatten [] = [], flatten [1,2,3,4] = [1,2,3,4], or flatten [[1,2],[3],[4,5]] = [1,2,3,4,5].
I'm having trouble matching the types depending on what is given to the flatten function.
Here's what I have:
data A a = B a | C [a] deriving (Show, Eq, Ord)
flatten::(Show a, Eq a, Ord a)=>A a -> A a
flatten (C []) = (C [])
flatten (C (x:xs) ) = (C flatten x) ++ (C flatten xs)
flatten (B a) = (C [a])
From what I can tell the issue is that the ++ operator is expecting a list for both of its arguments and I'm trying to give it something of type A. I've added the A type so the function can either get a single element or a list of elements.
Does anyone know a different way to do this, or can you explain what I can do to fix the type error?
It's a bit unclear what you are asking for, but flattening a list of lists is done by a standard function called concat in the Prelude, with type signature [[a]] -> [a].
If you make a data type of nested lists as you have started above, maybe you want to adjust your data type to something like this:
data Lists a = List [a] | ListOfLists [Lists a]
Then you can flatten these to a list;
flatten :: Lists a -> [a]
flatten (List xs) = xs
flatten (ListOfLists xss) = concatMap flatten xss
As a test,
> flatten (ListOfLists [List [1,2],List [3],ListOfLists [List [4],List[5]]])
[1,2,3,4,5]
Firstly, the A type is on the right track but I don't think it's quite correct. You want it to be able to flatten arbitrarily nested lists, so a value of type "A a" should be able to contain values of type "A a":
data A a = B a | C [A a]
Secondly, the type of the function should be slightly different. Instead of returning a value of type "A a", you probably want it to return just a list of a, since by definition the function is always returning a flat list. So the type signature is thus:
flatten :: A a -> [a]
Also note that no typeclass constraints are necessary -- this function is completely generic since it does not look at the list's elements at all.
Here's my implementation:
flatten (B a) = [a]
flatten (C []) = []
flatten (C (x:xs)) = flatten x ++ flatten (C xs)
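A quick check of this implementation (the example value is mine):
-- Flattening a small nested value of the corrected A type.
example :: [Int]
example = flatten (C [B 1, C [B 2, B 3], B 4])   -- [1,2,3,4]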
This one-liner will do the job. Although, as Malin mentioned, the type signature is different:
flatten :: [[a]] -> [a]
flatten xs = (\z n -> foldr (\x y -> foldr z y x) n xs) (:) []
simple test
frege> li = [[3,4,2],[1,9,9],[5,8]]
frege> flatten li
[3,4,2,1,9,9,5,8]
Flatten via list comprehension.
flatten arr = [y | x <- arr, y <- x]
I'm trying to write a function that takes a predicate f and a list and returns a list consisting of all items that satisfy f with preserved order. The trick is to do this using only higher order functions (HoF), no recursion, no comprehensions, and of course no filter.
You can express filter in terms of foldr:
filter p = foldr (\x xs-> if p x then x:xs else xs) []
I think you can use map this way:
filter' :: (a -> Bool) -> [a] -> [a]
filter' p xs = concat (map (\x -> if (p x) then [x] else []) xs)
You see? We convert the list into a list of lists: each element that passes p becomes a singleton list, and each element that doesn't becomes an empty list.
filter' (> 1) [1, 2, 3] would be: concat [[], [2], [3]] = [2,3]
In the Prelude there is concatMap, which makes the code simpler :P
The code should look like:
filter' :: (a -> Bool) -> [a] -> [a]
filter' p xs = concatMap (\x -> if (p x) then [x] else []) xs
Using foldr, as suggested by sclv, it can be done with something like this:
filter'' :: (a -> Bool) -> [a] -> [a]
filter'' p xs = foldr (\x y -> if p x then (x:y) else y) [] xs
You're obviously doing this to learn, so let me show you something cool. First up, to refresh our minds, the type of filter is:
filter :: (a -> Bool) -> [a] -> [a]
The interesting part of this is the last bit [a] -> [a]. It breaks down one list and it builds up a new list.
Recursive patterns are so common in Haskell (and other functional languages) that people have come up with names for some of these patterns. The simplest are the catamorphism and its dual, the anamorphism. I'll show you how this relates to your immediate problem at the end.
Fixed points
Prerequisite knowledge FTW!
What is the type of Nothing? Firing up GHCi, it says Nothing :: Maybe a, and I wouldn't disagree. What about Just Nothing? Using GHCi again, it says Just Nothing :: Maybe (Maybe a), which is also perfectly valid. But what about a value that is a Nothing embedded within an arbitrary number, or even an infinite number, of Justs? I.e., what is the type of this value:
foo = Just foo
Haskell doesn't actually allow such a definition, but with a slight tweak we can make such a type:
data Fix a = In { out :: a (Fix a) }
just :: Fix Maybe -> Fix Maybe
just = In . Just
nothing :: Fix Maybe
nothing = In Nothing
foo :: Fix Maybe
foo = just foo
Wooh, close enough! Using the same type, we can create arbitrarily nested nothings:
bar :: Fix Maybe
bar = just (just (just (just nothing)))
Aside: Peano arithmetic anyone?
fromInt :: Int -> Fix Maybe
fromInt 0 = nothing
fromInt n = just $ fromInt (n - 1)
toInt :: Fix Maybe -> Int
toInt (In Nothing) = 0
toInt (In (Just x)) = 1 + toInt x
This Fix Maybe type is a bit boring. Here's a type whose fixed-point is a list:
data L a r = Nil | Cons a r
type List a = Fix (L a)
This data type is going to be instrumental in demonstrating some recursion patterns.
Useful Fact: The r in Cons a r is called a recursion site
Catamorphism
A catamorphism is an operation that breaks a structure down. The catamorphism for lists is better known as a fold. Now the type of a catamorphism can be expressed like so:
cata :: (t a -> a) -> Fix t -> a
Which can be written equivalently as:
cata :: (t a -> a) -> (Fix t -> a)
Or in English as:
You give me a function that reduces a data type to a value, and I'll give you a function that reduces its fixed point to a value.
Actually, I lied, the type is really:
cata :: Functor t => (t a -> a) -> Fix t -> a
But the principle is the same. Notice that t is only parameterized over the type of the recursion sites, so the Functor part is really saying "Give me a way of manipulating all the recursion sites".
Then cata can be defined as:
cata f = f . fmap (cata f) . out
This is quite dense, so let me elaborate. It's a three-step process:
First, we're given a Fix t, which is a difficult type to play with; we can make it easier by applying out (from the definition of Fix), giving us a t (Fix t).
Next we want to convert the t (Fix t) into a t a, which we can do, via wishful thinking, using fmap (cata f); we're assuming we'll be able to construct cata.
Lastly, we have a t a and we want an a, so we just use f.
Earlier I said that the catamorphism for a list is called fold, but cata doesn't look much like a fold at the moment. Let's define a fold function in terms of cata.
Recapping, the list type is:
data L a r = Nil | Cons a r
type List a = Fix (L a)
This needs to be a functor to be useful, which is straightforward:
instance Functor (L a) where
fmap _ Nil = Nil
fmap f (Cons a r) = Cons a (f r)
So specializing cata we get:
cata :: (L x a -> a) -> List x -> a
We're practically there:
construct :: (a -> b -> b) -> b -> L a b -> b
construct _ x Nil = x
construct f _ (Cons e n) = f e n
fold :: (a -> b -> b) -> b -> List a -> b
fold f m = cata (construct f m)
OK, catamorphisms break data structures down one layer at a time.
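As a quick sanity check (my own, assuming the definitions above are all in scope), we can build a small List by hand and consume it with fold:
-- A tiny List Int built directly with In/Cons/Nil, then summed via the
-- cata-derived fold.
smallList :: List Int
smallList = In (Cons 1 (In (Cons 2 (In (Cons 3 (In Nil))))))

smallSum :: Int
smallSum = fold (+) 0 smallList   -- 6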
Anamorphisms
Anamorphisms over lists are unfolds. Unfolds are less commonly known than their fold duals; they have a type like:
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
As you can see, anamorphisms build up data structures. Here's the more general type:
ana :: Functor t => (a -> t a) -> a -> Fix t
This should immediately look quite familiar. The definition is also reminiscent of the catamorphism.
ana f = In . fmap (ana f) . f
It's just the same thing reversed. Constructing unfold from ana is even simpler than constructing fold from cata. Notice the structural similarity between Maybe (a, b) and L a b.
convert :: Maybe (a, b) -> L a b
convert Nothing = Nil
convert (Just (a, b)) = Cons a b
unfold :: (b -> Maybe (a, b)) -> b -> List a
unfold f = ana (convert . f)
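And a matching sketch (also mine) for the building direction: unfold a List from a seed, then consume it with the fold defined earlier:
-- Counting down from a seed with unfold, then summing with fold.
countdown :: Int -> List Int
countdown = unfold (\n -> if n <= 0 then Nothing else Just (n, n - 1))

countdownSum :: Int
countdownSum = fold (+) 0 (countdown 4)   -- 4 + 3 + 2 + 1 == 10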
Putting theory into practice
filter is an interesting function in that it can be constructed from a catamorphism or from an anamorphism. The other answers to this question (to date) have also used catamorphisms, but I'll define it both ways:
filter p = foldr (\x xs -> if p x then x:xs else xs) []
filter p =
unfoldr (f p)
where
f _ [] =
Nothing
f p (x:xs) =
if p x then
Just (x, xs)
else
f p xs
Yes, yes, I know I used a recursive definition in the unfold version, but forgive me, I taught you lots of theory and anyway filter isn't recursive.
I'd suggest you look at foldr.
Well, are ifs and empty list allowed?
filter = (\f -> (>>= (\x -> if (f x) then return x else [])))
For a list of Integers:
filter2 :: (Int -> Bool) -> [Int] -> [Int]
filter2 f [] = []
filter2 f (hd:tl) = if f hd then hd : filter2 f tl
                            else filter2 f tl
I couldn't resist answering this question in another way, this time with no recursion at all.
-- This is a type hack to allow the y combinator to be represented
newtype Mu a = Roll { unroll :: Mu a -> a }
-- This is the y combinator
fix f = (\x -> f ((unroll x) x))(Roll (\x -> f ((unroll x) x)))
filter :: (a -> Bool) -> [a] -> [a]
filter =
fix filter'
where
-- This is essentially a recursive definition of filter
-- except instead of calling itself, it calls f, a function that's passed in
filter' _ _ [] = []
filter' f p (x:xs) =
if p x then
(x:f p xs)
else
f p xs
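A small usage check (mine): since this module defines its own filter, calling it unqualified needs import Prelude hiding (filter) at the top (or a qualified call); with that in place:
-- Behaves like the Prelude's filter on ordinary inputs.
evens :: [Int]
evens = filter even [1..10]   -- [2,4,6,8,10]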