I'm taking a functional programming class and I'm having a hard time leaving the OOP mindset behind and finding answers to a lot of my questions.
I have to create a function that takes an ordered list and converts it into sublists of a specified size, using a variation of fold.
This isn't right, but it's what I have:
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | [condition] = foldr (\item subList -> item:subList) [] xs
  | otherwise =
I've been searching and found out that foldr is the variation that works best for what I want, and I think I've understood how fold works; I just don't know how to set up the guards so that when length subList == size, Haskell resets the accumulator and goes on to the next sublist.
If I didn't explain myself correctly, here's the result I want:
> splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Thanks!
While Fabián's and chi's answers are entirely correct, there is actually an option to solve this puzzle using foldr. Consider the following code:
splitList :: Int -> [a] -> [[a]]
splitList n =
  foldr (\el acc -> case acc of
            [] -> [[el]]
            (h : t) | length h < n -> (el : h) : t
            _ -> [el] : acc
        ) []
The strategy here is to build up the result by extending its head group as long as that group's length is less than desired. This solution has, however, two drawbacks:
It does something slightly different than in your example;
splitList 3 [1..10] produces [[1],[2,3,4],[5,6,7],[8,9,10]]
Its complexity is O(n * length l), since we measure the length of an up-to-n-sized list for each element, which yields a linear number of linear operations.
Let's first take care of the first issue. In order to start counting at the beginning we need to traverse the list left-to-right, while foldr does it right-to-left. There is a common trick called "continuation passing" which allows us to reverse the direction of the walk:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse $
  foldr (\el cont acc ->
            case acc of
              [] -> cont [[el]]
              (h : t) | length h < n -> cont ((el : h) : t)
              _ -> cont ([el] : acc)
        ) id l []
Here, instead of building the list in the accumulator we build up a function that will transform the list in the right direction. See this question for details. The side effect is that the lists come out reversed, so we need to counter that by reversing both the whole list and each of its elements. This is linear and tail-recursive, though.
Now let's work on the performance issue. The problem was that length is linear on ordinary cons lists. There are two solutions for this:
Use another structure that caches length for a constant time access
Cache the value by ourselves
Because I guess it is a list exercise, let's go for the latter option:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse . snd $
  foldr (\el cont (countAcc, listAcc) ->
            case listAcc of
              [] -> cont (countAcc, [[el]])
              (h : t) | countAcc < n -> cont (countAcc + 1, (el : h) : t)
              (h : t) -> cont (1, [el] : (h : t))
        ) id l (1, [])
Here we extend our computational state with a counter that at each point stores the current length of the head list. This gives us a constant-time check per element and results in linear time complexity overall.
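As a quick check (a GHCi session, assuming the last definition above is in scope), this version now matches the output asked for in the question:

> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]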
A way to simplify this problem would be to split this into multiple functions. There are two things you need to do:
take n elements from the list, and
keep taking from the list as much as possible.
Let's try taking first:
taking :: Int -> [a] -> [a]
taking n [] = undefined
taking n (x:xs) = undefined
If there are no elements then we cannot take any more, so we can only return an empty list. On the other hand, if we do have an element, then we can think of taking n (x:xs) as x : taking (n-1) xs; we would only need to check that n > 0.
taking n (x:xs)
  | n > 0 = x : taking (n-1) xs
  | otherwise = []
Now, we need to do that multiple times with the remainder, so we should probably also return whatever remains after taking n elements from the list; in this case that is whatever remains when n = 0, so we could try to adapt it to
| otherwise = ([], x:xs)
and then you would need to modify the type signature to return ([a], [a]) and the other 2 definitions to ensure you do return whatever remained after taking n.
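One way the completed helper might look (a sketch filling in those blanks; not necessarily the intended solution):

taking :: Int -> [a] -> ([a], [a])
taking _ [] = ([], [])
taking n (x:xs)
  | n > 0 = let (front, rest) = taking (n - 1) xs in (x : front, rest)
  | otherwise = ([], x:xs)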
With this approach your splitList would look like:
splitList n [] = []
splitList n l = chunk : splitList n remainder
  where (chunk, remainder) = taking n l
Note however that folding would not be appropriate since it "flattens" whatever you are working on, for example given a [Int] you could fold to produce a sum which would be an Int. (foldr :: (a -> b -> b) -> b -> [a] -> b or "foldr function zero list produces an element of the function return type")
You want:
splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Since the "remainder" [10] in on the tail, I recommend you use foldl instead. E.g.
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | size > 0 = foldl go [] xs
  | otherwise = error "need a positive size"
  where go acc x = ....
What should go do? Essentially, on your example, we must have:
splitList 3 [1..10]
= go (splitList 3 [1..9]) 10
= go [[1,2,3],[4,5,6],[7,8,9]] 10
= [[1,2,3],[4,5,6],[7,8,9],[10]]
splitList 3 [1..9]
= go (splitList 3 [1..8]) 9
= go [[1,2,3],[4,5,6],[7,8]] 9
= [[1,2,3],[4,5,6],[7,8,9]]
splitList 3 [1..8]
= go (splitList 3 [1..7]) 8
= go [[1,2,3],[4,5,6],[7]] 8
= [[1,2,3],[4,5,6],[7,8]]
and
splitList 3 [1]
= go [] 1
= [[1]]
Hence, go acc x should
check if acc is empty, if so, produce a singleton list [[x]].
otherwise, check the last list in acc:
if its length is less than size, append x
otherwise, append a new list [x] to acc
Try doing this by hand on your example to understand all the cases.
This will not be efficient, but it will work.
You don't really need the Ord a constraint.
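For reference, one way go could be spelled out following those steps (a sketch of my own, deliberately simple rather than efficient):

splitList :: Int -> [a] -> [[a]]
splitList size xs
  | size > 0 = foldl go [] xs
  | otherwise = error "need a positive size"
  where
    go [] x = [[x]]
    go acc x
      | length (last acc) < size = init acc ++ [last acc ++ [x]]
      | otherwise = acc ++ [[x]]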
Checking the length of the accumulator's first sublist would cause information to flow from the right, so it would potentially be the first chunk, not the last, that ends up shorter. Such a function won't work on infinite lists either (not to mention the foldl-based variants).
A standard way to arrange for the information flow from the left with foldr is using an additional argument. The general scheme is
subLists n xs = foldr g z xs n
  where
    g x r i = cons x i (r (i-1))
    ....
The i argument to cons will guide its decision as to where to add the current element into. The i-1 decrements the counter on the way forward from the left, instead of on the way back from the right. z must have the same type as r and as the foldr itself as a whole, so,
z _ = [[]]
This means there must be a post-processing step, and some edge cases must be handled as well,
subLists n xs = post . foldr g z xs $ n
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i = cons x i (r (i-1))
    ....
cons must be lazy enough not to force the results of the recursive call prematurely.
I leave it as an exercise finishing this up.
For a simpler version with a pre-processing step instead, see this recent answer of mine.
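For completeness, here is one way the missing pieces could be filled in (a sketch under the assumption that n > 0; this cons and post are my guesses, not necessarily the intended ones):

subLists :: Int -> [a] -> [[a]]
subLists n xs = post (foldr g z xs n)
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i = cons x i (r (i-1))
    -- start a fresh group when the counter hits 1; otherwise extend the
    -- head group, using a lazy pattern so the recursive result is not
    -- forced prematurely
    cons x 1 yss = [x] : yss
    cons x _ ~(ys : yss) = (x : ys) : yss
    -- drop the trailing empty group left over from z
    post = filter (not . null)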
Just going to give another answer: this is quite similar to trying to write groupBy as a fold, and actually has a couple gotchas w.r.t. laziness that you have to bear in mind for an efficient and correct implementation. The following is the fastest version I found that maintains all the relevant laziness properties:
splitList :: Int -> [a] -> [[a]]
splitList m xs = snd (foldr f (const ([],[])) xs 1)
  where
    f x a i
      | i <= 1 = let (ys,zs) = a m in ([], (x : ys) : zs)
      | otherwise = let (ys,zs) = a (i-1) in (x : ys, zs)
Here ys and zs, obtained from the recursive processing of the rest of the list, are the first group and the remaining groups into which that rest will be broken up. So we either prepend the current element onto that first group if it is still shorter than needed, or, when the counter says this element completes a group, we close the group off with it and start a new, empty group in front.
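For example (a GHCi check assuming the definition above), the result matches the original question and also streams lazily from an infinite input:

> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]
> take 3 (splitList 3 [1..])
[[1,2,3],[4,5,6],[7,8,9]]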
TL;DR: I want the exact behavior of filter ((== 4) . length) . subsequences. Just using subsequences also creates lists of varying length, which takes a lot of time to process. Since in the end only lists of length 4 are needed, I was thinking there must be a faster way.
I have a list of functions. The list has the type [Wor -> Wor]
The list looks something like this
[f1, f2, f3 .. fn]
What I want is a list of lists of n functions while preserving order like this
input : [f1, f2, f3 .. fn]
argument : 4 functions
output : A list of lists of 4 functions.
Expected output would be such that if there's an f1 in a sublist, it'll always be at the head of that sublist.
If there's an f2 in the sublist and the sublist doesn't have f1, f2 will be at the head. If fn is in the sublist, it'll be last.
In general, if there's an fx in the list, it will never be in front of f(x - 1).
Basically, preserving the main list's order when generating sublists.
It can be assumed that the length of the list will always be greater than the given argument.
I'm just starting to learn Haskell so I haven't tried all that much, but so far this is what I have tried:
Generating combinations with the subsequences function and applying filter ((== 4) . length) to it seems to generate the correct combinations -but it doesn't preserve order- (it does preserve order; I was confusing it with my own function).
So what should I do?
Also if possible, is there a function or a combination of functions present in Hackage or Stackage which can do this? Because I would like to understand the source.
You describe a nondeterministic take:
ndtake :: Int -> [a] -> [[a]]
ndtake 0 _ = [[]]
ndtake n [] = []
ndtake n (x:xs) = map (x:) (ndtake (n-1) xs) ++ ndtake n xs
Either we take an x, and have n-1 more to take from xs; or we don't take the x and have n more elements to take from xs.
Running:
> ndtake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Update: you wanted efficiency. If we're sure the input list is finite, we can aim at stopping as soon as possible:
ndetake n xs = go (length xs) n xs
  where
    go spare n _  | n > spare = []
    go spare n xs | n == spare = [xs]
    go spare 0 _ = [[]]
    go spare n [] = []
    go spare n (x:xs) = map (x:) (go (spare-1) (n-1) xs)
                        ++ go (spare-1) n xs
Trying it:
> length $ ndetake 443 [1..444]
444
The former version seems to be stuck on this input, but the latter one returns immediately.
But it measures the length of the whole list, and needlessly so, as pointed out by @dfeuer in the comments. We can achieve the same improvement in efficiency while retaining a bit more laziness:
ndzetake :: Int -> [a] -> [[a]]
ndzetake n xs | n > 0 =
    go n (length (take n xs) == n) (drop n xs) xs
  where
    go n b p ~(x:xs)
      | n == 0 = [[]]
      | not b = []
      | null p = [(x:xs)]
      | otherwise = map (x:) (go (n-1) b p xs)
                    ++ go n b (tail p) xs
Now the last test works instantly with this code as well.
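As a quick sanity check (assuming the definition above), it agrees with the earlier versions on small inputs:

> ndzetake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]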
There's still room for improvement here. Just as with the library function subsequences, the search space could be explored even more lazily. Right now we have
> take 9 $ ndzetake 3 [1..]
[[1,2,3],[1,2,4],[1,2,5],[1,2,6],[1,2,7],[1,2,8],[1,2,9],[1,2,10],[1,2,11]]
but it could be finding [2,3,4] before forcing the 5 out of the input list. Shall we leave it as an exercise?
Here's the best I've been able to come up with. It answers the challenge Will Ness laid down to be as lazy as possible in the input. In particular, ndtake m ([1..n]++undefined) will produce as many entries as possible before throwing an exception. Furthermore, it strives to maximize sharing among the result lists (note the treatment of end in ndtakeEnding'). It avoids problems with badly balanced list appends using a difference list. This sequence-based version is considerably faster than any pure-list version I've come up with, but I haven't teased apart just why that is. I have the feeling it may be possible to do even better with a better understanding of just what's going on, but this seems to work pretty well.
Here's the general idea. Suppose we ask for ndtake 3 [1..5]. We first produce all the results ending in 3 (of which there is one). Then we produce all the results ending in 4. We do this by (essentially) calling ndtake 2 [1..3] and adding the 4 onto each result. We continue in this manner until we have no more elements.
import qualified Data.Sequence as S
import Data.Sequence (Seq, (|>))
import Data.Foldable (toList)
We will use the following simple utility function. It's almost the same as splitAtExactMay from the 'safe' package, but hopefully a bit easier to understand. For reasons I haven't investigated, letting this produce a result when its argument is negative leads to ndtake with a negative argument being equivalent to subsequences. If you want, you can easily change ndtake to do something else for negative arguments.
-- Change the n <= 0 guard below if you want splitAtMay
-- to return an empty list in the negative case.
splitAtMay :: Int -> [a] -> Maybe ([a], [a])
splitAtMay n xs
  | n <= 0 = Just ([], xs)
splitAtMay _ [] = Nothing
splitAtMay n (x : xs) = flip fmap (splitAtMay (n - 1) xs) $
  \(front, rear) -> (x : front, rear)
Now we really get started. ndtake is implemented using ndtakeEnding, which produces a sort of "difference list", allowing all the partial results to be concatenated cheaply.
ndtake :: Int -> [t] -> [[t]]
ndtake n xs = ndtakeEnding n xs []

ndtakeEnding :: Int -> [t] -> ([[t]] -> [[t]])
ndtakeEnding 0 _xs = ([]:)
ndtakeEnding n xs = case splitAtMay n xs of
    Nothing -> id -- Not enough elements
    Just (front, rear) ->
      (front :) . go rear (S.fromList front)
  where
    -- For each element, produce a list of all combinations
    -- *ending* with that element.
    go [] _front = id
    go (r : rs) front =
      ndtakeEnding' [r] (n - 1) front
      . go rs (front |> r)
ndtakeEnding doesn't call itself recursively. Rather, it calls ndtakeEnding' to calculate the combinations of the front part. ndtakeEnding' is very much like ndtakeEnding, but with a few differences:
We use a Seq rather than a list to represent the input sequence. This lets us split and snoc cheaply, but I'm not yet sure why that seems to give amortized performance that is so much better in this case.
We already know that the input sequence is long enough, so we don't need to check.
We're passed a tail (end) to add to each result. This lets us share tails when possible. There are lots of opportunities for sharing tails, so this can be expected to be a substantial optimization.
We use foldr rather than pattern matching. Doing this manually with pattern matching gives clearer code, but worse constant factors. That's because the :<| and :|> patterns exported from Data.Sequence are non-trivial pattern synonyms that perform a bit of calculation, including amortized O(1) allocation, to build the tail or initial segment, whereas folds don't need to build those.
NB: this implementation of ndtakeEnding' works well for recent GHC and containers; it seems less efficient for earlier versions. That might be the work of Donnacha Kidney on foldr for Data.Sequence. In earlier versions, it might be more efficient to pattern match by hand, using viewl for versions that don't offer the pattern synonyms.
ndtakeEnding' :: [t] -> Int -> Seq t -> ([[t]] -> [[t]])
ndtakeEnding' end 0 _xs = (end:)
ndtakeEnding' end n xs = case S.splitAt n xs of
    (front, rear) ->
      ((toList front ++ end) :) . go rear front
  where
    go = foldr go' (const id) where
      go' r k !front = ndtakeEnding' (r : end) (n - 1) front . k (front |> r)
    -- With patterns, a bit less efficiently:
    -- go Empty _front = id
    -- go (r :<| rs) !front =
    --   ndtakeEnding' (r : end) (n - 1) front
    --   . go rs (front :|> r)
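As a small check of the ordering described above (assuming the code compiles as given, with BangPatterns enabled if your GHC does not enable it by default), the combinations come out grouped by their final element:

> ndtake 3 [1..5]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4],[1,2,5],[1,3,5],[2,3,5],[1,4,5],[2,4,5],[3,4,5]]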
I am new to OCaml and functional programming as a whole. I am working on a part of an assignment where I must simply return the first n elements of a list. I am not allowed to use List.length.
I feel that what I have written is probably overly complicated for what I'm trying to accomplish. What my code attempts to do is concatenate the front of the list onto the end until n is decremented to 1. At that point the head has moved a further n-1 spots, so that what remains is the tail of the list, and I return it. Again, I realize that there is probably a much simpler way to do this, but I am stumped and probably showing my inability to grasp functional programming.
let rec take n l =
let stopNum = 0 - (n - 1) in
let rec subList n lst =
match lst with
| hd::tl -> if n = stopNum then (tl)
else if (0 - n) = 0 then (subList (n - 1 ) tl )
else subList (n - 1) (tl # [hd])
| [] -> [] ;;
My compiler tells me that I have a syntax error on the last line. I get the same result regardless of whether "| [] -> []" is the last line or the one above it. The syntax error does not exist when I take out the nested subList let. Clearly there is something about nested lets that I am just not understanding.
Thanks.
let rec firstk k xs = match xs with
| [] -> failwith "firstk"
| x::xs -> if k=1 then [x] else x::firstk (k-1) xs;;
You might have been looking for this one.
What you have to do here, is to iterate on your initial list l and then add elements of this list in an accumulator until n is 0.
let take n l =
  let rec sub_list n accu l =
    match l with
    | [] -> List.rev accu (* here the list is now empty: reverse the accumulator to restore order and return it *)
    | hd :: tl ->
      if n = 0 then List.rev accu (* if you reach your limit, return your result *)
      else (* make the call to the recursive sub_list function:
              - decrement n,
              - add hd to the accumulator,
              - call with the rest of the list (tl) *)
        sub_list (n - 1) (hd :: accu) tl
  in
  sub_list n [] l
Since you're just starting with FP, I suggest you look for the simplest and most elegant solution. What you're looking for is a way to solve the problem for n by building it up from a solution for a smaller problem.
So the key question is: how could you produce the first n elements of your list if you already had a function that could produce the first (n - 1) elements of a list?
Then you need to solve the "base" cases, the cases that are so simple that the answer is obvious. For this problem I'd say there are two base cases: when n is 0, the answer is obvious; when the list is empty, the answer is obvious.
If you work this through you get a fairly elegant definition.
I'm looking for the best way to partition a list (or seq) so that groups have a given size.
for ex. let's say I want to group with size 2 (this could be any other number though):
let xs = [(a,b,c); (a,b,d); (y,z,y); (w,y,z); (n,y,z)]
let grouped = partitionBySize 2 xs
// => [[(a,b,c);(a,b,d)]; [(y,z,y);(w,y,z)]; [(n,y,z)]]
The obvious way to implement partitionBySize would be by adding the position to every tuple in the input list so that it becomes
[(0,a,b,c), (1,a,b,d), (2,y,z,y), (3,w,y,z), (4,n,y,z)]
and then use GroupBy with
xs |> Seq.ofList |> Seq.groupBy (function | (i,_,_,_) -> i - (i % n))
However this solution doesn't look very elegant to me.
Is there a better way to implement this function (maybe with a built-in function)?
This seems to be a repeating pattern that's not captured by any function in the F# core library. When solving similar problems earlier, I defined a function Seq.groupWhen (see F# snippets) that turns a sequence into groups. A new group is started when the predicate holds.
You could solve the problem using Seq.groupWhen similarly to the Seq.groupBy approach above (by starting a new group at every even index). Unlike the groupBy version, this is efficient, because Seq.groupWhen iterates over the input sequence just once:
[3;3;2;4;1;2;8]
|> Seq.mapi (fun i v -> i, v) // Add indices to the values (as first tuple element)
|> Seq.groupWhen (fun (i, v) -> i%2 = 0) // Start new group after every 2nd element
|> Seq.map (Seq.map snd) // Remove indices from the values
Implementing the function directly using recursion is probably easier - the solution from John does exactly what you need - but if you wanted to see a more general approach then Seq.groupWhen may be interesting.
List.chunkBySize (hat tip: Scott Wlaschin) is now available and does exactly what you're talking about. It appears to be new with F# 4.0.
let grouped = [1..10] |> List.chunkBySize 3
// val grouped : int list list =
// [[1; 2; 3]; [4; 5; 6]; [7; 8; 9]; [10]]
Seq.chunkBySize and Array.chunkBySize are also now available.
Here's a tail-recursive function that traverses the list once.
let chunksOf n items =
let rec loop i acc items =
seq {
match i, items, acc with
//exit if chunk size is zero or input list is empty
| _, [], [] | 0, _, [] -> ()
//counter=0 so yield group and continue looping
| 0, _, _::_ -> yield List.rev acc; yield! loop n [] items
//decrement counter, add head to group, and loop through tail
| _, h::t, _ -> yield! loop (i-1) (h::acc) t
//reached the end of input list, yield accumulated elements
//handles items.Length % n <> 0
| _, [], _ -> yield List.rev acc
}
loop n [] items
Usage
[1; 2; 3; 4; 5]
|> chunksOf 2
|> Seq.toList //[[1; 2]; [3; 4]; [5]]
I like the elegance of Tomas' approach, but I benchmarked both our functions using an input list of 10 million elements. This one clocked in at 9 secs vs 22 for his. Of course, as he admitted, the most efficient method would probably involve arrays/loops.
What about a recursive approach? It only requires a single pass.
let rec partitionBySize length inp dummy =
match inp with
|h::t ->
if dummy |> List.length < length then
partitionBySize length t (h::dummy)
else dummy::(partitionBySize length t (h::[]))
|[] -> dummy::[]
Then invoke it with partitionBySize 2 xs []
let partitionBySize size xs =
let sq = ref (seq xs)
seq {
while (Seq.length !sq >= size) do
yield Seq.take size !sq
sq := Seq.skip size !sq
if not (Seq.isEmpty !sq) then yield !sq
}
// result to list, if you want
|> Seq.map (Seq.toList)
|> Seq.toList
UPDATE
let partitionBySize size (sq:seq<_>) =
seq {
let e = sq.GetEnumerator()
let empty = ref true;
while !empty do
yield seq { for i = 1 to size do
empty := e.MoveNext()
if !empty then yield e.Current
}
}
array slice version:
let partitionBySize size xs =
let xa = Array.ofList xs
let len = xa.Length
[
for i in 0..size..(len-1) do
yield ( if i + size >= len then xa.[i..] else xa.[i..(i+size-1)] ) |> Array.toList
]
Well, I was late for the party. The code below is a tail-recursive version using higher-order functions on List:
let partitionBySize size xs =
let i = size - (List.length xs - 1) % size
let xss, _, _ =
List.foldBack( fun x (acc, ls, j) ->
if j = size then ((x::ls)::acc, [], 1)
else (acc, x::ls, j+1)
) xs ([], [], i)
xss
I did the same benchmark as Daniel did. This function is efficient; it is about 2x faster than his approach on my machine. I also compared it with an array/loop version, and they are comparable in terms of performance.
Moreover, unlike John's answer, this version preserves order of elements in inner lists.
I've just started learning about Functional Programming, using Haskell.
I'm slowly getting through Erik Meijer's lectures on Channel 9 (I've watched the first 4 so far) and in the 4th video Erik explains how tail works, and it fascinated me.
I've tried to write a function that returns the middle of a list (2 items for even lengths, 1 for odd) and I'd like to hear how others would implement it in
The least amount of Haskell code
The fastest Haskell code
If you could explain your choices I'd be very grateful.
My beginner's code looks like this:
middle as | length as > 2 = middle (drop 2 (reverse as))
          | otherwise = as
Just for your amusement, a solution from someone who doesn't speak Haskell:
Write a recursive function that takes two arguments, a1 and a2, and pass your list in as both of them. At each recursion, drop 2 from a2 and 1 from a1. If you're out of elements for a2, you'll be at the middle of a1. You can handle the case of just 1 element remaining in a2 to answer whether you need 1 or 2 elements for your "middle".
I don't make any performance claims, though it only processes the elements of the list once (my assumption is that computing length t is an O(N) operation, so I avoid it), but here's my solution:
mid [] = []   -- Base case: the list is empty ==> no midpt
mid t = m t t -- The 1st t is the slow ptr, the 2nd is fast
  where m (x:_) [_] = [x]       -- Base case: list tracked by the fast ptr has
                                -- exactly one item left ==> the first item
                                -- pointed to by the slow ptr is the midpt.
        m (x:y:_) [_,_] = [x,y] -- Base case: list tracked by the fast ptr has
                                -- exactly two items left ==> the first two
                                -- items pointed to by the slow ptr are the
                                -- midpts
        m (_:t) (_:_:u) = m t u -- Recursive step: advance slow ptr by 1, and
                                -- advance fast ptr by 2.
The idea is to have two "pointers" into the list, one that increments one step at each point in the recursion, and one that increments by two.
(which is essentially what Carl Smotricz suggested)
Two versions
Using pattern matching, tail and init:
middle :: [a] -> [a]
middle l@(_:_:_:_) = middle $ tail $ init l
middle l = l
Using length, take, signum, mod, drop and div:
middle :: [a] -> [a]
middle xs = take (signum ((l + 1) `mod` 2) + 1) $ drop ((l - 1) `div` 2) xs
  where l = length xs
The second one is basically a one-liner (but uses where for readability).
I've tried to write a function that returns the middle of a list (2 items for even lengths, 1 for odd) and I'd like to hear how others would implement it in
The right datastructure for the right problem. In this case, you've specified something that only makes sense on a finite list, right? There is no 'middle' to an infinite list. So just reading the description, we know that the default Haskell list may not be the best solution: we may be paying the price for the laziness even when we don't need it. Notice how many of the solutions have difficulty avoiding 2*O(n) or O(n). Singly-linked lazy lists just don't match a quasi-array-problem too well.
Fortunately, we do have a finite list in Haskell: it's called Data.Sequence.
Let's tackle it the most obvious way: 'index (length / 2)'.
Data.Seq.length is O(1) according to the docs. Data.Seq.index is O(log(min(i,n-i))) (where I think i=index, and n=length). Let's just call it O(log n). Pretty good!
And note that even if we don't start out with a Seq and have to convert a [a] into a Seq, we may still win. Data.Seq.fromList is O(n). So if our rival was a O(n)+O(n) solution like xs !! (length xs), a solution like
middle x = let x' = Seq.fromList x in Seq.index x' (Seq.length x' `div` 2)
will be better, since it would be O(1) + O(log n) + O(n): the dominant cost is the single O(n) conversion pass, rather than the two full O(n) traversals of the rival.
(I leave as an exercise to the reader modifying middle to return 2 items if length be even and 1 if length be odd. And no doubt one could do better with an array with constant-time length and indexing operations, but an array isn't a list, I feel.)
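One way that exercise might be finished (a sketch of my own, not the answer's code), still going through Data.Sequence:

import qualified Data.Sequence as Seq

middle :: [a] -> [a]
middle [] = []
middle xs
  | even n = [Seq.index s (half - 1), Seq.index s half]
  | otherwise = [Seq.index s half]
  where
    s = Seq.fromList xs
    n = Seq.length s
    half = n `div` 2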
Haskell solution inspired by Carl's answer.
middle = m =<< drop 1
where m [] = take 1
m [_] = take 2
m (_:_:ys) = m ys . drop 1
If the sequence is a linked list, traversal of this list is the dominating factor of efficiency. Since we need to know the overall length, we have to traverse the list at least once. There are two equivalent ways to get the middle elements:
Traverse the list once to get the length, then traverse it half to get at the middle elements.
Traverse the list in double steps and single steps at the same time, so that when the first traversal stops, the second traversal is in the middle.
Both need the same number of steps. The second is needlessly complicated, in my opinion.
In Haskell, it might be something like this:
middle xs = take (2 - r) $ drop ((div l 2) + r - 1) xs
  where l = length xs
        r = rem l 2
middle xs =
  let (ms, len) = go xs 0 [] len
  in ms

go (x:xs) i acc len =
  let acc_ = case len `divMod` 2 of
               (m, 0) -> if m == (i+1) then (take 2 (x:xs))
                         else acc
               (m, 1) -> if m == i then [x]
                         else acc
  in go xs (i+1) acc_ len
go [] i acc _ = (acc, i)
This solution traverses the list just once using lazy evaluation. While it traverses the list, it calculates the length and then backfeeds it to the function:
let (ms, len) = go xs 0 [] len
Now the middle elements can be calculated:
let acc_ = case len `divMod` 2 of
...
F# solution based on Carl's answer:
let halve_list l =
let rec loop acc1 = function
| x::xs, [] -> List.rev acc1, x::xs
| x::xs, [y] -> List.rev (x::acc1), xs
| x::xs, y::y'::ys -> loop (x::acc1) (xs, ys)
| [], _ -> [], []
loop [] (l, l)
It's pretty easy to modify to get the median elements in the list too:
let median l =
let rec loop acc1 = function
| x::xs, [] -> [List.head acc1; x]
| x::xs, [y] -> [x]
| x::xs, y::y'::ys -> loop (x::acc1) (xs, ys)
| [], _ -> []
loop [] (l, l)
A more intuitive approach uses a counter:
let halve_list2 l =
let rec loop acc = function
| (_, []) -> [], []
| (0, rest) -> List.rev acc, rest
| (n, x::xs) -> loop (x::acc) (n - 1, xs)
let count = (List.length l) / 2
loop [] (count, l)
And a really ugly modification to get the median elements:
let median2 l =
    let rec loop acc = function
        | (n, [], isEven) -> []
        | (0, rest, isEven) ->
            (match rest, isEven with
             | x::xs, true -> [List.head acc; x]
             | x::xs, false -> [x]
             | _, _ -> failwith "Should never happen")
        | (n, x::xs, isEven) -> loop (x::acc) (n - 1, xs, isEven)
    let len = List.length l
    let count = len / 2
    let isEven = if len % 2 = 0 then true else false
    loop [] (count, l, isEven)
Getting the length of a list requires traversing its entire contents at least once. Fortunately, it's perfectly easy to write your own list data structure which holds the length of the list in each node, allowing you to get the length in O(1).
Weird that this perfectly obvious formulation hasn't come up yet:
middle [] = []
middle [x] = [x]
middle [x,y] = [x,y]
middle xs = middle $ init $ tail xs
A very straightforward, yet unelegant and not so terse solution might be:
middle :: [a] -> Maybe [a]
middle xs
  | len <= 2 = Nothing
  | even len = Just $ take 2 . drop (half - 1) $ xs
  | odd len = Just $ take 1 . drop (half) $ xs
  where
    len = length xs
    half = len `div` 2
This iterates twice over the list.
mid xs = m where
  l = length xs
  m | l `elem` [0..2] = xs
  m | odd l = take 1 $ drop (l `div` 2) $ xs
  m | otherwise = take 2 $ drop (l `div` 2 - 1) $ xs
I live for one liners, although this example only works for odd-length lists whose elements are all distinct (because of nub). I just want to stretch my brain! Thank you for the fun =)
foo d = map (\(Just a) -> a) $ filter (/=Nothing) $ zipWith (\a b -> if a == b then Just a else Nothing) (Data.List.nub d) (Data.List.nub $ reverse d)
I'm not much of a haskeller myself but I tried this one.
First the tests (yes, you can do TDD using Haskell)
module Main
where
import Test.HUnit
import Middle
main = do runTestTT tests
tests = TestList [ test1
, test2
, test3
, test4
, test_final1
, test_final2
]
test1 = [0] ~=? middle [0]
test2 = [0, 1] ~=? middle [0, 1]
test3 = [1] ~=? middle [0, 1, 2]
test4 = [1, 2] ~=? middle [0, 1, 2, 3]
test_final1 = [3] ~=? middle [0, 1, 2, 3, 4, 5, 6]
test_final2 = [3, 4] ~=? middle [0, 1, 2, 3, 4, 5, 6, 7]
And the solution I came to:
module Middle
where
middle a = midlen a (length a)
midlen (a:xs) 1 = [a]
midlen (a:b:xs) 2 = [a, b]
midlen (a:xs) lg = midlen xs (lg - (2))
It will traverse the list twice, once for getting the length and half again to get the middle, but I don't care, it's still O(n) (and getting the middle of something implies getting its length, so no reason to avoid it).
My solution, I like to keep things simple:
middle [] = []
middle xs | odd (length xs) = [xs !! ((length xs) `div` 2)]
          | otherwise = [(reverse $ xs) !! ((length xs) `div` 2), xs !! ((length xs) `div` 2)]
Use of !! in Data.List as the function to get the value at a given index, which in this case is half the length of the list.
Edit: it actually works now
I like Svante's answer. My version:
> middle :: [a] -> [a]
> middle [] = []
> middle xs = take (r+1) . drop d $ xs
> where
> (d,r) = (length xs - 1) `divMod` 2
Here is my version. It was just a quick run up. I'm sure it's not very good.
middleList xs@(_:_:_:_) = take (if odd n then 1 else 2) $ drop en xs
  where n = length xs
        en = (n - 1) `div` 2
middleList xs = xs
I tried. :)
If anyone feels like commenting and telling me how awful or good this solution is, I would deeply appreciate it. I'm not very well versed in Haskell.
EDIT: Improved with suggestions from kmc on #haskell-blah
EDIT 2: Can now accept input lists with a length of less than 5.
Another one-line solution:
import Control.Monad (ap)

middle = ap (take . (1 +) . signum . (`mod` 2) . (1 +) . length) $ drop =<< (`div` 2) . subtract 1 . length