I know what the following function does I would just like an explanation of how it works and the calculations that take place:
sponge :: Int -> [a] -> [a]
sponge 0 xs = xs
sponge n [] = []
sponge n (x:xs) = sponge (n-1) xs
I just seem to have lost the plot with it all now :(
Any help to get me back on track would be much appreciated! :)
It's a recursive function over two variables. You can break it apart line-by-line to understand it:
sponge :: Int -> [a] -> [a]
Two arguments, one an Int, one a list of some elements.
sponge 0 xs = xs
The base case. If the Int argument is zero, just return the list argument unmodified.
sponge n [] = []
Another base case, if the list is empty, immediately return the empty list.
sponge n (x:xs) = sponge (n-1) xs
Finally, the inductive step. If the list is non-empty (i.e. made up of at least one element and a tail, denoted by x:xs), then the result is sponge called on n-1 and the tail of the list.
So what will this function do? It will return what remains of the list after dropping n elements. It is the same as the drop function:
> drop 10 [1..20]
[11,12,13,14,15,16,17,18,19,20]
And
> sponge 10 [1..20]
[11,12,13,14,15,16,17,18,19,20]
In fact, we can ask QuickCheck to confirm:
> quickCheck $ \n xs -> sponge n xs == drop n xs
*** Failed! Falsifiable (after 7 tests and 5 shrinks):
-1
[()]
Ah! They're different. When n is negative! So we can modify the property relating the two functions:
> quickCheck $ \n xs -> n >= 0 ==> sponge n xs == drop n xs
+++ OK, passed 100 tests.
So your function behaves like drop for non-negative n.
Here's a trace of the intermediate values of n and xs as the recursion unwinds:
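(Worked out by hand from the three equations above, on a small example input.)
sponge 3 [10,20,30,40,50]
= sponge 2 [20,30,40,50]    -- third equation: n = 3, x = 10
= sponge 1 [30,40,50]       -- third equation: n = 2, x = 20
= sponge 0 [40,50]          -- third equation: n = 1, x = 30
= [40,50]                   -- first equation: n = 0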
It takes two parameters, as you can see: an Int and a list. It pattern-matches to distinguish three cases: 1) the Int is zero; 2) the list is empty; or 3) the Int is non-zero and the list is non-empty.
In case 1 it returns the list; in case 2, it returns the empty list (which is what the second parameter was anyway); in case 3, it recursively calls itself with original Int parameter minus 1 and the original list minus its first element.
It looks a lot like "drop" from the Prelude.
Related
I'm taking a functional programming class and I'm having a hard time leaving the OOP mindset behind and finding answers to a lot of my questions.
I have to create a function that takes an ordered list and converts it into specified size sublists using a variation of fold.
This isn't right, but it's what I have:
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
| [condition] = foldr (\item subList -> item:subList) [] xs
| otherwise =
I've been searching and I found out that foldr is the variation that works better for what I want, and I think I've understood how fold works. I just don't know how to set up the guards so that when length sublist == size, Haskell resets the accumulator and goes on to the next sublist.
If I didn't explain myself correctly, here's the result I want:
> splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Thanks!
While Fabián's and chi's answers are entirely correct, there is actually an option to solve this puzzle using foldr. Consider the following code:
splitList :: Int -> [a] -> [[a]]
splitList n =
foldr (\el acc -> case acc of
[] -> [[el]]
(h : t) | length h < n -> (el : h) : t
_ -> [el] : acc
) []
The strategy here is to build up a list by extending its head as long as its length is less than desired. This solution has, however, two drawbacks:
It does something slightly different than in your example;
splitList 3 [1..10] produces [[1],[2,3,4],[5,6,7],[8,9,10]]
Its complexity is O(n * length l), as we measure the length of an up-to-n-sized list for each element, which yields a linear number of linear operations.
Let's first take care of the first issue. In order to start counting at the beginning we need to traverse the list left-to-right, while foldr does it right-to-left. There is a common trick called "continuation passing" which will allow us to reverse the direction of the walk:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse $
foldr (\el cont acc ->
case acc of
[] -> cont [[el]]
(h : t) | length h < n -> cont ((el : h) : t)
_ -> cont ([el] : acc)
) id l []
Here, instead of building the list in the accumulator, we build up a function that will transform the list in the right direction. See this question for details. The side effect is that the list comes out reversed, so we need to counter that by reversing the whole list and each of its elements. This still runs in linear time and is tail-recursive, though.
Now let's work on the performance issue. The problem was that the length is linear on casual lists. There are two solutions for this:
Use another structure that caches length for a constant time access
Cache the value by ourselves
Because I guess it is a list exercise, let's go for the latter option:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse . snd $
foldr (\el cont (countAcc, listAcc) ->
case listAcc of
[] -> cont (countAcc, [[el]])
(h : t) | countAcc < n -> cont (countAcc + 1, (el : h) : t)
(h : t) -> cont (1, [el] : (h : t))
) id l (1, [])
Here we extend our computational state with a counter that at each point stores the current length of the chunk being built. This gives us a constant-time check per element and results in linear time complexity overall.
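As a quick sanity check of this final version, it should reproduce the result from the question (this is the output I expect from tracing the code by hand):
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]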
A way to simplify this problem would be to split this into multiple functions. There are two things you need to do:
take n elements from the list, and
keep taking from the list as much as possible.
Lets try taking first:
taking :: Int -> [a] -> [a]
taking n [] = undefined
taking n (x:xs) = undefined
If there are no elements then we cannot take any more, so we can only return an empty list. On the other hand, if we do have an element, then we can think of taking n (x:xs) as x : taking (n-1) xs; we only need to check that n > 0.
taking n (x:xs)
| n > 0 = x :taking (n-1) xs
| otherwise = []
Now, we need to do that multiple times with the remainder, so we should probably also return whatever remains after taking n elements from the list. In this case that is whatever is left once n reaches 0, so we could try to adapt it to
| otherwise = ([], x:xs)
and then you would need to modify the type signature to return ([a], [a]) and the other 2 definitions to ensure you do return whatever remained after taking n.
With this approach your splitList would look like:
splitList n [] = []
splitList n l = chunk : splitList n remainder
where (chunk, remainder) = taking n l
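For reference, here is one way the finished taking could look once it returns the pair; it is essentially the Prelude's splitAt, and this is only a sketch of the suggestion above, not the only way to write it:
taking :: Int -> [a] -> ([a], [a])
taking _ []     = ([], [])                  -- nothing left: empty chunk, empty remainder
taking n (x:xs)
  | n > 0     = let (chunk, rest) = taking (n - 1) xs
                in (x : chunk, rest)        -- keep x and take n - 1 more from the tail
  | otherwise = ([], x:xs)                  -- n used up: everything left is the remainder
With that in place, splitList 3 [1..10] should give [[1,2,3],[4,5,6],[7,8,9],[10]] (for positive n).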
Note however that folding would not be appropriate since it "flattens" whatever you are working on, for example given a [Int] you could fold to produce a sum which would be an Int. (foldr :: (a -> b -> b) -> b -> [a] -> b or "foldr function zero list produces an element of the function return type")
You want:
splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Since the "remainder" [10] in on the tail, I recommend you use foldl instead. E.g.
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
| size > 0 = foldl go [] xs
| otherwise = error "need a positive size"
where go acc x = ....
What should go do? Essentially, on your example, we must have:
splitList 3 [1..10]
= go (splitList 3 [1..9]) 10
= go [[1,2,3],[4,5,6],[7,8,9]] 10
= [[1,2,3],[4,5,6],[7,8,9],[10]]
splitList 3 [1..9]
= go (splitList 3 [1..8]) 9
= go [[1,2,3],[4,5,6],[7,8]] 9
= [[1,2,3],[4,5,6],[7,8,9]]
splitList 3 [1..8]
= go (splitList 3 [1..7]) 8
= go [[1,2,3],[4,5,6],[7]] 8
= [[1,2,3],[4,5,6],[7,8]]
and
splitList 3 [1]
= go [] 1
= [[1]]
Hence, go acc x should
check if acc is empty, if so, produce a singleton list [[x]].
otherwise, check the last list in acc:
if its length is less than size, append x
otherwise, append a new list [x] to acc
Try doing this by hand on your example to understand all the cases.
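One possible way to fill in go along those lines (a sketch meant to slot into the where clause of the skeleton above, where size is in scope; it is deliberately simple rather than efficient):
go []  x = [[x]]                                                -- empty accumulator: start the first sublist
go acc x
  | length lastChunk < size = init acc ++ [lastChunk ++ [x]]    -- still room: extend the last sublist
  | otherwise               = acc ++ [[x]]                      -- last sublist is full: start a new one
  where lastChunk = last acc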
This will not be efficient, but it will work.
You don't really need the Ord a constraint.
Checking the accumulator's first sublist's length would lead to information flow from the right, and potentially the first chunk, rather than the last, would end up being the short one. Such a function won't work on infinite lists either (not to mention the foldl-based variants).
A standard way to arrange for the information flow from the left with foldr is using an additional argument. The general scheme is
subLists n xs = foldr g z xs n
where
g x r i = cons x i (r (i-1))
....
The i argument to cons will guide its decision as to where to add the current element. The i-1 decrements the counter on the way forward from the left, instead of on the way back from the right. z must have the same type as r and as the foldr itself as a whole, so,
z _ = [[]]
This means there must be a post-processing step, and some edge cases must be handled as well,
subLists n xs = post . foldr g z xs $ n
where
z _ = [[]]
g x r i | i == 1 = cons x i (r n)
g x r i = cons x i (r (i-1))
....
cons must be lazy enough not to force the results of the recursive call prematurely.
I leave it as an exercise finishing this up.
For a simpler version with a pre-processing step instead, see this recent answer of mine.
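For the curious, here is one possible way to finish it; the particular cons and post-processing step are guesses at what was intended, not necessarily the author's own solution, and it assumes n > 0:
subLists :: Int -> [a] -> [[a]]
subLists n xs = post . foldr g z xs $ n
  where
  z _ = [[]]
  g x r i | i == 1 = cons x i (r n)        -- x is the last element of its chunk: restart the counter
  g x r i          = cons x i (r (i-1))    -- x belongs to the chunk currently being filled
  cons x 1 rest          = [x] : rest      -- close the chunk: earlier elements get prepended to it later
  cons x _ ~(ys : yss)   = (x : ys) : yss  -- extend the chunk in front, lazily in the recursive result
  post = filter (not . null)               -- the seed z leaves one empty chunk at the very end; drop it
This puts the short chunk, if any, at the end, and it should also work on infinite lists, e.g. take 2 (subLists 3 [1..]) giving [[1,2,3],[4,5,6]].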
Just going to give another answer: this is quite similar to trying to write groupBy as a fold, and actually has a couple gotchas w.r.t. laziness that you have to bear in mind for an efficient and correct implementation. The following is the fastest version I found that maintains all the relevant laziness properties:
splitList :: Int -> [a] -> [[a]]
splitList m xs = snd (foldr f (const ([],[])) xs 1)
where
f x a i
| i <= 1 = let (ys,zs) = a m in ([], (x : ys) : zs)
| otherwise = let (ys,zs) = a (i-1) in (x : ys , zs)
The ys and zs obtained from the recursive processing of the rest of the list are, respectively, the first group still being built and the finished groups into which that rest of the list gets broken up. We prepend the current element onto that first group while it is still shorter than needed; once prepending the element makes it exactly the desired length, we also move it into the finished groups and continue with a new, empty group.
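If I've traced it correctly, it behaves like this, including on an infinite input (expected results, not captured GHCi output):
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]
> take 2 $ splitList 3 [1..]
[[1,2,3],[4,5,6]]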
TL;DR: I want the exact behavior as filter ((== 4) . length) . subsequences. Just using subsequences also creates variable length of lists, which takes a lot of time to process. Since in the end only lists of length 4 are needed, I was thinking there must be a faster way.
I have a list of functions. The list has the type [Wor -> Wor]
The list looks something like this
[f1, f2, f3 .. fn]
What I want is a list of lists of n functions while preserving order like this
input : [f1, f2, f3 .. fn]
argument : 4 functions
output : A list of lists of 4 functions.
In the expected output, if there's an f1 in a sublist, it will always be at the head of that sublist.
If there's an f2 in a sublist and the sublist doesn't contain f1, f2 will be at the head. If fn is in a sublist, it will be last.
In general, if an fx is in a sublist, it will never come before f(x - 1).
Basically preserving the main list's order when generating sublists.
It can be assumed that the length of the list will always be greater than the given argument.
I'm just starting to learn Haskell so I haven't tried all that much, but so far this is what I have tried:
Generating permutations with the subsequences function and applying filter ((== 4) . length) to it seems to generate the correct lists, and it does preserve order (I thought it didn't, but I was confusing it with my own function).
So what should I do?
Also if possible, is there a function or a combination of functions present in Hackage or Stackage which can do this? Because I would like to understand the source.
You describe a nondeterministic take:
ndtake :: Int -> [a] -> [[a]]
ndtake 0 _ = [[]]
ndtake n [] = []
ndtake n (x:xs) = map (x:) (ndtake (n-1) xs) ++ ndtake n xs
Either we take an x, and have n-1 more to take from xs; or we don't take the x and have n more elements to take from xs.
Running:
> ndtake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Update: you wanted efficiency. If we're sure the input list is finite, we can aim at stopping as soon as possible:
ndetake n xs = go (length xs) n xs
where
go spare n _ | n > spare = []
go spare n xs | n == spare = [xs]
go spare 0 _ = [[]]
go spare n [] = []
go spare n (x:xs) = map (x:) (go (spare-1) (n-1) xs)
++ go (spare-1) n xs
Trying it:
> length $ ndetake 443 [1..444]
444
The former version seems to be stuck on this input, but the latter one returns immediately.
But, it measures the length of the whole list, and needlessly so, as pointed out by #dfeuer in the comments. We can achieve the same improvement in efficiency while retaining a bit more laziness:
ndzetake :: Int -> [a] -> [[a]]
ndzetake n xs | n > 0 =
go n (length (take n xs) == n) (drop n xs) xs
where
go n b p ~(x:xs)
| n == 0 = [[]]
| not b = []
| null p = [(x:xs)]
| otherwise = map (x:) (go (n-1) b p xs)
++ go n b (tail p) xs
Now the last test works instantly with this code as well.
There's still room for improvement here. Just as with the library function subsequences, the search space could be explored even more lazily. Right now we have
> take 9 $ ndzetake 3 [1..]
[[1,2,3],[1,2,4],[1,2,5],[1,2,6],[1,2,7],[1,2,8],[1,2,9],[1,2,10],[1,2,11]]
but it could be finding [2,3,4] before forcing the 5 out of the input list. Shall we leave it as an exercise?
Here's the best I've been able to come up with. It answers the challenge Will Ness laid down to be as lazy as possible in the input. In particular, ndtake m ([1..n]++undefined) will produce as many entries as possible before throwing an exception. Furthermore, it strives to maximize sharing among the result lists (note the treatment of end in ndtakeEnding'). It avoids problems with badly balanced list appends using a difference list. This sequence-based version is considerably faster than any pure-list version I've come up with, but I haven't teased apart just why that is. I have the feeling it may be possible to do even better with a better understanding of just what's going on, but this seems to work pretty well.
Here's the general idea. Suppose we ask for ndtake 3 [1..5]. We first produce all the results ending in 3 (of which there is one). Then we produce all the results ending in 4. We do this by (essentially) calling ndtake 2 [1..3] and adding the 4 onto each result. We continue in this manner until we have no more elements.
import qualified Data.Sequence as S
import Data.Sequence (Seq, (|>))
import Data.Foldable (toList)
We will use the following simple utility function. It's almost the same as splitAtExactMay from the 'safe' package, but hopefully a bit easier to understand. For reasons I haven't investigated, letting this produce a result when its argument is negative leads to ndtake with a negative argument being equivalent to subsequences. If you want, you can easily change ndtake to do something else for negative arguments, for example to return an empty list in the negative case.
splitAtMay :: Int -> [a] -> Maybe ([a], [a])
splitAtMay n xs
| n <= 0 = Just ([], xs)
splitAtMay _ [] = Nothing
splitAtMay n (x : xs) = flip fmap (splitAtMay (n - 1) xs) $
\(front, rear) -> (x : front, rear)
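For example, it should behave like this (a quick check of the definition above, worked out by hand):
> splitAtMay 2 [1,2,3]
Just ([1,2],[3])
> splitAtMay 4 [1,2,3]
Nothing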
Now we really get started. ndtake is implemented using ndtakeEnding, which produces a sort of "difference list", allowing all the partial results to be concatenated cheaply.
ndtake :: Int -> [t] -> [[t]]
ndtake n xs = ndtakeEnding n xs []
ndtakeEnding :: Int -> [t] -> ([[t]] -> [[t]])
ndtakeEnding 0 _xs = ([]:)
ndtakeEnding n xs = case splitAtMay n xs of
Nothing -> id -- Not enough elements
Just (front, rear) ->
(front :) . go rear (S.fromList front)
where
-- For each element, produce a list of all combinations
-- *ending* with that element.
go [] _front = id
go (r : rs) front =
ndtakeEnding' [r] (n - 1) front
. go rs (front |> r)
ndtakeEnding doesn't call itself recursively. Rather, it calls ndtakeEnding' to calculate the combinations of the front part. ndtakeEnding' is very much like ndtakeEnding, but with a few differences:
We use a Seq rather than a list to represent the input sequence. This lets us split and snoc cheaply, but I'm not yet sure why that seems to give amortized performance that is so much better in this case.
We already know that the input sequence is long enough, so we don't need to check.
We're passed a tail (end) to add to each result. This lets us share tails when possible. There are lots of opportunities for sharing tails, so this can be expected to be a substantial optimization.
We use foldr rather than pattern matching. Doing this manually with pattern matching gives clearer code, but worse constant factors. That's because the :<| and :|> patterns exported from Data.Sequence are non-trivial pattern synonyms that perform a bit of calculation, including amortized O(1) allocation, to build the tail or initial segment, whereas folds don't need to build those.
NB: this implementation of ndtakeEnding' works well for recent GHC and containers; it seems less efficient for earlier versions. That might be the work of Donnacha Kidney on foldr for Data.Sequence. In earlier versions, it might be more efficient to pattern match by hand, using viewl for versions that don't offer the pattern synonyms.
ndtakeEnding' :: [t] -> Int -> Seq t -> ([[t]] -> [[t]])
ndtakeEnding' end 0 _xs = (end:)
ndtakeEnding' end n xs = case S.splitAt n xs of
(front, rear) ->
((toList front ++ end) :) . go rear front
where
go = foldr go' (const id) where
go' r k !front = ndtakeEnding' (r : end) (n - 1) front . k (front |> r)
-- With patterns, a bit less efficiently:
-- go Empty _front = id
-- go (r :<| rs) !front =
-- ndtakeEnding' (r : end) (n - 1) front
-- . go rs (front :|> r)
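As a quick sanity check, this Seq-based version should agree with the simple ndtake from earlier in the thread, for example (expected output, worked out by hand):
> ndtake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]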
I have been working with Haskell for a little over a week now, so I am practicing some functions that might be useful for something. I want to compare two lists recursively. When the first list appears in the second list, I simply want to return the index at which the match starts. The index begins at 0. Here is an example of what I want to execute, for clarification:
subList [1,2,3] [4,4,1,2,3,5,6]
the result should be 2
I have attempted to code it:
subList :: [a] -> [a] -> a
subList [] = []
subList (x:xs) = x + 1 (subList xs)
subList xs = [ y:zs | (y,ys) <- select xs, zs <- subList ys]
where select [] = []
select (x:xs) = x
I am receiving an "error on input" and I cannot figure out why my syntax is not working. Any suggestions?
Let's first look at the function signature. You want to take in two lists whose contents can be compared for equality and return an index like so
subList :: Eq a => [a] -> [a] -> Int
So now we go through pattern matching on the arguments. First off, when the second list is empty then there is nothing we can do, so we'll return -1 as an error condition
subList _ [] = -1
Then we look at the recursive step
subList as xxs@(x:xs)
| all (uncurry (==)) $ zip as xxs = 0
| otherwise = 1 + subList as xs
You should be familiar with the guard syntax I've used, although you may not be familiar with the @ syntax. Essentially it binds the name xxs to the whole argument while still letting us pattern-match on it as (x:xs).
You may not be familiar with all, uncurry, and possibly zip, so let me elaborate on those more. zip has the function signature zip :: [a] -> [b] -> [(a,b)], so it takes two lists and pairs up their elements (and if one list is longer than the other, it just chops off the excess). uncurry is weird, so let's just look at (uncurry (==)): its signature is (uncurry (==)) :: Eq a => (a, a) -> Bool, and it checks whether the first and second elements of a pair are equal. Finally, all walks over the list applying that check to every pair and returns True only if they all pass.
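Putting the pieces together, the whole thing could look like this (assembled from the fragments above; note that because -1 is the 'not found' sentinel, the 1 + ... arithmetic will turn a failed search into a meaningless index, so check for that case separately if it matters):
subList :: Eq a => [a] -> [a] -> Int
subList _  []  = -1                                      -- ran out of places to look
subList as xxs@(x:xs)
  | all (uncurry (==)) (zip as xxs) = 0                  -- the first list matches right here
  | otherwise                       = 1 + subList as xs  -- otherwise try one position further along
Running it on the question's example:
> subList [1,2,3] [4,4,1,2,3,5,6]
2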
Here is a sample problem I'm working on:
Example Input: test [4, 1, 5, 6] 6 returns 5
I'm solving this using this function:
test :: [Int] -> Int -> Int
test [] _ = 0
test (x:xs) time = if (time - x) < 0
then x
else test xs $ time - x
Is there a better way to write this function (perhaps using a built-in higher-order function)?
How about
test xs time = maybe 0 id . fmap snd . find ((>time) . fst) $ zip sums xs
where sums = scanl1 (+) xs
or equivalently with that sugary list comprehension
test xs time = headDef 0 $ [v | (s, v) <- zip sums xs, s > time]
where sums = scanl1 (+) xs
headDef is provided by safe. It's trivial to implement (f _ (x:_) = x; f x _ = x) but the safe package has loads of useful functions like these so it's good to check out.
This sums the list up to each point and finds the first occurrence greater than time. scanl is a useful function that behaves like foldl but keeps intermediate results, and zip zips two lists into a list of tuples. Then we just use fmap and maybe to manipulate the Maybe (Integer, Integer) to get our result.
This defaults to 0 like yours but I like the version that simply goes to Maybe Integer better from a user point of view, to get this simply remove the maybe 0 id.
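For reference, here is a self-contained version of the first variant; find lives in Data.List, and the type is specialised to Int just to keep the example concrete:
import Data.List (find)

test :: [Int] -> Int -> Int
test xs time = maybe 0 id . fmap snd . find ((> time) . fst) $ zip sums xs
  where sums = scanl1 (+) xs
Running the question's example should then give:
> test [4, 1, 5, 6] 6
5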
You might like scanl and its close relative, scanl1. For example:
test_ xs time = [curr | (curr, tot) <- zip xs (scanl1 (+) xs), tot > time]
This finds all the places where the running sum is greater than time. Then you can pick the first one (or 0) like this:
safeHead def xs = head (xs ++ [def])
test xs time = safeHead 0 (test_ xs time)
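Checking this against the example from the question (the result I'd expect from working it through by hand):
> test [4, 1, 5, 6] 6
5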
This is verbose, and I don't necessarily recommend writing such a simple function like this (IMO the pattern matching & recursion is plenty clear). But, here's a pretty declarative pipeline:
import Control.Error
import Data.List
deadline :: (Num a, Ord a) => a -> [a] -> a
deadline time = fromMaybe 0 . findDeadline time
findDeadline :: (Num a, Ord a) => a -> [a] -> Maybe a
findDeadline time xs = decayWithDifferences time xs
>>= findIndex (< 0)
>>= atMay xs
decayWithDifferences :: Num b => b -> [b] -> Maybe [b]
decayWithDifferences time = tailMay . scanl (-) time
-- > deadline 6 [4, 1, 5, 6]
-- 5
This documents the code a bit and in principle lets you test a little better, though IMO these functions fit more-or-less into the 'obviously correct' category.
You can verify that it matches your implementation:
import Test.QuickCheck
prop_equality :: [Int] -> Int -> Bool
prop_equality xs time = test xs time == deadline time xs
-- > quickCheck prop_equality
-- +++ OK, passed 100 tests.
In this particular case the zipping suggested by others is not quite necessary:
test xs time = head $ [y-x | (x:y:_) <- tails $ scanl1 (+) $ 0:xs, y > time]++[0]
Here scanl1 will produce a list of rolling sums of the list xs, starting it with 0. Therefore, tails will produce a list with at least one list having two elements for non-empty xs. Pattern-matching (x:y:_) extracts two elements from each tail of rolling sums, so in effect it enumerates pairs of neighbouring elements in the list of rolling sums. Filtering on the condition, we reconstruct a part of the list that starts with the first element that produces a rolling sum greater than time. Then use headDef 0 as suggested before, or append a [0], so that head always returns something.
If you want to retain readability, I would just stick with your current solution. It's easy to understand, and isn't doing anything wrong.
Just because you can make it into a one line scan map fold mutant doesn't mean that you should!
I have some code which is designed to replace a value in a list
replaceNth n newVal (x:xs)
| n == 0 = newVal:xs
| otherwise = x:replaceNth (n-1) newVal xs
For example, when I load the function into GHCI, I enter and get the following:
*Main> replaceNth 3 4 [3,3,3,3,3]
[3,3,3,4,3]
However, I am trying to use this function on multiple lists within a list and can't seem to do so.
What I want is to get a result like this:
[[3,3,3,3,3],[3,3,3,2,3],[3,3,3,3,3]] (note the 2 in the second sublist)
From this [[3,3,3,3,3],[3,3,3,3,3],[3,3,3,3,3]]
using something like the function above.
Your function is not general enough to handle the task you wish it to perform. In particular, you need to know what the replacement value will be before you call the function. To get this working you might either:
Select the nth list, compute the new list then use your function to put that replacement in the list of lists. OR (and better)
Make a more general function that instead of taking a new value takes a function from the old value to the new:
Example
replaceNth' :: Int -> (a -> a) -> [a] -> [a]
replaceNth' n f (x:xs)
| n == 0 = (f x):xs
| otherwise = x:replaceNth' (n-1) f xs
Now to solve your second problem:
let ls = [[3,3,3,3,3],[3,3,3,3,3],[3,3,3,3,3]]
in replaceNth' 1 (replaceNth' 3 (const 2)) ls
That is, replace the second list with a list made by taking the fourth element of that list and replacing whatever it is with 2.
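In GHCi, that expression should yield (assuming replaceNth' as defined above):
> let ls = [[3,3,3,3,3],[3,3,3,3,3],[3,3,3,3,3]]
> replaceNth' 1 (replaceNth' 3 (const 2)) ls
[[3,3,3,3,3],[3,3,3,2,3],[3,3,3,3,3]]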
Make a function that applies a function to the nth element of a list instead. Then you can easily get what you want by composing that with itself and using const for the inner replacement.
perhaps this does what you want (applied to the list of lists):
replaceNth 1 (replaceNth 3 4 [3,3,3,3,3])
Using your existing definition:
ghci> let arg = [[3,3,3,3,3],[3,3,3,3,3],[3,3,3,3,3]]
ghci> replaceNth 1 (replaceNth 3 2 (arg !! 1)) arg
[[3,3,3,3,3],[3,3,3,2,3],[3,3,3,3,3]]
ghci>
To refactor it into a function:
replaceMthNth m n v arg = replaceNth m (replaceNth n v (arg !! m)) arg
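With the same arg as above, this should give the same result:
> replaceMthNth 1 3 2 arg
[[3,3,3,3,3],[3,3,3,2,3],[3,3,3,3,3]]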