Better way to solve this [Int] -> Int -> Int - list

Here is a sample problem I'm working on:
Example Input: test [4, 1, 5, 6] 6 returns 5
I'm solving this using this function:
test :: [Int] -> Int -> Int
test [] _ = 0
test (x:xs) time = if (time - x) < 0
                   then x
                   else test xs $ time - x
Is there a better way to write this function (perhaps using a built-in higher-order function)?

How about
test xs time = maybe 0 id . fmap snd . find ((> time) . fst) $ zip sums xs
  where sums = scanl1 (+) xs
or equivalently with that sugary list comprehension
test xs time = headDef 0 $ [v | (s, v) <- zip sums xs, s > time]
  where sums = scanl1 (+) xs
headDef is provided by safe. It's trivial to implement (headDef d (x:_) = x; headDef d _ = d), but the safe package has loads of useful functions like these, so it's worth checking out.
This sums the list up to each point and finds the first occurrence greater than time. scanl is a useful function that behaves like foldl but keeps intermediate results, and zip zips two lists into a list of tuples. Then we just use fmap and maybe to manipulate the Maybe (Integer, Integer) to get our result.
This defaults to 0 like yours, but I like the version that simply returns a Maybe Integer better from a user's point of view; to get that, simply remove the maybe 0 id.
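To see the moving parts, here is a quick ghci walkthrough on the sample input (find comes from Data.List):
λ> let xs = [4, 1, 5, 6]
λ> scanl1 (+) xs
[4,5,10,16]
λ> zip (scanl1 (+) xs) xs
[(4,4),(5,1),(10,5),(16,6)]
λ> find ((> 6) . fst) (zip (scanl1 (+) xs) xs)
Just (10,5)
λ> maybe 0 id . fmap snd $ find ((> 6) . fst) (zip (scanl1 (+) xs) xs)
5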

You might like scanl and its close relative, scanl1. For example:
test_ xs time = [curr | (curr, tot) <- zip xs (scanl1 (+) xs), tot > time]
This finds all the places where the running sum is greater than time. Then you can pick the first one (or 0) like this:
safeHead def xs = head (xs ++ [def])
test xs time = safeHead 0 (test_ xs time)
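On the sample input this behaves as expected:
> test_ [4,1,5,6] 6
[5,6]
> test [4,1,5,6] 6
5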

This is verbose, and I don't necessarily recommend writing such a simple function like this (IMO the pattern matching & recursion is plenty clear). But, here's a pretty declarative pipeline:
import Control.Error
import Data.List

deadline :: (Num a, Ord a) => a -> [a] -> a
deadline time = fromMaybe 0 . findDeadline time

findDeadline :: (Num a, Ord a) => a -> [a] -> Maybe a
findDeadline time xs = decayWithDifferences time xs
                   >>= findIndex (< 0)
                   >>= atMay xs

decayWithDifferences :: Num b => b -> [b] -> Maybe [b]
decayWithDifferences time = tailMay . scanl (-) time

-- > deadline 6 [4, 1, 5, 6]
-- 5
This documents the code a bit and in principle lets you test a little better, though IMO these functions fit more-or-less into the 'obviously correct' category.
You can verify that it matches your implementation:
import Test.QuickCheck
prop_equality :: Int -> [Int] -> Bool
prop_equality time xs = test xs time == deadline time xs
-- > quickCheck prop_equality
-- +++ OK, passed 100 tests.

In this particular case the zipping suggested by others is not quite necessary:
test xs time = head $ [y-x | (x:y:_) <- tails $ scanl1 (+) $ 0:xs, y > time]++[0]
Here scanl1 will produce a list of rolling sums of the list xs, starting it with 0. Therefore, tails will produce a list with at least one list having two elements for non-empty xs. Pattern-matching (x:y:_) extracts two elements from each tail of rolling sums, so in effect it enumerates pairs of neighbouring elements in the list of rolling sums. Filtering on the condition, we reconstruct a part of the list that starts with the first element that produces a rolling sum greater than time. Then use headDef 0 as suggested before, or append a [0], so that head always returns something.
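To illustrate with the sample input:
λ> scanl1 (+) (0:[4,1,5,6])
[0,4,5,10,16]
λ> [y-x | (x:y:_) <- tails (scanl1 (+) (0:[4,1,5,6])), y > 6]
[5,6]
so head of that list (with [0] appended) gives 5.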

If you want to retain readability, I would just stick with your current solution. It's easy to understand, and isn't doing anything wrong.
Just because you can make it into a one line scan map fold mutant doesn't mean that you should!

Related

Breaking a list into sublists of a specified size using foldr

I'm taking a functional programming class and I'm having a hard time leaving the OOP mindset behind and finding answers to a lot of my questions.
I have to create a function that takes an ordered list and converts it into specified size sublists using a variation of fold.
This isn't right, but it's what I have:
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | [condition] = foldr (\item subList -> item:subList) [] xs
  | otherwise =
I've been searching and I found out that foldr is the variation that works better for what I want, and I think I've understood how fold works; I just don't know how to set up the guards so that when length sublist == size Haskell resets the accumulator and goes on to the next list.
If I didn't explain myself correctly, here's the result I want:
> splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Thanks!
While Fabián's and chi's answers are entirely correct, there is actually an option to solve this puzzle using foldr. Consider the following code:
splitList :: Int -> [a] -> [[a]]
splitList n =
  foldr (\el acc -> case acc of
                      [] -> [[el]]
                      (h : t) | length h < n -> (el : h) : t
                      _ -> [el] : acc
        ) []
The strategy here is to build up a list by extending its head as long as its length is less than desired. This solution has, however, two drawbacks:
It does something slightly different than in your example;
splitList 3 [1..10] produces [[1],[2,3,4],[5,6,7],[8,9,10]]
Its complexity is O(n * length l), as we measure the length of an up-to-n-sized list for each element, which yields a linear number of linear operations.
Let's first take care of the first issue. In order to start counting at the beginning we need to traverse the list left-to-right, while foldr does it right-to-left. There is a common trick called "continuation passing" which will allow us to reverse the direction of the walk:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse $
  foldr (\el cont acc ->
          case acc of
            [] -> cont [[el]]
            (h : t) | length h < n -> cont ((el : h) : t)
            _ -> cont ([el] : acc)
        ) id l []
Here, instead of building the list in the accumulator we build up a function that will transform the list in the right direction. See this question for details. The side effect is reversing the list, so we need to counter that by reversing both the whole list and all of its elements. This goes linearly and tail-recursively, though.
Now let's work on the performance issue. The problem was that length is linear on plain lists. There are two solutions for this:
Use another structure that caches length for a constant time access
Cache the value by ourselves
Because I guess it is a list exercise, let's go for the latter option:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse . snd $
  foldr (\el cont (countAcc, listAcc) ->
          case listAcc of
            [] -> cont (countAcc, [[el]])
            (h : t) | countAcc < n -> cont (countAcc + 1, (el : h) : t)
            (h : t) -> cont (1, [el] : (h : t))
        ) id l (1, [])
Here we extend our computational state with a counter that at each point stores the current length of the head sublist. This gives us a constant-time check per element and results in linear time complexity in the end.
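Trying it out, the final version produces the grouping from the question:
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]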
A way to simplify this problem would be to split this into multiple functions. There are two things you need to do:
take n elements from the list, and
keep taking from the list as much as possible.
Let's try taking first:
taking :: Int -> [a] -> [a]
taking n [] = undefined
taking n (x:xs) = undefined
If there are no elements then we cannot take any more, so we can only return an empty list; on the other hand, if we do have an element then we can think of taking n (x:xs) as x : taking (n-1) xs; we would only need to check that n > 0.
taking n (x:xs)
  | n > 0 = x : taking (n-1) xs
  | otherwise = []
Now, we need to do that multiple times with the remainder, so we should probably also return whatever remains after taking n elements from the list; in this case that is whatever remains when n = 0, so we could try to adapt it to
| otherwise = ([], x:xs)
and then you would need to modify the type signature to return ([a], [a]) and the other 2 definitions to ensure you do return whatever remained after taking n.
With this approach your splitList would look like:
splitList n [] = []
splitList n l = chunk : splitList n remainder
  where (chunk, remainder) = taking n l
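For reference, one possible way to fill in the holes, with the ([a], [a]) return type described above (a sketch, not the only way to write it):
taking :: Int -> [a] -> ([a], [a])
taking _ [] = ([], [])
taking n (x:xs)
  | n > 0 = let (chunk, rest) = taking (n - 1) xs
            in (x : chunk, rest)
  | otherwise = ([], x:xs)
With this, splitList 3 [1..10] gives [[1,2,3],[4,5,6],[7,8,9],[10]].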
Note however that folding would not be appropriate here, since it "flattens" whatever you are working on; for example, given an [Int] you could fold to produce a sum, which would be an Int. (foldr :: (a -> b -> b) -> b -> [a] -> b, or "foldr function zero list produces an element of the function's return type".)
You want:
splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Since the "remainder" [10] in on the tail, I recommend you use foldl instead. E.g.
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | size > 0 = foldl go [] xs
  | otherwise = error "need a positive size"
  where go acc x = ....
What should go do? Essentially, on your example, we must have:
splitList 3 [1..10]
= go (splitList 3 [1..9]) 10
= go [[1,2,3],[4,5,6],[7,8,9]] 10
= [[1,2,3],[4,5,6],[7,8,9],[10]]
splitList 3 [1..9]
= go (splitList 3 [1..8]) 9
= go [[1,2,3],[4,5,6],[7,8]] 9
= [[1,2,3],[4,5,6],[7,8,9]]
splitList 3 [1..8]
= go (splitList 3 [1..7]) 8
= go [[1,2,3],[4,5,6],[7]] 8
= [[1,2,3],[4,5,6],[7,8]]
and
splitList 3 [1]
= go [] 1
= [[1]]
Hence, go acc x should
check if acc is empty, if so, produce a singleton list [[x]].
otherwise, check the last list in acc:
if its length is less than size, append x
otherwise, append a new list [x] to acc
Try doing this by hand on your example to understand all the cases.
This will not be efficient, but it will work.
You don't really need the Ord a constraint.
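Spelled out, go could look like this (a sketch following the cases above, with the Ord constraint dropped; inefficient because of the repeated last, init and ++, as noted):
splitList :: Int -> [a] -> [[a]]
splitList size xs
  | size > 0 = foldl go [] xs
  | otherwise = error "need a positive size"
  where
    go acc x
      | null acc = [[x]]                                           -- first element starts a list
      | length (last acc) < size = init acc ++ [last acc ++ [x]]   -- room left: extend the last chunk
      | otherwise = acc ++ [[x]]                                   -- last chunk full: start a new one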
Checking the accumulator's first sublist's length would lead to information flow from the right, with the first chunk potentially ending up the shorter one instead of the last. Such a function won't work on infinite lists either (not to mention the foldl-based variants).
A standard way to arrange for the information flow from the left with foldr is using an additional argument. The general scheme is
subLists n xs = foldr g z xs n
  where
    g x r i = cons x i (r (i-1))
    ....
The i argument to cons will guide its decision as to where to add the current element. The i-1 decrements the counter on the way forward from the left, instead of on the way back from the right. z must have the same type as r and as the foldr expression as a whole, so,
z _ = [[]]
This means there must be a post-processing step, and some edge cases must be handled as well,
subLists n xs = post . foldr g z xs $ n
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i = cons x i (r (i-1))
    ....
cons must be lazy enough not to force the results of the recursive call prematurely.
I leave it as an exercise finishing this up.
For a simpler version with a pre-processing step instead, see this recent answer of mine.
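For the curious, here is one way the exercise might be finished under the scheme above, assuming n > 0; this is a sketch of mine, not necessarily the intended solution:
subLists :: Int -> [a] -> [[a]]
subLists n xs = post $ foldr g z xs n
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i = cons x i (r (i-1))
    -- i == 1: x closes its group; the elements to its right went into (r n)
    cons x 1 rest = [x] : rest
    -- otherwise x joins the group currently being built at the front
    cons x _ ~(h : t) = (x : h) : t
    post = filter (not . null)   -- drop the empty group left over from z
A quick check: subLists 3 [1..7] gives [[1,2,3],[4,5,6],[7]], and the lazy pattern in cons keeps it productive on infinite lists.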
Just going to give another answer: this is quite similar to trying to write groupBy as a fold, and actually has a couple gotchas w.r.t. laziness that you have to bear in mind for an efficient and correct implementation. The following is the fastest version I found that maintains all the relevant laziness properties:
splitList :: Int -> [a] -> [[a]]
splitList m xs = snd (foldr f (const ([],[])) xs 1)
  where
    f x a i
      | i <= 1 = let (ys,zs) = a m in ([], (x : ys) : zs)
      | otherwise = let (ys,zs) = a (i-1) in (x : ys , zs)
The ys and the zs gotten from the recursive processing of the rest of list indicate the first and the rest of the groups into which the rest of the list will be broken up, by said recursive processing. So we either prepend the current element before that first subgroup if it is still shorter than needed, or we prepend before the first subgroup when it is just right and start a new, empty subgroup.
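Trying it out, including an infinite input (which I'd expect to work, given the laziness properties discussed above):
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]
> take 2 (splitList 3 [1..])
[[1,2,3],[4,5,6]]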

A faster way of generating combinations with a given length, preserving the order

TL;DR: I want the exact behavior of filter ((== 4) . length) . subsequences. Just using subsequences also creates lists of variable length, which takes a lot of time to process. Since in the end only lists of length 4 are needed, I was thinking there must be a faster way.
I have a list of functions. The list has the type [Wor -> Wor]
The list looks something like this
[f1, f2, f3 .. fn]
What I want is a list of lists of n functions while preserving order like this
input : [f1, f2, f3 .. fn]
argument : 4 functions
output : A list of lists of 4 functions.
Expected output: if there's an f1 in a sublist, it'll always be at the head of that sublist.
If there's an f2 in the sublist and the sublist doesn't have f1, f2 would be at the head. If fn is in the sublist, it'll be last.
In general, if there's an fx in the list, it will never be in front of f(x - 1).
Basically, preserve the main list's order when generating the sublists.
It can be assumed that the length of the list will always be greater than the given argument.
I'm just starting to learn Haskell so I haven't tried all that much but so far this is what I have tried is this:
Generating permutations with the subsequences function and applying filter ((== 4) . length) to it seems to generate correct permutations -but it doesn't preserve order- (It preserves order, I was confusing it with my own function).
So what should I do?
Also if possible, is there a function or a combination of functions present in Hackage or Stackage which can do this? Because I would like to understand the source.
You describe a nondeterministic take:
ndtake :: Int -> [a] -> [[a]]
ndtake 0 _ = [[]]
ndtake n [] = []
ndtake n (x:xs) = map (x:) (ndtake (n-1) xs) ++ ndtake n xs
Either we take an x, and have n-1 more to take from xs; or we don't take the x and have n more elements to take from xs.
Running:
> ndtake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Update: you wanted efficiency. If we're sure the input list is finite, we can aim at stopping as soon as possible:
ndetake n xs = go (length xs) n xs
  where
    go spare n _  | n > spare  = []
    go spare n xs | n == spare = [xs]
    go spare 0 _  = [[]]
    go spare n [] = []
    go spare n (x:xs) = map (x:) (go (spare-1) (n-1) xs)
                        ++ go (spare-1) n xs
Trying it:
> length $ ndetake 443 [1..444]
444
The former version seems to be stuck on this input, but the latter one returns immediately.
But it measures the length of the whole list, and needlessly so, as pointed out by dfeuer in the comments. We can achieve the same improvement in efficiency while retaining a bit more laziness:
ndzetake :: Int -> [a] -> [[a]]
ndzetake n xs | n > 0 =
    go n (length (take n xs) == n) (drop n xs) xs
  where
    go n b p ~(x:xs)
      | n == 0 = [[]]
      | not b = []
      | null p = [(x:xs)]
      | otherwise = map (x:) (go (n-1) b p xs)
                    ++ go n b (tail p) xs
Now the last test also works instantly with this code as well.
There's still room for improvement here. Just as with the library function subsequences, the search space could be explored even more lazily. Right now we have
> take 9 $ ndzetake 3 [1..]
[[1,2,3],[1,2,4],[1,2,5],[1,2,6],[1,2,7],[1,2,8],[1,2,9],[1,2,10],[1,2,11]]
but it could be finding [2,3,4] before forcing the 5 out of the input list. Shall we leave it as an exercise?
Here's the best I've been able to come up with. It answers the challenge Will Ness laid down to be as lazy as possible in the input. In particular, ndtake m ([1..n]++undefined) will produce as many entries as possible before throwing an exception. Furthermore, it strives to maximize sharing among the result lists (note the treatment of end in ndtakeEnding'). It avoids problems with badly balanced list appends using a difference list. This sequence-based version is considerably faster than any pure-list version I've come up with, but I haven't teased apart just why that is. I have the feeling it may be possible to do even better with a better understanding of just what's going on, but this seems to work pretty well.
Here's the general idea. Suppose we ask for ndtake 3 [1..5]. We first produce all the results ending in 3 (of which there is one). Then we produce all the results ending in 4. We do this by (essentially) calling ndtake 2 [1..3] and adding the 4 onto each result. We continue in this manner until we have no more elements.
{-# LANGUAGE BangPatterns #-} -- for the strict pattern in ndtakeEnding' below

import qualified Data.Sequence as S
import Data.Sequence (Seq, (|>))
import Data.Foldable (toList)
We will use the following simple utility function. It's almost the same as splitAtExactMay from the 'safe' package, but hopefully a bit easier to understand. For reasons I haven't investigated, letting this produce a result when its argument is negative leads to ndtake with a negative argument being equivalent to subsequences. If you want, you can easily change ndtake to do something else for negative arguments.
splitAtMay :: Int -> [a] -> Maybe ([a], [a])
splitAtMay n xs
  | n <= 0 = Just ([], xs)
splitAtMay _ [] = Nothing
splitAtMay n (x : xs) = flip fmap (splitAtMay (n - 1) xs) $
  \(front, rear) -> (x : front, rear)
Now we really get started. ndtake is implemented using ndtakeEnding, which produces a sort of "difference list", allowing all the partial results to be concatenated cheaply.
ndtake :: Int -> [t] -> [[t]]
ndtake n xs = ndtakeEnding n xs []

ndtakeEnding :: Int -> [t] -> ([[t]] -> [[t]])
ndtakeEnding 0 _xs = ([]:)
ndtakeEnding n xs = case splitAtMay n xs of
    Nothing -> id -- Not enough elements
    Just (front, rear) ->
      (front :) . go rear (S.fromList front)
  where
    -- For each element, produce a list of all combinations
    -- *ending* with that element.
    go [] _front = id
    go (r : rs) front =
      ndtakeEnding' [r] (n - 1) front
      . go rs (front |> r)
ndtakeEnding doesn't call itself recursively. Rather, it calls ndtakeEnding' to calculate the combinations of the front part. ndtakeEnding' is very much like ndtakeEnding, but with a few differences:
We use a Seq rather than a list to represent the input sequence. This lets us split and snoc cheaply, but I'm not yet sure why that seems to give amortized performance that is so much better in this case.
We already know that the input sequence is long enough, so we don't need to check.
We're passed a tail (end) to add to each result. This lets us share tails when possible. There are lots of opportunities for sharing tails, so this can be expected to be a substantial optimization.
We use foldr rather than pattern matching. Doing this manually with pattern matching gives clearer code, but worse constant factors. That's because the :<| and :|> patterns exported from Data.Sequence are non-trivial pattern synonyms that perform a bit of calculation, including amortized O(1) allocation, to build the tail or initial segment, whereas folds don't need to build those.
NB: this implementation of ndtakeEnding' works well for recent GHC and containers; it seems less efficient for earlier versions. That might be the work of Donnacha Kidney on foldr for Data.Sequence. In earlier versions, it might be more efficient to pattern match by hand, using viewl for versions that don't offer the pattern synonyms.
ndtakeEnding' :: [t] -> Int -> Seq t -> ([[t]] -> [[t]])
ndtakeEnding' end 0 _xs = (end:)
ndtakeEnding' end n xs = case S.splitAt n xs of
    (front, rear) ->
      ((toList front ++ end) :) . go rear front
  where
    go = foldr go' (const id)
      where
        go' r k !front = ndtakeEnding' (r : end) (n - 1) front . k (front |> r)
    -- With patterns, a bit less efficiently:
    -- go Empty _front = id
    -- go (r :<| rs) !front =
    --   ndtakeEnding' (r : end) (n - 1) front
    --   . go rs (front :|> r)

Check if a list of lists has two or more identical elements

I need to write a function which checks if a list has two or more same elements and returns true or false.
For example [3,3,6,1] should return true, but [3,8] should return false.
Here is my code:
identical :: [Int] -> Bool
identical x = (\n-> filter (>= 2) n )( group x )
I know this is bad, and it does not work.
I wanted to group the list into a list of lists, and if the length of a list is >= 2, then it should return true, otherwise false.
Use any to get a Bool result.
any (...) (group x)
Don't forget to sort the list; group works on consecutive elements.
any (...) (group (sort x))
You can use (not . null . tail) for a predicate, as one of the options.
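Putting those pieces together, the whole thing could read (one way to fill in the dots):
import Data.List (group, sort)

identical :: [Int] -> Bool
identical x = any (not . null . tail) (group (sort x))
which behaves as asked:
λ> identical [3,3,6,1]
True
λ> identical [3,8]
False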
Just yesterday I posted a similar algorithm here. A possible way to go about it is,
generate the sequence of cumulative sets of elements
{}, {x0}, {x0,x1}, {x0,x1,x2} ...
pair the original sequence of elements with the cumulative sets
x0, x1 , x2 , x3 ...
{}, {x0}, {x0,x1}, {x0,x1,x2} ...
check repeated insertions, i.e.
xi such that xi ∈ {x0..xi-1}
This can be implemented, for instance, via the functions below.
First we use scanl to iteratively add the elements of the list to a set, producing the cumulative sequence of these iterations.
import Data.Set (Set, empty, insert)

sets :: [Int] -> [Set Int]
sets = scanl (\s x -> insert x s) empty
Then we zip the original list with this sequence, so each xi is paired with {x0...xi-1}.
elsets :: [Int] -> [(Int, Set Int)]
elsets xs = zip xs (sets xs)
Finally we use find to search for an element that is "about to be inserted" in a set which already contains it. The function find returns the pair element / set, and we pattern match to keep only the element, and return it.
import Data.List (find)

result :: [Int] -> Maybe Int
result xs = do (x, _) <- find (\(y, s) -> y `elem` s) (elsets xs)
               return x
Another way to do it, using Data.Map as below, is no more efficient than the group . sort solution; it is still O(n log n), but it is able to work with infinite lists.
import Data.Map.Lazy as Map (empty, lookup, insert)
identical :: [Int] -> Bool
identical = loop Map.empty
  where loop _ [] = False
        loop m (x:xs) = if Map.lookup x m == Nothing
                          then loop (insert x 0 m) xs
                          else True
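For example (on an infinite input it returns as soon as the first repeat is seen):
λ> identical [3,3,6,1]
True
λ> identical [3,8]
False
λ> identical (cycle [1,2,3])
True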
OK, basically this is one of the rare cases where you really need sort for efficiency. In fact the Data.List.Unique package has a repeated function just for this job, and if the source is checked one can see that the sort and group strategy was chosen there. I guess this is not the most efficient algorithm. I will come to how we can make sort even more efficient, but for the time being let's enjoy ourselves a little, since this is a nice question.
So we have the tails :: [a] -> [[a]] function in the Data.List package. Accordingly:
*Main> tails [3,3,6,1]
[[3,3,6,1],[3,6,1],[6,1],[1],[]]
As you may quickly notice, we can zipWith the tail of the tails list, which is [[3,6,1],[6,1],[1],[]], with the given original list by applying a function to check if all items are different. This function could be a list comprehension or simply the all :: Foldable t => (a -> Bool) -> t a -> Bool function. The thing is, I would like to short-circuit zipWith so that once I meet the first dupe we stop zipWith doing wasteful work checking the rest. For this purpose I can use the monadic version of zipWith, namely zipWithM :: Applicative m => (a -> b -> m c) -> [a] -> [b] -> m [c], which lives in the Control.Monad package. The reason being, from its type signature we understand that it shall stop calculating any further when it encounters a Nothing or a Left, if my monad happens to be Maybe or Either.
Oh..! In Haskell I also love to use the bool :: a -> a -> Bool -> a function instead of if and then. bool is the ternary operation of Haskell, which goes like
bool "work time" "coffee break" isCoffeeTime
The negative choice is on the left and the positive one is on the right, where isCoffeeTime :: Bool is True if it is coffee time. Very composable as well.. so cool..!
So since we now have all the background knowledge we may proceed with the code
import Control.Monad (zipWithM)
import Data.List (tails)
import Data.Bool (bool)

anyDupe :: Eq a => [a] -> Either a [a]
anyDupe xs = zipWithM f xs ts
  where ts = tail $ tails xs
        f = \x t -> bool (Left x) (Right x) $ all (x /=) t
*Main> anyDupe [1,2,3,4,5]
Right [1,2,3,4,5] -- no dupes so we get the `Right` with the original list
*Main> anyDupe [3,3,6,1]
Left 3 -- here we have the first duplicate since zipWithM short circuits.
*Main> anyDupe $ 10^7:[1..10^7]
Left 10000000 -- wow zipWithM worked and returned reasonably fast.
But again, as I said, this is still a naive approach because theoretically we are doing n(n+1)/2 operations. Yes, zipWithM cuts the redundancy down greatly if the first dupe is close to the head, but still this algorithm is O(n^2).
I believe it would be best to use the heavenly sort algorithm of Haskell (which is not merge sort as we know it by the way) in this particular case.
Now the algorithm award goes to -> drum roll here -> sort and fold -> applause. Sorry no grouping.
So now... once again we will use a monadic trick to utilize short circuits. We will use foldM :: (Foldable t, Monad m) => (b -> a -> m b) -> b -> t a -> m b. This, when used with the Either monad, also allows us to return a more meaningful result. OK, let's do it. Any Left n means n is the first dupe and no more calculations are done, while any Right _ means there are no dupes.
import Control.Monad (foldM)
import Data.List (sort)
import Data.Bool (bool)

anyDupe' :: (Eq a, Ord a, Enum a) => [a] -> Either a a
anyDupe' xs = foldM f i $ sort xs
  where i = succ $ head xs -- prevent the initial value from being equal to the value at the head
        f = \b a -> bool (Left a) (Right a) (a /= b)
*Main> anyDupe' [1,2,3,4,5]
Right 5
*Main> anyDupe' [3,3,6,1]
Left 3
*Main> anyDupe' $ 1:[10^7,(10^7-1)..1]
Left 1
(2.97 secs, 1,040,110,448 bytes)
*Main> anyDupe $ 1:[10^7,(10^7-1)..1]
Left 1
(2.94 secs, 1,440,112,888 bytes)
*Main> anyDupe' $ [1..10^7]++[10^7]
Left 10000000
(5.71 secs, 3,600,116,808 bytes) -- winner by far
*Main> anyDupe $ [1..10^7]++[10^7] -- don't try this at home, it's a waste of energy
In real-world scenarios anyDupe' should always be the winner.

Haskell - split a list into two sublists with closest sums

I'm a Haskell beginner trying to learn more about the language by solving some online quizzes/problem sets.
The problem/question is quite lengthy but a part of it requires code that can find the number which divides a given list into two (nearly) equal (by sum) sub-lists.
Given [1..10]
Answer should be 7 since 1+2+..7 = 28 & 8+9+10 = 27
This is the way I implemented it
-- partitions list by y
partishner :: (Floating a) => Int -> [a] -> [[[a]]]
partishner 0 xs = [[xs],[]]
partishner y xs = [take y xs : [drop y xs]] ++ partishner (y - 1) xs
-- finds the equal sum
findTheEquilizer :: (Ord a, Floating a) => [a] -> [[a]]
findTheEquilizer xs = fst $ minimumBy (comparing snd) zipParty
  where party = (tail . init) (partishner (length xs) xs) -- removes [xs,[]] types
        afterParty = (map (\[x, y] -> (x - y) ** 2) . init . map (map sum)) party
        zipParty = zip party afterParty -- zips partitions and squared diff between their sums
Given (last . head) (findTheEquilizer [1..10])
output : 7
For numbers near 50k it works fine
λ> (last . head) (findTheEquilizer [1..10000])
7071.0
The trouble starts when I put in lists with any more than 70k elements in it. It takes forever to compute.
So what do I have to change in the code to make it run better, or do I have to change my whole approach? I'm guessing it's the latter, but I'm not sure how to go about doing that.
It looks to me like the implementation is quite chaotic. For example, partishner seems to construct a list of lists of lists of a, where, if I understood it correctly, the outer list contains lists with two elements each: the list of elements on "the left", and the list of elements on "the right". As a result, this will take O(n^2) to construct the lists.
By using lists over 2-tuples, this is also quite "unsafe", since a list can - although here probably impossible - contain no elements, one element, or more than two elements. If you make a mistake in one of the functions, it will be hard to find out that mistake.
It looks to me that it might be easier to implement a "sweep algorithm": we first calculate the sum of all the elements in the list. This is the value on the "right" in case we decide to split at that specific point, next we start moving from left to right, each time subtracting the element from the sum on the right, and adding it to the sum on the left. We can each time evaluate the difference in score, like:
import Data.List (unfoldr)

sweep :: Num a => [a] -> [(Int, a, [a])]
sweep lst = x0 : unfoldr f x0
  where x0 = (0, sum lst, lst)
        f (_, _, []) = Nothing
        f (i, r, (x:xs)) = Just (l, l)
          where l = (i+1, r-2*x, xs)
For example:
Prelude Data.List> sweep [1,4,2,5]
[(0,12,[1,4,2,5]),(1,10,[4,2,5]),(2,2,[2,5]),(3,-2,[5]),(4,-12,[])]
So if we select to split at the first split point (before the first element), the sum on the right is 12 higher than the sum on the left, if we split after the first element, the sum on the right (11) is 10 higher than the sum on the left (1).
We can then obtain the minimum of these splits with minimumBy :: (a -> a -> Ordering) -> [a] -> a:
import Data.List (minimumBy)
import Data.Ord (comparing)

findTheEquilizer :: (Ord a, Num a) => [a] -> ([a], [a])
findTheEquilizer lst = (take idx lst, tl)
  where (idx, _, tl) = minimumBy (comparing (abs . \(_, x, _) -> x)) (sweep lst)
We then obtain the correct value for [1..10]:
Prelude Data.List Data.Ord Data.List> findTheEquilizer [1..10]
([1,2,3,4,5,6,7],[8,9,10])
or for 70'000:
Prelude Data.List Data.Ord Data.List> head (snd (findTheEquilizer [1..70000]))
49498
The above is not ideal, it can be implemented more elegantly, but I leave this as an exercise.
Okay, first let's analyse why it runs forever (...actually not forever, just slowly); take a look at the partishner function:
partishner y xs = [take y xs : [drop y xs]] ++ partishner (y - 1) xs
where take y xs and drop y xs run in linear time, i.e. O(N), and so
[take y xs : [drop y xs]]
is O(N) too.
However, it is run again and again, recursively, over each element of the given list. Now suppose the length of the given list is M; each call of the partishner function takes O(N) time, so to finish the computation we need:
O(1+2+...+M) = O(M(1+M)/2) ~ O(M^2)
Now, the list has 70k elements, so it needs at least 70k^2 steps. That is why it hangs.
Instead of using the partishner function, you can sum the list in linear time as:
sumList :: (Floating a) => [a] -> [a]
sumList xs = go 0 xs
  where go _ [] = []
        go s (y:ys) = let s' = s + y in s' : go s' ys
and findEquilizer just sums the given list from left to right (leftSum) and from right to left (rightSum) and takes the result just as your original program does, but the whole process takes only linear time.
findEquilizer :: (Ord a, Floating a) => [a] -> a
findEquilizer [] = 0
findEquilizer xs =
  let leftSum = reverse $ 0 : sumList (init xs)
      rightSum = sumList $ reverse xs
      afterParty = zipWith (\x y -> (x-y) ** 2) leftSum rightSum
  in fst $ minimumBy (comparing snd) (zip (reverse $ init xs) afterParty)
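A quick check against the expected answer (the result defaults to Double here because of the Floating constraint):
λ> findEquilizer [1..10]
7.0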
I assume that none of the list elements are negative, and use a "tortoise and hare" approach. The hare steps through the list, adding up elements. The tortoise does the same thing, but it keeps its sum doubled and it carefully ensures that it only takes a step when that step won't put it ahead of the hare.
approxEqualSums
  :: (Num a, Ord a)
  => [a] -> (Maybe a, [a])
approxEqualSums as0 = stepHare 0 Nothing as0 0 as0
  where
    -- ht is the current best guess.
    stepHare _tortoiseSum ht tortoise _hareSum []
      = (ht, tortoise)
    stepHare tortoiseSum ht tortoise hareSum (h:hs)
      = stepTortoise tortoiseSum ht tortoise (hareSum + h) hs

    stepTortoise tortoiseSum ht [] hareSum hare
      = stepHare tortoiseSum ht [] hareSum hare
    stepTortoise tortoiseSum ht tortoise@(t:ts) hareSum hare
      | tortoiseSum' <= hareSum
      = stepTortoise tortoiseSum' (Just t) ts hareSum hare
      | otherwise
      = stepHare tortoiseSum ht tortoise hareSum hare
      where tortoiseSum' = tortoiseSum + 2*t
In use:
> approxEqualSums [1..10]
(Just 6,[7,8,9,10])
6 is the last element before going over half, and 7 is the first one after that.
I asked in the comments and OP says [1..n] is not really defining the question. Yes, I guess what's asked is like [1 -> n] in a random ascending sequence such as [1,3,7,19,37,...,1453,...,n].
Yet..! Even as per the given answers, for a list like [1..n] we really don't need to do any list operation at all.
The sum of [1..n] is n*(n+1)/2.
Which means we need to find m for n*(n+1)/4
Which means m(m+1)/2 = n*(n+1)/4.
So if n == 100 then m^2 + m - 5050 = 0
All we need is the quadratic formula m = (-b + sqrt(b^2 - 4ac)) / 2a, where a = 1, b = 1 and c = -5050, yielding the reasonable root to be 70.565 ⇒ 71 (rounded). Let's check: 71*72/2 = 2556 and 5050 - 2556 = 2494, which says 2556 - 2494 = 62 is the minimal difference (< 71). Yes, we must split at 71. So just do result = [[1..71],[72..100]] and it's over..!
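As a sketch, this closed-form split for [1..n] can be written directly (splitOneToN is just an illustrative name; for an airtight version you'd compare the floor and ceiling roots rather than just rounding):
-- solve m^2 + m - n(n+1)/2 = 0 for m and round
splitOneToN :: Int -> ([Int], [Int])
splitOneToN n = ([1..m], [m+1..n])
  where
    n' = fromIntegral n :: Double
    m = round ((sqrt (1 + 2 * n' * (n' + 1)) - 1) / 2)
So splitOneToN 100 splits at 71 and splitOneToN 10 splits at 7, matching the reasoning above.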
But when it comes to a non-consecutive ascending sequence, that's a different animal. It has to be done by first finding the sum and then, like a binary search, jumping halfway into the list and comparing the sums to decide whether to jump halfway back or forward accordingly. I will implement that one later.
Here's some code which is empirically behaving better than linear, and handles 2,000,000 elements in just over 1 second even when interpreted:
g :: (Ord c, Num c) => [c] -> [(Int, c)]
g = head . dropWhile ((> 0) . snd . last) . map (take 2) . tails . zip [1..]
  . (\xs -> zipWith (-) (map (last xs -) xs) xs) . scanl1 (+)
g [1..10] ==> [(6,13),(7,-1)] -- 0.0s
g [1..70000] ==> [(49497,32494),(49498,-66502)] -- 0.09s
g [70000,70000-1..1] ==> [(20502,66502),(20503,-32494)] -- 0.09s
g [1..100000] ==> [(70710,75190),(70711,-66232)] -- 0.11s
g [1..1000000] ==> [(707106,897658),(707107,-516556)] -- 0.62s
g [1..2000000] ==> [(1414213,1176418),(1414214,-1652010)] -- 1.14s n^0.88
g [1..3000000] ==> [(2121320,836280),(2121321,-3406362)] -- 1.65s n^0.91
It works by running the partial sums with scanl1 (+) and taking the total sum as its last, so that for each partial sum, subtracting it from the total gives us the sum of the second part of the split.
The algorithm assumes all the numbers in the input list are strictly positive, so the partial sums list is monotonically increasing. Nothing else is assumed about the numbers.
The value must be chosen from the pair (g's result) so that its second component's absolute value is the smaller of the two.
This is achieved by minimumBy (comparing (abs . snd)) . g.
clarifications: There's some confusion about "complexity" in the comments below, yet the answer says nothing at all about complexity but uses a specific empirical measurement. You can't argue with empirical data (unless you misinterpret its meaning).
The answer does not claim it "is better than linear", it says "it behaves better than linear" [in the tested range of problem sizes], which the empirical data incontrovertibly show.
Finally, an appeal to authority. Robert Sedgewick is an authority on algorithms. Take it up with him.
(and of course the algorithm handles unordered data as well as it does ordered).
As for the reasons for OP's code inefficiency: map sum . inits can't help being quadratic, but the equivalent scanl (+) 0 is linear. The radical improvement comes about from a lot of redundant calculations in the former being avoided in the latter. (Another example of this can be seen here.)

Need to partition a list into lists based on breaks in ascending order of elements (Haskell)

Say I have any list like this:
[4,5,6,7,1,2,3,4,5,6,1,2]
I need a Haskell function that will transform this list into a list of lists which are composed of the segments of the original list which form a series in ascending order. So the result should look like this:
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
Any suggestions?
You can do this by resorting to manual recursion, but I like to believe Haskell is a more evolved language. Let's see if we can develop a solution that uses existing recursion strategies. First some preliminaries.
{-# LANGUAGE NoMonomorphismRestriction #-}
-- because who wants to write type signatures, amirite?
import Data.List.Split -- from package split on Hackage
Step one is to observe that we want to split the list based on a criteria that looks at two elements of the list at once. So we'll need a new list with elements representing a "previous" and "next" value. There's a very standard trick for this:
previousAndNext xs = zip xs (drop 1 xs)
However, for our purposes, this won't quite work: this function always outputs a list that's shorter than the input, and we will always want a list of the same length as the input (and in particular we want some output even when the input is a list of length one). So we'll modify the standard trick just a bit with a "null terminator".
pan xs = zip xs (map Just (drop 1 xs) ++ [Nothing])
Now we're going to look through this list for places where the previous element is bigger than the next element (or the next element doesn't exist). Let's write a predicate that does that check.
bigger (x, y) = maybe False (x >) y
Now let's write the function that actually does the split. Our "delimiters" will be values that satisfy bigger; and we never want to throw them away, so let's keep them.
ascendingTuples = split . keepDelimsR $ whenElt bigger
The final step is just to throw together the bit that constructs the tuples, the bit that splits the tuples, and a last bit of munging to throw away the bits of the tuples we don't care about:
ascending = map (map fst) . ascendingTuples . pan
Let's try it out in ghci:
*Main> ascending [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
*Main> ascending [7,6..1]
[[7],[6],[5],[4],[3],[2],[1]]
*Main> ascending []
[[]]
*Main> ascending [1]
[[1]]
P.S. In the current release of split, keepDelimsR is slightly stricter than it needs to be, and as a result ascending currently doesn't work with infinite lists. I've submitted a patch that makes it lazier, though.
ascend :: Ord a => [a] -> [[a]]
ascend xs = foldr f [] xs
  where
    f a [] = [[a]]
    f a xs'@(y:ys) | a < head y = (a:y):ys
                   | otherwise  = [a]:xs'
In ghci
*Main> ascend [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
This problem is a natural fit for a paramorphism-based solution. Having (as defined in that post)
para  :: (a -> [a] -> b -> b) -> b -> [a] -> b
para  c n (x : xs) = c x xs (para c n xs)
para  c n []       = n

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr c n (x : xs) = c x (foldr c n xs)
foldr c n []       = n
we can write
partition_asc xs = para c [] xs
  where
    c x (y:_) ~(a:b) | x < y = (x:a):b
    c x _ r = [x]:r
Trivial, since the abstraction fits.
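In ghci it reproduces the expected output:
*Main> partition_asc [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]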
BTW they have two kinds of map in Common Lisp - mapcar (processing elements of an input list one by one) and maplist (processing "tails" of a list). With this idea we get
import Data.List (tails)
partition_asc2 xs = foldr c [] . init . tails $ xs
  where
    c (x:y:_) ~(a:b) | x < y = (x:a):b
    c (x:_) r = [x]:r
Lazy patterns in both versions make it work with infinite input lists
in a productive manner (as first shown in Daniel Fischer's answer).
update 2020-05-08: not so trivial after all. Both head . head . partition_asc $ [4] ++ undefined and the same for partition_asc2 fail with *** Exception: Prelude.undefined. The combining function c forces the next element y prematurely. It needs to be more carefully written to be productive right away, before ever looking at the next element, as e.g. for the second version,
partition_asc2' xs = foldr c [] . init . tails $ xs
  where
    c (x:ys) r@(~(a:b)) = (x:g):gs
      where
        (g,gs) | not (null ys)
                 && x < head ys = (a,b)
               | otherwise = ([],r)
(again, as first shown in Daniel's answer).
You can use a right fold to break up the list at down-steps:
foldr foo [] xs
  where
    foo x yss = (x:zs) : ws
      where
        (zs, ws) = case yss of
          (ys@(y:_)) : rest
            | x < y -> (ys, rest)
            | otherwise -> ([], yss)
          _ -> ([], [])
(It's a bit complicated in order to have the combining function lazy in the second argument, so that it works well for infinite lists too.)
One other way of approaching this task (which, in fact, lays the foundations of a very efficient sorting algorithm) is using the Continuation Passing Style, a.k.a. CPS, which in this particular case is applied to folding from the right, i.e. foldr.
As is, this answer only chunks up the ascending chunks; however, it would be nice to chunk up the descending ones at the same time... preferably in reverse order, all in O(n), which would leave us with only binary merging of the obtained chunks for a perfectly sorted output. Yet that's another answer for another question.
chunks :: Ord a => [a] -> [[a]]
chunks xs = foldr go return xs $ []
  where
    go :: Ord a => a -> ([a] -> [[a]]) -> ([a] -> [[a]])
    go c f = \ps -> let (r:rs) = f [c]
                    in case ps of
                         [] -> r:rs
                         [p] -> if c > p then (p:r):rs else [p]:(r:rs)
*Main> chunks [4,5,6,7,1,2,3,4,5,6,1,2]
[[4,5,6,7],[1,2,3,4,5,6],[1,2]]
*Main> chunks [4,5,6,7,1,2,3,4,5,4,3,2,6,1,2]
[[4,5,6,7],[1,2,3,4,5],[4],[3],[2,6],[1,2]]
In the above code c stands for current and p is for previous; and again, remember we are folding from the right, so "previous" is actually the next item to process.