Count positive values - list

Here is my function. It checks for positive values, changes them to ones and sums them.
countPositive :: [Integer] -> Integer
countPositive xs = foldr (+) 0 $ map (^0) (filter (>0) xs)
Is there a better strategy to count positive values without using length but just foldr, map and filter?

Foldr doesn't seem right here. You want foldl' instead. This is my solution:
countPos :: (Num a, Ord a) => [a] -> Int
countPos = length . filter (> 0)
As you don't want to use length for some reason you would basically just reinvent it:
countPos xs = sum (1 <$ filter (> 0) xs)
or yet another method:
countPos = foldl' (\x _ -> succ x) 0 . filter (> 0)
There are lots and lots of ways to do this. If 100 people answered this post, chances are you'd get 100 different ways to do it, but the simplest is to use filter and length.

Sure, just directly count them with foldr:
countPositive = foldr (\n count -> if n > 0 then count + 1 else count) 0
Or reimplement length with foldr:
countPositive = foldr (const succ) 0 . filter (>0)
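For example (a quick GHCi check), either definition gives:
> countPositive [3, -1, 0, 7, -5]
2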

Related

Breaking a list into sublists of a specified size using foldr

I'm taking a functional programming class and I'm having a hard time leaving the OOP mindset behind and finding answers to a lot of my questions.
I have to create a function that takes an ordered list and converts it into specified size sublists using a variation of fold.
This isn't right, but it's what I have:
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | [condition] = foldr (\item subList -> item:subList) [] xs
  | otherwise =
I've been searching and found out that foldr is the variation that works better for what I want, and I think I've understood how fold works; I just don't know how to set up the guards so that when length sublist == size Haskell resets the accumulator and goes on to the next list.
If I didn't explain myself correctly, here's the result I want:
> splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Thanks!
While Fabián's and chi's answers are entirely correct, there is actually an option to solve this puzzle using foldr. Consider the following code:
splitList :: Int -> [a] -> [[a]]
splitList n =
  foldr (\el acc -> case acc of
                      []                     -> [[el]]
                      (h : t) | length h < n -> (el : h) : t
                      _                      -> [el] : acc
        ) []
The strategy here is to build up a list by extending its head as long as its length is less than desired. This solution has, however, two drawbacks:
It does something slightly different than in your example;
splitList 3 [1..10] produces [[1],[2,3,4],[5,6,7],[8,9,10]]
Its complexity is O(n * length l), as we measure the length of an up-to-n-sized list for each element, which yields a linear number of linear operations.
Let's first take care of the first issue. In order to start counting at the beginning we need to traverse the list left-to-right, while foldr does it right-to-left. There is a common trick called "continuation passing" which will allow us to reverse the direction of the walk:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse $
  foldr (\el cont acc ->
           case acc of
             []                     -> cont [[el]]
             (h : t) | length h < n -> cont ((el : h) : t)
             _                      -> cont ([el] : acc)
        ) id l []
Here, instead of building the list in the accumulator we build up a function that will transform the list in the right direction. See this question for details. The side effect is that the list comes out reversed, so we need to counter that by applying reverse to the whole list and to each of its elements. This extra pass is linear and tail-recursive, though.
Now let's work on the performance issue. The problem was that length is linear on plain lists. There are two solutions for this:
Use another structure that caches length for a constant time access
Cache the value by ourselves
Because I guess it is a list exercise, let's go for the latter option:
splitList :: Int -> [a] -> [[a]]
splitList n l = map reverse . reverse . snd $
  foldr (\el cont (countAcc, listAcc) ->
           case listAcc of
             []                      -> cont (countAcc, [[el]])
             (h : t) | countAcc < n  -> cont (countAcc + 1, (el : h) : t)
             (h : t)                 -> cont (1, [el] : (h : t))
        ) id l (1, [])
Here we extend our computational state with a counter that at each point stores the current length of the head list. This gives us a constant-time check per element and results in linear time complexity overall.
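A quick check that this version matches the example from the question:
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]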
A way to simplify this problem would be to split this into multiple functions. There are two things you need to do:
take n elements from the list, and
keep taking from the list as much as possible.
Let's try taking first:
taking :: Int -> [a] -> [a]
taking n [] = undefined
taking n (x:xs) = undefined
If there are no elements, then we cannot take any more, so we can only return an empty list. On the other hand, if we do have an element, then we can think of taking n (x:xs) as x : taking (n-1) xs; we only need to check that n > 0.
taking n (x:xs)
  | n > 0     = x : taking (n-1) xs
  | otherwise = []
Now, we need to do that multiple times with the remainder, so we should probably also return whatever remains after taking n elements from the list; in this case that is whatever is left when n = 0, so we could adapt it to
| otherwise = ([], x:xs)
and then you would need to modify the type signature to return ([a], [a]) and the other 2 definitions to ensure you do return whatever remained after taking n.
With this approach your splitList would look like:
splitList n [] = []
splitList n l  = chunk : splitList n remainder
  where (chunk, remainder) = taking n l
Note however that folding would not be appropriate since it "flattens" whatever you are working on, for example given a [Int] you could fold to produce a sum which would be an Int. (foldr :: (a -> b -> b) -> b -> [a] -> b or "foldr function zero list produces an element of the function return type")
You want:
splitList 3 [1..10]
> [[1,2,3],[4,5,6],[7,8,9],[10]]
Since the "remainder" [10] is at the tail, I recommend you use foldl instead. E.g.
splitList :: (Ord a) => Int -> [a] -> [[a]]
splitList size xs
  | size > 0  = foldl go [] xs
  | otherwise = error "need a positive size"
  where go acc x = ....
What should go do? Essentially, on your example, we must have:
splitList 3 [1..10]
= go (splitList 3 [1..9]) 10
= go [[1,2,3],[4,5,6],[7,8,9]] 10
= [[1,2,3],[4,5,6],[7,8,9],[10]]
splitList 3 [1..9]
= go (splitList 3 [1..8]) 9
= go [[1,2,3],[4,5,6],[7,8]] 9
= [[1,2,3],[4,5,6],[7,8,9]]
splitList 3 [1..8]
= go (splitList 3 [1..7]) 8
= go [[1,2,3],[4,5,6],[7]] 8
= [[1,2,3],[4,5,6],[7,8]]
and
splitList 3 [1]
= go [] 1
= [[1]]
Hence, go acc x should
check if acc is empty, if so, produce a singleton list [[x]].
otherwise, check the last list in acc:
if its length is less than size, append x
otherwise, append a new list [x] to acc
Try doing this by hand on your example to understand all the cases (one possible go is sketched below).
This will not be efficient, but it will work.
You don't really need the Ord a constraint.
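For reference, one way go could be filled in following the case analysis above (a sketch only; last and init keep it readable but, as noted, quadratic):
splitList :: Int -> [a] -> [[a]]
splitList size xs
  | size > 0  = foldl go [] xs
  | otherwise = error "need a positive size"
  where
    go [] x = [[x]]                            -- empty accumulator: start with [[x]]
    go acc x
      | length (last acc) < size = init acc ++ [last acc ++ [x]]  -- extend the last list
      | otherwise                = acc ++ [[x]]                   -- start a new list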
Checking the length of the accumulator's first sublist leads to information flowing from the right, so it is potentially the first chunk, not the last, that ends up shorter. Such a function won't work on infinite lists either (not to mention the foldl-based variants).
A standard way to arrange for the information flow from the left with foldr is using an additional argument. The general scheme is
subLists n xs = foldr g z xs n
  where
    g x r i = cons x i (r (i-1))
    ....
The i argument to cons will guide its decision as to where to add the current element. The i-1 decrements the counter on the way forward from the left, instead of on the way back from the right. z must have the same type as r and as the whole foldr itself, so,
z _ = [[]]
This means there must be a post-processing step, and some edge cases must be handled as well,
subLists n xs = post . foldr g z xs $ n
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i          = cons x i (r (i-1))
    ....
cons must be lazy enough not to force the results of the recursive call prematurely.
I leave it as an exercise finishing this up.
For a simpler version with a pre-processing step instead, see this recent answer of mine.
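For completeness, here is one way the exercise above could be finished (my own sketch, ignoring non-positive n, and not necessarily the intended solution): post drops the empty group that appears when the length is an exact multiple of n, and cons is lazy in the grouped rest, so the result still streams.
subLists :: Int -> [a] -> [[a]]
subLists n xs = post . foldr g z xs $ n
  where
    z _ = [[]]
    g x r i | i == 1 = cons x i (r n)
    g x r i          = cons x i (r (i-1))
    -- counter 1: x is the last element of its chunk, so it gets a group of its own,
    -- which the elements to its left will be prepended into; otherwise x joins the group in front
    cons x 1 gs         = [x] : gs
    cons x _ ~(g' : gs) = (x : g') : gs
    post = filter (not . null)
With this, subLists 3 [1..10] gives [[1,2,3],[4,5,6],[7,8,9],[10]], and take 2 (subLists 3 [1..]) gives [[1,2,3],[4,5,6]], so it does work on infinite lists.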
Just going to give another answer: this is quite similar to trying to write groupBy as a fold, and actually has a couple gotchas w.r.t. laziness that you have to bear in mind for an efficient and correct implementation. The following is the fastest version I found that maintains all the relevant laziness properties:
splitList :: Int -> [a] -> [[a]]
splitList m xs = snd (foldr f (const ([],[])) xs 1)
  where
    f x a i
      | i <= 1    = let (ys,zs) = a m     in ([], (x : ys) : zs)
      | otherwise = let (ys,zs) = a (i-1) in (x : ys, zs)
The ys and zs obtained from the recursive processing of the rest of the list are the first group and the remaining groups into which that rest will be broken up. So we either prepend the current element onto that first subgroup while the counter says it still has room, or, when the counter runs out, we start a new group with the current element and expose a fresh, empty subgroup in front of it.
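A quick check in GHCi, including the laziness claim:
> splitList 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]
> take 3 (splitList 3 [1..])
[[1,2,3],[4,5,6],[7,8,9]]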

Haskell function that outputs all combinations within the input list that add to the input number

I want to write a function in Haskell that takes a list of integers and an integer value as input and outputs a list of all the combinations of elements that add up to the input integer.
For example:
myFunc [3,7,5,9,13,17] 30 = [[13,17],[3,5,9,13]]
Attempt:
myFunc :: [Integer] -> Integer -> [[Integer]]
myFunc list sm = case list of
  [] -> []
  [x]
    | x == sm   -> [x]
    | otherwise -> []
  (x : xs)
    | x + myFunc xs == sm -> [x] ++ myFunc[xs]
    | otherwise           -> myFunc xs
My code produces just one combination, and its elements must be consecutive, which is not what I want to achieve.
Write a function to create all subsets
f [] = [[]]
f (x:xs) = f xs ++ map (x:) (f xs)
then use the filter
filter ((==30) . sum) $ f [3,7,5,9,13,17]
[[13,17],[3,5,9,13]]
As suggested by @Ingo, you can prune the list while it's generated, for example
f :: (Num a, Ord a) => [a] -> [[a]]
f [] = [[]]
f (x:xs) = f xs ++ (filter ((<=30) . sum) $ map (x:) $ f xs)
should work faster than generating all 2^N elements.
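A quick check that the pruned version still finds the same combinations for the running example:
filter ((==30) . sum) $ f [3,7,5,9,13,17]
[[13,17],[3,5,9,13]]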
You can use subsequences from Data.List to give you every possible combination of values, then filter based on your requirement that they add to 30.
myFunc :: [Integer] -> Integer -> [[Integer]]
myFunc list sm =
  filter (\x -> sum x == sm) $ subsequences list
An alternative would be to use a right fold:
fun :: (Foldable t, Num a, Eq a) => t a -> a -> [[a]]
fun = foldr go $ \a -> if a == 0 then [[]] else []
  where go x f a = f a ++ ((x:) <$> f (a - x))
then,
\> fun [3,7,5,9,13,17] 30
[[13,17],[3,5,9,13]]
\> fun [3,7,5,9,13,17] 12
[[7,5],[3,9]]
An advantage of this approach is that it does not create any lists unless they add up to the desired value.
An approach based on filtering, on the other hand, creates all possible subsequence lists only to drop most of them during the filtering step.
Here is an alternate solution idea: Generate a list of lists that sum up to the target number, i.e.:
[30]
[29,1]
[28,2]
[28,1,1]
...
and only then filter the ones that could be built from your given list.
Pro: could be much faster, especially if your input list is long and your target number comparatively small, such that the list of list of summands is much smaller than the list of subsets of your input list.
Con: only works when 0 is not in the game.
Finally, you could do it both ways and write a function that decides which algorithm will be faster given the input list and the target number.
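A rough sketch of that idea (my own illustration, with hypothetical names sumsTo and buildableFrom): generate the target's partitions in descending order to avoid permuted duplicates, then keep those whose summands are all available in the input, treated as a multiset. Per the con above, summands are restricted to positive values, and the sublists come out in descending order rather than in input order.
import Data.List ((\\))

-- descending lists of positive summands for n (so 0 never appears)
sumsTo :: Integer -> [[Integer]]
sumsTo n = go n n
  where
    go 0 _   = [[]]
    go m cap = [ k : rest | k <- [min m cap, min m cap - 1 .. 1]
                          , rest <- go (m - k) k ]

-- multiset containment: every summand must be available in the pool
buildableFrom :: [Integer] -> [Integer] -> Bool
buildableFrom pool summands = null (summands \\ pool)

myFunc' :: [Integer] -> Integer -> [[Integer]]
myFunc' list sm = filter (buildableFrom list) (sumsTo sm)
For myFunc' [3,7,5,9,13,17] 30 this yields [[17,13],[13,9,5,3]].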

How can I fold with state in Haskell?

I have a simple function (used for some problems of project Euler, in fact). It turns a list of digits into a decimal number.
fromDigits :: [Int] -> Integer
fromDigits [x] = toInteger x
fromDigits (x:xs) = (toInteger x) * 10 ^ length xs + fromDigits xs
I realized that the type [Int] is not ideal. fromDigits should be able to take other inputs like e.g. sequences, maybe even foldables ...
My first idea was to replace the above code with sort of a "fold with state". What is the correct (= minimal) Haskell-category for the above function?
First, folding is already about carrying some state around. Foldable is precisely what you're looking for, there is no need for State or other monads.
Second, it'd be more natural to have the base case defined on empty lists and then the case for non-empty lists. The way it is now, the function is undefined on empty lists (while it'd be perfectly valid). And notice that [x] is just a shorthand for x : [].
In the current form the function would be almost expressible using foldr. However, within foldr the rest of the list isn't available, so you can't compute length xs. (Computing length xs at every step also makes the whole function unnecessarily O(n^2).) But this can be easily avoided if you re-think the procedure to consume the list the other way around. The new structure of the function could look like this:
fromDigits' :: [Int] -> Integer
fromDigits' = f 0
  where
    f s []     = s
    f s (x:xs) = f (s + ...) xs
After that, try using foldl to express f and finally replace it with Foldable.foldl.
You should avoid the use of length and write your function using foldl (or foldl'):
fromDigits :: [Int] -> Integer
fromDigits ds = foldl (\s d -> s*10 + (fromIntegral d)) 0 ds
From this a generalization to any Foldable should be clear.
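For instance, a minimal sketch of that generalization (same digit convention, just a strict left fold over any Foldable):
import Data.Foldable (foldl')

fromDigitsF :: Foldable t => t Int -> Integer
fromDigitsF = foldl' (\s d -> s * 10 + fromIntegral d) 0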
A better way to solve this is to build up a list of your powers of 10. This is quite simple using iterate:
powersOf :: Num a => a -> [a]
powersOf n = iterate (*n) 1
Then you just need to multiply these powers of 10 by their respective values in the list of digits. This is easily accomplished with zipWith (*), but you have to make sure it's in the right order first. This basically just means that you should re-order your digits so that they're in descending order of magnitude instead of ascending:
zipWith (*) (powersOf 10) $ reverse xs
But we want it to return an Integer, not Int, so let's throw a map fromIntegral in there
zipWith (*) (powersOf 10) $ map fromIntegral $ reverse xs
And all that's left is to sum them up
fromDigits :: [Int] -> Integer
fromDigits xs = sum $ zipWith (*) (powersOf 10) $ map fromIntegral $ reverse xs
Or for the point-free fans
fromDigits = sum . zipWith (*) (powersOf 10) . map fromIntegral . reverse
Now, you can also use a fold, which is basically just a pure for loop where the function is your loop body, the initial value is, well, the initial state, and the list you provide it is the values you're looping over. In this case, your state is a sum and what power you're on. We could make our own data type to represent this, or we could just use a tuple with the first element being the current total and the second element being the current power:
fromDigits xs = fst $ foldr go (0, 1) xs
  where
    go digit (s, power) = (s + digit * power, power * 10)
This is roughly equivalent to the Python code
def fromDigits(digits):
    def go(digit, acc):
        s, power = acc
        return (s + digit * power, power * 10)
    state = (0, 1)
    for digit in reversed(digits):  # foldr consumes the digits from the right
        state = go(digit, state)
    return state[0]
Such a simple function can carry all its state in its bare arguments. Carry around an accumulator argument, and the operation becomes trivial.
fromDigits :: [Int] -> Integer
fromDigits xs = fromDigitsA xs 0   -- 0 is the initial accumulator value
fromDigitsA [] acc = acc
fromDigitsA (x:xs) acc = fromDigitsA xs (acc * 10 + toInteger x)
If you're really determined to use a right fold for this, you can combine calculating length xs with the calculation like this (taking the liberty of defining fromDigits [] = 0):
fromDigits xn = let (x, _) = fromDigits' xn in x
  where
    fromDigits' []     = (0, 0)
    fromDigits' (x:xn) = (toInteger x * 10 ^ l + y, l + 1)
      where (y, l) = fromDigits' xn
Now it should be obvious that this is equivalent to
fromDigits xn = fst $ foldr (\ x (y, l) -> (toInteger x * 10^l + y, l + 1)) (0, 0) xn
The pattern of adding an extra component or result to your accumulator, and discarding it once the fold returns, is a very general one when you're re-writing recursive functions using folds.
Having said that, a foldr with a function that is always strict in its second parameter is a really, really bad idea (excessive stack usage, maybe a stack overflow on long lists) and you really should write fromDigits as a foldl as some of the other answers have suggested.
If you want to "fold with state", probably Traversable is the abstraction you're looking for. One of the methods defined in Traversable class is
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
Basically, traverse takes a "stateful function" of type a -> f b and applies it to every element in the container t a, resulting in a container f (t b). Here, f can be State, and you can use traverse with a function of type Int -> State Integer (). It would build a useless data structure (a list of units in your case), but you can just discard it. Here's a solution to your problem using Traversable:
import Control.Monad.State
import Data.Traversable

sumDigits :: Traversable t => t Int -> Integer
sumDigits cont = snd $ runState (traverse action cont) 0
  where action x = modify ((+ (fromIntegral x)) . (* 10))

test1 = sumDigits [1, 4, 5, 6]
However, if you really don't like building a discarded data structure, you can just use Foldable with a somewhat tricky Monoid implementation: store not only the computed result, but also 10^n, where n is the count of digits combined into that value. This additional information gives you the ability to combine two values:
import Data.Foldable
import Data.Monoid

data Digits = Digits
  { value :: Integer
  , power :: Integer
  }

-- on modern GHC a Monoid instance also needs a Semigroup instance
instance Semigroup Digits where
  (Digits d1 p1) <> (Digits d2 p2) =
    Digits (d1 * p2 + d2) (p1 * p2)

instance Monoid Digits where
  mempty = Digits 0 1

sumDigitsF :: Foldable f => f Int -> Integer
sumDigitsF cont = value $ foldMap (\x -> Digits (fromIntegral x) 10) cont

test2 = sumDigitsF [0, 4, 5, 0, 3]
I'd stick with the first implementation. Although it builds an unnecessary data structure, it's shorter and simpler to understand (as long as the reader understands Traversable).

Better way to solve this [Int] -> Int -> Int

Here is a sample problem I'm working on:
Example Input: test [4, 1, 5, 6] 6 returns 5
I'm solving this using this function:
test :: [Int] -> Int -> Int
test [] _ = 0
test (x:xs) time = if (time - x) < 0
                     then x
                     else test xs $ time - x
Any better way to solve this function (probably using any inbuilt higher order function) ?
How about
test xs time = maybe 0 id . fmap snd . find ((> time) . fst) $ zip sums xs
  where sums = scanl1 (+) xs
or equivalently with that sugary list comprehension
test xs time = headDef 0 $ [v | (s, v) <- zip sums xs, s > time]
  where sums = scanl1 (+) xs
headDef is provided by safe. It's trivial to implement (f _ (x:_) = x; f x _ = x) but the safe package has loads of useful functions like these so it's good to check out.
This sums the list up to each point and finds the first occurrence greater than time. scanl is a useful function that behaves like foldl but keeps its intermediate results, and zip zips two lists into a list of tuples. Then we just use fmap and maybe to manipulate the resulting Maybe pair to get our result.
This defaults to 0 like yours, but from a user's point of view I prefer the version that simply returns a Maybe Integer; to get that, simply remove the maybe 0 id.
You might like scanl and its close relative, scanl1. For example:
test_ xs time = [curr | (curr, tot) <- zip xs (scanl1 (+) xs), tot > time]
This finds all the places where the running sum is greater than time. Then you can pick the first one (or 0) like this:
safeHead def xs = head (xs ++ [def])
test xs time = safeHead 0 (test_ xs time)
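A quick sanity check against the example input:
> test [4, 1, 5, 6] 6
5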
This is verbose, and I don't necessarily recommend writing such a simple function like this (IMO the pattern matching & recursion is plenty clear). But, here's a pretty declarative pipeline:
import Control.Error
import Data.List

deadline :: (Num a, Ord a) => a -> [a] -> a
deadline time = fromMaybe 0 . findDeadline time

findDeadline :: (Num a, Ord a) => a -> [a] -> Maybe a
findDeadline time xs = decayWithDifferences time xs
                       >>= findIndex (< 0)
                       >>= atMay xs

decayWithDifferences :: Num b => b -> [b] -> Maybe [b]
decayWithDifferences time = tailMay . scanl (-) time

-- > deadline 6 [4, 1, 5, 6]
-- 5
This documents the code a bit and in principle lets you test a little better, though IMO these functions fit more-or-less into the 'obviously correct' category.
You can verify that it matches your implementation:
import Test.QuickCheck
prop_equality :: [Int] -> Int -> Bool
prop_equality xs time = test xs time == deadline time xs
-- > quickCheck prop_equality
-- +++ OK, passed 100 tests.
In this particular case the zipping suggested by others is not quite necessary:
test xs time = head $ [y-x | (x:y:_) <- tails $ scanl1 (+) $ 0:xs, y > time]++[0]
Here scanl1 will produce a list of rolling sums of the list xs, starting it with 0. Therefore, tails will produce a list with at least one list having two elements for non-empty xs. Pattern-matching (x:y:_) extracts two elements from each tail of rolling sums, so in effect it enumerates pairs of neighbouring elements in the list of rolling sums. Filtering on the condition, we reconstruct a part of the list that starts with the first element that produces a rolling sum greater than time. Then use headDef 0 as suggested before, or append a [0], so that head always returns something.
If you want to retain readability, I would just stick with your current solution. It's easy to understand, and isn't doing anything wrong.
Just because you can make it into a one line scan map fold mutant doesn't mean that you should!

all possibilities of dividing a list in two in Haskell

What's the most direct/efficient way to create all possibilities of dividing one (even) list into two in Haskell? I toyed with splitting all permutations of the list but that would add many extras - all the instances where each half contains the same elements, just in a different order. For example,
[1,2,3,4] should produce something like:
[ [1,2], [3,4] ]
[ [1,3], [2,4] ]
[ [1,4], [2,3] ]
Edit: thank you for your comments -- the order of elements and the type of the result is less important to me than the concept - an expression of all two-groups from one group, where element order is unimportant.
Here's an implementation, closely following the definition.
The first element always goes into the left group. After that, we add the next head element into one group or the other. If one of the groups becomes too big, there is no choice anymore and we must add all the rest into the shorter group.
divide :: [a] -> [([a], [a])]
divide [] = [([],[])]
divide (x:xs) = go ([x], [], xs, 1, length xs) []
  where
    go (a,b, [], i,j) zs = (a,b) : zs             -- i == length a - length b
    go (a,b, s@(x:xs), i,j) zs                     -- j == length s
      | i >= j    = (a, b++s) : zs
      | (-i) >= j = (a++s, b) : zs
      | otherwise = go (x:a, b, xs, i+1, j-1) $ go (a, x:b, xs, i-1, j-1) zs
This produces
*Main> divide [1,2,3,4]
[([2,1],[3,4]),([3,1],[2,4]),([1,4],[3,2])]
The limitation of having an even length list is unnecessary:
*Main> divide [1,2,3]
[([2,1],[3]),([3,1],[2]),([1],[3,2])]
(the code was re-written in the "difference-list" style for efficiency: go2 A zs == go1 A ++ zs).
edit: How does this work? Imagine yourself sitting at a pile of stones, dividing it into two. You put the first stone to a side, which one it doesn't matter (so, left, say). Then there's a choice where to put each next stone — unless one of the two piles becomes too small by comparison, and we thus must put all the remaining stones there at once.
To find all partitions of a non-empty list (of even length n) into two equal-sized parts, we can, to avoid repetitions, posit that the first element shall be in the first part. Then it remains to find all ways to split the tail of the list into one part of length n/2 - 1 and one of length n/2.
-- not to be exported
splitLen :: Int -> Int -> [a] -> [([a],[a])]
splitLen 0 _ xs = [([],xs)]
splitLen _ _ [] = error "Oops"
splitLen k l ys@(x:xs)
  | k == l    = [(ys,[])]
  | otherwise = [(x:us,vs) | (us,vs) <- splitLen (k-1) (l-1) xs]
                ++ [(us,x:vs) | (us,vs) <- splitLen k (l-1) xs]
does that splitting if called appropriately. Then
partitions :: [a] -> [([a],[a])]
partitions [] = [([],[])]
partitions (x:xs)
  | even len  = error "Original list with odd length"
  | otherwise = [(x:us,vs) | (us,vs) <- splitLen half len xs]
  where
    len  = length xs
    half = len `quot` 2
generates all the partitions without redundantly computing duplicates.
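For example, on the question's input this produces exactly the grouping asked for:
*Main> partitions [1,2,3,4]
[([1,2],[3,4]),([1,3],[2,4]),([1,4],[2,3])]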
luqui raises a good point. I haven't taken into account the possibility that you'd want to split lists with repeated elements. With those, it gets a little more complicated, but not much. First, we group the list into equal elements (done here with an Ord constraint; with only Eq it could still be done in O(length²)). The idea is then similar: to avoid repetitions, we posit that the first half contains more elements of the first group than the second (or, if there is an even number in the first group, equally many, and similar restrictions hold for the next group, etc.).
import Data.List (group, sort)

repartitions :: Ord a => [a] -> [([a],[a])]
repartitions = map flatten2 . halves . prepare
  where
    flatten2 (u,v) = (flatten u, flatten v)

    prepare :: Ord a => [a] -> [(a,Int)]
    prepare = map (\xs -> (head xs, length xs)) . group . sort

    halves :: [(a,Int)] -> [([(a,Int)],[(a,Int)])]
    halves [] = [([],[])]
    halves ((a,k):more)
      | odd total = error "Odd number of elements"
      | even k    = [ ((a,low):us, (a,low):vs) | (us,vs) <- halves more ]
                    ++ [ normalise ((a,c):us, (a,k-c):vs)
                       | c <- [low + 1 .. min half k]
                       , (us,vs) <- choose (half-c) remaining more ]
      | otherwise = [ normalise ((a,c):us, (a,k-c):vs)
                    | c <- [low + 1 .. min half k]
                    , (us,vs) <- choose (half-c) remaining more ]
      where
        remaining = sum $ map snd more
        total     = k + remaining
        half      = total `quot` 2
        low       = k `quot` 2
        normalise (u,v) = (nz u, nz v)
        nz = filter ((/= 0) . snd)

    choose :: Int -> Int -> [(a,Int)] -> [([(a,Int)],[(a,Int)])]
    choose 0 _ xs = [([],xs)]
    choose _ _ [] = error "Oops"
    choose need have ((a,k):more) =
      [ ((a,c):us, (a,k-c):vs)
      | c <- [least .. most]
      , (us,vs) <- choose (need-c) (have-k) more ]
      where
        least = max 0 (need + k - have)
        most  = min need k

    flatten :: [(a,Int)] -> [a]
    flatten xs = xs >>= uncurry (flip replicate)
Daniel Fischer's answer is a good way to solve the problem. I offer a worse (more inefficient) way, but one which more obviously (to me) corresponds to the problem description. I will generate all partitions of the list into two equal length sublists, then filter out equivalent ones according to your definition of equivalence. The way I usually solve problems is by starting like this -- create a solution that is as obvious as possible, then gradually transform it into a more efficient one (if necessary).
import Data.List (sort, nubBy, permutations)

type Partition a = ([a],[a])

-- Your notion of equivalence (sort to ignore the order)
equiv :: (Ord a) => Partition a -> Partition a -> Bool
equiv p q = canon p == canon q
  where
    canon (xs,ys) = sort [sort xs, sort ys]

-- All ordered partitions
partitions :: [a] -> [Partition a]
partitions xs = map (splitAt l) (permutations xs)
  where
    l = length xs `div` 2

-- All partitions filtered out by the equivalence
equivPartitions :: (Ord a) => [a] -> [Partition a]
equivPartitions = nubBy equiv . partitions
Testing
>>> equivPartitions [1,2,3,4]
[([1,2],[3,4]),([3,2],[1,4]),([3,1],[2,4])]
Note
After using QuickCheck to test the equivalence of this implementation with Daniel's, I found an important difference. Clearly, mine requires an (Ord a) constraint and his does not, and this hints at what the difference would be. In particular, if you give his [0,0,0,0], you will get a list with three copies of ([0,0],[0,0]), whereas mine will give only one copy. Which of these is correct was not specified; Daniel's is natural when considering the two output lists to be ordered sequences (which is what that type is usually considered to be), mine is natural when considering them as sets or bags (which is how this question seemed to be treating them).
Splitting The Difference
It is possible to get from an implementation that requires Ord to one that doesn't, by operating on the positions rather than the values in a list. I came up with this transformation -- an idea which I believe originates with Benjamin Pierce in his work on bidirectional programming.
{-# LANGUAGE RankNTypes #-}   -- needed for the type of onIndices

import Data.Traversable
import Control.Monad.Trans.State

data Labelled a = Labelled { label :: Integer, value :: a }

instance Eq (Labelled a) where
  a == b = compare a b == EQ

instance Ord (Labelled a) where
  compare a b = compare (label a) (label b)

labels :: (Traversable t) => t a -> t (Labelled a)
labels t = evalState (traverse trav t) 0
  where
    trav x = state (\i -> i `seq` (Labelled i x, i + 1))

onIndices :: (Traversable t, Functor u)
          => (forall a. Ord a => t a -> u a)
          -> forall b. t b -> u b
onIndices f = fmap value . f . labels
Using onIndices on equivPartitions wouldn't speed it up at all, but it would allow it to have the same semantics as Daniel's (up to equiv of the results) without the constraint, and with my more naive and obvious way of expressing it -- and I just thought it was an interesting way to get rid of the constraint.
My own generalized version, added much later, inspired by Will's answer:
import Data.Map (adjust, fromList, toList)
import Data.List (groupBy, sort)

divide xs n evenly = divide' xs (zip [0..] (replicate n [])) where
  evenPSize = div (length xs) n
  divide' [] result = [result]
  divide' (x:xs) result = do
      index <- indexes
      divide' xs (toList $ adjust (x :) index (fromList result))
    where
      notEmptyBins = filter (not . null . snd) $ result
      partlyFullBins
        | evenly == "evenly" = map fst . filter ((< evenPSize) . length . snd) $ notEmptyBins
        | otherwise          = map fst notEmptyBins
      indexes = partlyFullBins
                ++ if any (null . snd) result
                     then map fst . take 1 . filter (null . snd) $ result
                     else if null partlyFullBins
                            then map fst . head . groupBy (\a b -> length (snd a) == length (snd b)) . sort $ result
                            else []