Partitioning a list fairly

I'm sure this is a reasonably common thing but I can't find anything on it (my internet-search-fu is not strong).
I have a function that can group a list into a list of lists of N elements each, with the final sublist being smaller than N if the length of the list is not evenly divisible by N. Some examples:
groupEvery 2 [1,2,3,4] = [[1,2],[3,4]]
groupEvery 4 [1,2,3,4,5,6,7,8,9,10] = [[1,2,3,4], [5,6,7,8], [9,10]]
What I want is to take a list and a positive integer n (in the above examples n could be said to be 2 and 3) and partition it into a new list of n lists. It should work on a list of any type, and produce sublists whose sizes differ as little as possible.
So I would like to have:
fairPartition 3 [1,2,3,4,5,6,7,8,9,10] = [[1,2,3,4], [5,6,7], [8,9,10]]
Or any combination of sublists as long as there are two of length 3 and one of length 4.
A naive attempt using groupEvery:
fairPartition :: Int -> [a] -> [[a]]
fairPartition n xs = groupEvery ((length xs `div` n) + 1) xs
fairPartition 4 [1..10] = [[1,2,3],[4,5,6],[7,8,9],[10]]
but as you can see (3,3,3,1) is not a fair distribution of lengths, and for lists of smaller lengths it doesn't even return the right number of sublists:
# Haskell, at GHCi
*Main> let size = 4 in map (\l -> length . fairPartition 4 $ [1..l]) [size..25]
[2,3,3,4,3,3,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,4]
I would like a {pseudo,actual}-code function or explanation that is easily translatable to Haskell (the identity translation would be the best!).
Thanks.

You can use the split package's splitPlaces function for this.
import Data.List.Split
fairPartition n xs = case length xs `quotRem` n of
  (q, r) -> splitPlaces (replicate r (q+1) ++ replicate (n-r) q) xs
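If you would rather not pull in the split package, the same place-list idea translates directly to plain splitAt. A minimal sketch along those lines (fairPartition' is just an illustrative name, not code from the answer above):
fairPartition' :: Int -> [a] -> [[a]]
fairPartition' n xs = go (replicate r (q + 1) ++ replicate (n - r) q) xs
  where
    (q, r) = length xs `quotRem` n
    -- peel off one chunk of the requested size at a time
    go []     _  = []
    go (k:ks) ys = let (chunk, rest) = splitAt k ys in chunk : go ks rest
For example, fairPartition' 3 [1..10] should give [[1,2,3,4],[5,6,7],[8,9,10]] and fairPartition' 4 [1..10] should give [[1,2,3],[4,5,6],[7,8],[9,10]].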

Related

How to calculate percentage growth in haskell

I am running into a wall when I'm trying to calculate % growth from a list of integers to a list of floats. I am new to Haskell with very little experience, and I'm not sure where or what to search for my problem, so I came here. :)
function :: [Ints] -> [Floats]
I have a list.
nums = [4561,3241,2345,3455,4567]
I need to iterate through the list and calculate the percentage growth from nums[4] to nums[0],
and then output a list of floats. But I am not sure whether to use a map function or some other method to solve this. I need some way to retrieve two numbers from the list, i.e. nums[4] and nums[3], and calculate the % growth.
I'm not sure what you need, but if it's the growth from the (n-1)th to the nth element you can do it like this:
growths :: [Int] -> [Float]
growths xs =
  zipWith (\n n' -> fromIntegral (n'-n) / fromIntegral n * 100) xs (tail xs)
yielding
> growths [4561,3241,2345,3455,4567]
[-28.941021,-27.645788,47.334755,32.185238]
-- this one makes it a bit more obvious what is happening
> growths [1,2,3,4,5]
[100.0,50.0,33.333336,25.0]
I probably got the formula wrong and you want some other formula but the idea should be clear as long as you want to compare consecutive numbers.
The trick zipWith f xs (tail xs) is often useful to work with consecutive elements in a list. Here the first argument to f is the (n-1)th and the second is the nth element in the list.
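To see exactly which pairs zipWith is fed, you can inspect the zipped list on its own; a quick GHCi illustration (not part of the answer's code):
> let xs = [4561,3241,2345,3455,4567]
> zip xs (tail xs)
[(4561,3241),(3241,2345),(2345,3455),(3455,4567)]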
If you need it from the nth to the (n-1)th (as you kind of seem to imply), just flip the arguments:
growths xs =
  zipWith (\n' n -> fromIntegral (n'-n) / fromIntegral n * 100) xs (tail xs)
> growths [4561,3241,2345,3455,4567]
[40.72817,38.208954,-32.12735,-24.348589]
> growths [1,2,3,4,5]
[-50.0,-33.333336,-25.0,-20.0]

Haskell - split a list into two sublists with closest sums

I'm a Haskell beginner trying to learn more about the language by solving some online quizzes/problem sets.
The problem/question is quite lengthy but a part of it requires code that can find the number which divides a given list into two (nearly) equal (by sum) sub-lists.
Given [1..10]
Answer should be 7 since 1+2+...+7 = 28 & 8+9+10 = 27
This is the way I implemented it
-- partitions list by y
partishner :: (Floating a) => Int -> [a] -> [[[a]]]
partishner 0 xs = [[xs],[]]
partishner y xs = [take y xs : [drop y xs]] ++ partishner (y - 1) xs

-- finds the equal sum
findTheEquilizer :: (Ord a, Floating a) => [a] -> [[a]]
findTheEquilizer xs = fst $ minimumBy (comparing snd) zipParty
  where
    party = (tail . init) (partishner (length xs) xs)   -- removes [xs,[]] types
    afterParty = (map (\[x, y] -> (x - y) ** 2) . init . map (map sum)) party
    zipParty = zip party afterParty                     -- zips partitions and squared diff betn their sums
Given (last . head) (findTheEquilizer [1..10])
output: 7
For numbers near 50k it works fine
λ> (last . head) (findTheEquilizer [1..10000])
7071.0
The trouble starts when I put in lists with more than 70k elements. It takes forever to compute.
So what do I have to change in the code to make it run better, or do I have to change my whole approach? I'm guessing it's the latter, but I'm not sure how to go about doing that.
It looks to me like the implementation is quite chaotic. For example partishner seems to construct a list of lists of lists of a, where, if I understood it correctly, the outer list contains lists with two elements each: the list of elements on "the left", and the list of elements on "the right". As a result, this will take O(n²) to construct the lists.
By using lists instead of 2-tuples, this is also quite "unsafe", since a list can - although here probably impossible - contain no elements, one element, or more than two elements. If you make a mistake in one of the functions, it will be hard to find that mistake.
It looks to me that it might be easier to implement a "sweep algorithm": we first calculate the sum of all the elements in the list. This is the value on the "right" in case we decide to split at that specific point, next we start moving from left to right, each time subtracting the element from the sum on the right, and adding it to the sum on the left. We can each time evaluate the difference in score, like:
import Data.List (unfoldr)

sweep :: Num a => [a] -> [(Int, a, [a])]
sweep lst = x0 : unfoldr f x0
  where
    x0 = (0, sum lst, lst)
    f (_, _, [])     = Nothing
    f (i, r, (x:xs)) = Just (l, l)
      where l = (i+1, r-2*x, xs)
For example:
Prelude Data.List> sweep [1,4,2,5]
[(0,12,[1,4,2,5]),(1,10,[4,2,5]),(2,2,[2,5]),(3,-2,[5]),(4,-12,[])]
So if we select to split at the first split point (before the first element), the sum on the right is 12 higher than the sum on the left, if we split after the first element, the sum on the right (11) is 10 higher than the sum on the left (1).
We can then obtain the minimum of these splits with minimumBy :: (a -> a -> Ordering) -> [a] -> a:
import Data.List (minimumBy)
import Data.Ord (comparing)

findTheEquilizer :: (Ord a, Num a) => [a] -> ([a], [a])
findTheEquilizer lst = (take idx lst, tl)
  where
    (idx, _, tl) = minimumBy (comparing (abs . (\(_, x, _) -> x))) (sweep lst)
We then obtain the correct value for [1..10]:
Prelude Data.List Data.Ord Data.List> findTheEquilizer [1..10]
([1,2,3,4,5,6,7],[8,9,10])
or for 70'000:
Prelude Data.List Data.Ord Data.List> head (snd (findTheEquilizer [1..70000]))
49498
The above is not ideal, it can be implemented more elegantly, but I leave this as an exercise.
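One way that exercise might go (a sketch under the same assumptions, not part of the answer): build the prefix sums with scanl and pick the split index that minimises |total - 2 * prefix|.
-- sketch: findTheEquilizer' is an illustrative name, not code from the answer
findTheEquilizer' :: (Ord a, Num a) => [a] -> ([a], [a])
findTheEquilizer' lst = splitAt idx lst
  where
    total = sum lst
    -- |sum of the right part - sum of the left part| at every split point
    diffs = map (\s -> abs (total - 2 * s)) (scanl (+) 0 lst)
    idx   = snd (minimum (zip diffs [0..]))
For [1..10] this should again give ([1,2,3,4,5,6,7],[8,9,10]).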
Okay, firstly, let's analyse why it runs forever (...actually not forever, just slowly); take a look at the partishner function:
partishner y xs = [take y xs : [drop y xs]] ++ partishner (y - 1) xs
where take y xs and drop y xs run in linear time, i.e. O(N), and so
[take y xs : [drop y xs]]
is O(N) too.
However, it is run again and again recursively over each element of the given list. Now suppose the length of the given list is M; each call of partishner takes O(N) time, so finishing the computation needs:
O(1 + 2 + ... + M) = O(M(M+1)/2) ~ O(M^2)
Now the list has 70k elements, so it needs at least 70k^2 steps. That is why it hangs.
Instead of using the partishner function, you can compute the running sums of the list in linear time:
sumList :: (Floating a) => [a] -> [a]
sumList xs = sum 0 xs
  where sum _ []     = []
        sum s (y:ys) = let s' = s + y in s' : sum s' ys
and findEquilizer just sums the given list from left to right (leftSum) and from right to left (rightSum) and takes the result just as your original program does, but the whole process only takes linear time.
findEquilizer :: (Ord a, Floating a) => [a] -> a
findEquilizer [] = 0
findEquilizer xs =
  let leftSum    = reverse $ 0 : sumList (init xs)
      rightSum   = sumList (reverse xs)
      afterParty = zipWith (\x y -> (x - y) ** 2) leftSum rightSum
  in  fst $ minimumBy (comparing snd) (zip (reverse $ init xs) afterParty)
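A quick check (with the Floating constraint the list defaults to Double, so the answer comes back as 7.0):
λ> findEquilizer [1..10]
7.0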
I assume that none of the list elements are negative, and use a "tortoise and hare" approach. The hare steps through the list, adding up elements. The tortoise does the same thing, but it keeps its sum doubled and it carefully ensures that it only takes a step when that step won't put it ahead of the hare.
approxEqualSums
  :: (Num a, Ord a)
  => [a] -> (Maybe a, [a])
approxEqualSums as0 = stepHare 0 Nothing as0 0 as0
  where
    -- ht is the current best guess.
    stepHare _tortoiseSum ht tortoise _hareSum []
      = (ht, tortoise)
    stepHare tortoiseSum ht tortoise hareSum (h:hs)
      = stepTortoise tortoiseSum ht tortoise (hareSum + h) hs

    stepTortoise tortoiseSum ht [] hareSum hare
      = stepHare tortoiseSum ht [] hareSum hare
    stepTortoise tortoiseSum ht tortoise@(t:ts) hareSum hare
      | tortoiseSum' <= hareSum
      = stepTortoise tortoiseSum' (Just t) ts hareSum hare
      | otherwise
      = stepHare tortoiseSum ht tortoise hareSum hare
      where tortoiseSum' = tortoiseSum + 2*t
In use:
> approxEqualSums [1..10]
(Just 6,[7,8,9,10])
6 is the last element before going over half, and 7 is the first one after that.
I asked in a comment and the OP says [1..n] is not really what defines the question; what is being asked about is more like an arbitrary ascending sequence such as [1,3,7,19,37,...,1453,...,n].
Yet even as per the given answers, for a list like [1..n] we really don't need to do any list operation at all.
The sum of [1..n] is n*(n+1)/2.
Which means we need to find the m for which
m*(m+1)/2 = n*(n+1)/4.
So if n == 100 then m^2 + m - 5050 = 0.
All we need is the quadratic formula m = (-b + sqrt(b^2 - 4*a*c)) / (2*a) where a = 1, b = 1 and c = -5050, yielding the reasonable root 70.565 ⇒ 71 (rounded). Let's check: 71*72/2 = 2556 and 5050 - 2556 = 2494, which gives 2556 - 2494 = 62, a minimal difference (< 71). Yes, we must split at 71. So just produce result = [[1..71],[72..100]] and we're done!
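That closed form is a one-liner; a small sketch (splitPoint is a hypothetical helper, only valid for the contiguous range [1..n]):
-- split point for [1..n] from the quadratic formula; a sketch, not general code
splitPoint :: Int -> Int
splitPoint n = round m
  where
    m :: Double
    m  = (sqrt (1 + 2 * n' * (n' + 1)) - 1) / 2
    n' = fromIntegral n
-- splitPoint 100 == 71, so the split is ([1..71], [72..100])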
But when the numbers are not a contiguous ascending range, that's a different animal. It has to be done by first finding the total sum and then, like a binary search, jumping halfway through the list and comparing the sums to decide whether to jump halfway back or forward accordingly. I will implement that one later.
Here's code that empirically behaves better than linearly, and gets to 2,000,000 elements in just over 1 second even when interpreted:
import Data.List (tails)

g :: (Ord c, Num c) => [c] -> [(Int, c)]
g = head . dropWhile ((> 0) . snd . last) . map (take 2) . tails . zip [1..]
      . (\xs -> zipWith (-) (map (last xs -) xs) xs) . scanl1 (+)
g [1..10] ==> [(6,13),(7,-1)] -- 0.0s
g [1..70000] ==> [(49497,32494),(49498,-66502)] -- 0.09s
g [70000,70000-1..1] ==> [(20502,66502),(20503,-32494)] -- 0.09s
g [1..100000] ==> [(70710,75190),(70711,-66232)] -- 0.11s
g [1..1000000] ==> [(707106,897658),(707107,-516556)] -- 0.62s
g [1..2000000] ==> [(1414213,1176418),(1414214,-1652010)] -- 1.14s n^0.88
g [1..3000000] ==> [(2121320,836280),(2121321,-3406362)] -- 1.65s n^0.91
It works by running the partial sums with scanl1 (+) and taking the total sum as its last, so that for each partial sum, subtracting it from the total gives us the sum of the second part of the split.
The algorithm assumes all the numbers in the input list are strictly positive, so the partial sums list is monotonically increasing. Nothing else is assumed about the numbers.
The value must be chosen from the pair (g's result) so that its second component's absolute value is the smaller of the two.
This is achieved by minimumBy (comparing (abs . snd)) . g.
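Put together, a hedged sketch of that final step (splitIndex is just an illustrative name; it assumes g from above, plus minimumBy from Data.List and comparing from Data.Ord):
-- pick the split index whose leftover difference is smallest in magnitude
splitIndex :: (Ord c, Num c) => [c] -> Int
splitIndex = fst . minimumBy (comparing (abs . snd)) . g
Since g [1..10] is [(6,13),(7,-1)], splitIndex [1..10] should be 7, matching the other answers.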
Clarifications: there's some confusion about "complexity" in the comments below, yet the answer says nothing at all about complexity; it uses a specific empirical measurement. You can't argue with empirical data (unless you misinterpret its meaning).
The answer does not claim it "is better than linear", it says "it behaves better than linear" [in the tested range of problem sizes], which the empirical data incontrovertibly show.
Finally, an appeal to authority. Robert Sedgewick is an authority on algorithms. Take it up with him.
(and of course the algorithm handles unordered data as well as it does ordered).
As for the reasons for OP's code inefficiency: map sum . inits can't help being quadratic, but the equivalent scanl (+) 0 is linear. The radical improvement comes about from a lot of redundant calculations in the former being avoided in the latter. (Another example of this can be seen here.)
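A tiny illustration of that difference (inits is from Data.List; both produce the same running sums, only the second reuses the previous sum):
map sum (inits [1,4,2,5])  ==>  [0,1,5,7,12]   -- re-sums every prefix from scratch
scanl (+) 0 [1,4,2,5]      ==>  [0,1,5,7,12]   -- extends the previous sum in O(1)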

Split list in tuple with lists of length 2^0, 2^1, ... , 2^N using Haskell

I'm trying to solve Haskell problem, but I don't have any clue where to start.
I need to split a list into lists with lengths of 2^0, 2^1, 2^2, ... elements.
So if we have the list [1,2,3,4,5,6,7,8,9,10,11,12,13], after using our function we should get the result [[1],[2,3],[4,5,6,7],[8,9,10,11,12,13]]
You can use the function: splitAt :: Int -> [a] -> ([a],[a]), and then use recursion:
blocks :: Int -> [a] -> [[a]]
blocks _ [] = []
blocks n ls = la : blocks (2*n) lb
  where ~(la, lb) = splitAt n ls
So if we call blocks 1 [1,2,3,4,5,6], we obtain [[1],[2,3],[4,5,6]]. In the first clause, we check whether the list given to blocks is empty, in which case there is nothing to split, so we return the empty list. In the recursive case, we splitAt the list ls into la and lb. la is our first block, and lb is what we still need to split. We recurse with 2*n as the new split length to ensure that the lengths of the blocks increase like powers of two.
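If you want the exact function from the question, which always starts with a block of length 2^0 = 1, a thin wrapper is enough (powerBlocks is just an illustrative name, not part of the answer):
-- start the doubling at block size 1
powerBlocks :: [a] -> [[a]]
powerBlocks = blocks 1
powerBlocks [1..13] should then give [[1],[2,3],[4,5,6,7],[8,9,10,11,12,13]], as in the question.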
Maybe you can also use zip and groupBy. It seems to work, but it is not as straightforward.
import Data.List

a  = "Hello World!"
p  = [2^n | n <- [0..]]
pa = take (length a) p
b  = [n `elem` pa | n <- [1 .. length a]]
c  = zip a b
d  = groupBy (\x y -> snd y == False) c
e  = map (map (\x -> fst x)) d

Haskell: List Boundary

I have a list of doubles (myList), which I want to add to a new list (someList), but once the new list reaches a set limit, i.e. 25, I want to stop adding to it. I have tried implementing this function using sum but was unsuccessful. Example code below.
someList = [(a)| a <- myList, sum someList < 30]
The way @DanielFischer phrased the question is compatible with the Haskell way of thinking.
Do you want someList to be the longest prefix of myList that has a sum < 30?
Here's how I'd approach it: let's say our list is
>>> let list = [1..20]
we can find the "cumulative sums" using:
>>> let sums = tail . scanl (+) 0
>>> sums list
[1,3,6,10,15,21,28,36,45,55,66,78,91,105,120,136,153,171,190,210]
Now zip that with the original list to get pairs of elements with the sum up to that point
>>> zip list (sums list)
[(1,1),(2,3),(3,6),(4,10),(5,15),(6,21),(7,28),(8,36),
(9,45),(10,55),(11,66),(12,78),(13,91),(14,105),(15,120),
(16,136),(17,153),(18,171),(19,190),(20,210)]
Then we can takeWhile this list to get the prefix we want:
>>> takeWhile (\x -> snd x < 30) (zip list (sums list))
[(1,1),(2,3),(3,6),(4,10),(5,15),(6,21),(7,28)]
finally we can get rid of the cumulative sums that we used to perform this calculation:
>>> map fst (takeWhile (\x -> snd x < 30) (zip list (sums list)))
[1,2,3,4,5,6,7]
Note that because of laziness, this is as efficient as the recursive solutions -- only the sums up to the point where they fail the test need to be calculated. This can be seen because the solution works on infinite lists (because if we needed to calculate all the sums, we would never finish).
I'd probably abstract this and take the limit as a parameter:
>>> :{
... let initial lim list =
... map fst (takeWhile (\x -> snd x < lim) (zip list (sums list)))
... :}
This function has an obvious property it should satisfy, namely that the sum of a list should always be less than the limit (as long as the limit is greater than 0). So we can use QuickCheck to make sure we did it right:
>>> import Test.QuickCheck
>>> quickCheck (\lim list -> lim > 0 ==> sum (initial lim list) < lim)
+++ OK, passed 100 tests.
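And, echoing the laziness point above, the same definition should also terminate on an infinite list:
>>> initial 30 [1..]
[1,2,3,4,5,6,7]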
someList = makeList myList [] 0
  where
    makeList []     ys _     = ys   -- input exhausted before reaching 25
    makeList (x:xs) ys total =
      let newTot = total + x
      in  if newTot >= 25
            then ys
            else makeList xs (ys ++ [x]) newTot
This takes elements from myList as long as their sum is less than 25.
The logic takes place in makeList. It takes the first element of the input list and adds it to the running total, to see if it's greater than 25. If it is, we shouldn't add it to the output list, and we finish recursing. Otherwise, we put x on the end of the output list (ys) and keep going with the rest of the input list.
The behaviour you want is
ghci> appendWhileUnder 25 [1..5] [1..5]
[1,2,3,4,5,1,2,3]
because that sums to 21 and adding the 4 would bring it to 25.
OK, one way to go about this is by just appending them with ++ then taking the initial segment that's under 25.
appendWhileUnder n xs ys = takeWhileUnder n (xs++ys)
I don't want to keep summing intermediate lists, so I'll keep track with how much I'm allowed (n).
takeWhileUnder n [] = []
takeWhileUnder n (x:xs)
  | x < n     = x : takeWhileUnder (n-x) xs
  | otherwise = []
Here I allow x through if it doesn't take me beyond what's left of my allowance.
Possibly undesired side effect: it'll chop out bits of the original list if it sums to over 25. Workaround: use
appendWhileUnder' n xs ys = xs ++ takeWhileUnder (n - sum xs) ys
which keeps the entire xs whether it brings you over n or not.
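For instance, it should behave like this:
ghci> appendWhileUnder' 25 [1..5] [1..5]
[1,2,3,4,5,1,2,3]
ghci> appendWhileUnder' 25 [10,20] [1..5]
[10,20]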

all possibilities of dividing a list in two in Haskell

What's the most direct/efficient way to create all possibilities of dividing one (even) list into two in Haskell? I toyed with splitting all permutations of the list but that would add many extras - all the instances where each half contains the same elements, just in a different order. For example,
[1,2,3,4] should produce something like:
[ [1,2], [3,4] ]
[ [1,3], [2,4] ]
[ [1,4], [2,3] ]
Edit: thank you for your comments -- the order of elements and the type of the result is less important to me than the concept - an expression of all two-groups from one group, where element order is unimportant.
Here's an implementation, closely following the definition.
The first element always goes into the left group. After that, we add the next head element into one group or the other. If one of the groups becomes too big, there is no choice anymore and we must add all the rest into the shorter group.
divide :: [a] -> [([a], [a])]
divide [] = [([],[])]
divide (x:xs) = go ([x], [], xs, 1, length xs) []
  where
    go (a, b, [],       i, j) zs = (a, b) : zs     -- i == length a - length b
    go (a, b, s@(x:xs), i, j) zs                   -- j == length s
      | i    >= j = (a, b ++ s) : zs
      | (-i) >= j = (a ++ s, b) : zs
      | otherwise = go (x:a, b, xs, i+1, j-1) $ go (a, x:b, xs, i-1, j-1) zs
This produces
*Main> divide [1,2,3,4]
[([2,1],[3,4]),([3,1],[2,4]),([1,4],[3,2])]
The limitation of having an even length list is unnecessary:
*Main> divide [1,2,3]
[([2,1],[3]),([3,1],[2]),([1],[3,2])]
(the code was re-written in the "difference-list" style for efficiency: go2 A zs == go1 A ++ zs).
edit: How does this work? Imagine yourself sitting at a pile of stones, dividing it into two. You put the first stone to one side; which side doesn't matter (so, left, say). Then there's a choice where to put each next stone, unless one of the two piles becomes too small by comparison, in which case we must put all the remaining stones there at once.
To find all partitions of a non-empty list (of even length n) into two equal-sized parts, we can, to avoid repetitions, posit that the first element shall be in the first part. Then it remains to find all ways to split the tail of the list into one part of length n/2 - 1 and one of length n/2.
-- not to be exported
splitLen :: Int -> Int -> [a] -> [([a],[a])]
splitLen 0 _ xs = [([],xs)]
splitLen _ _ [] = error "Oops"
splitLen k l ys@(x:xs)
  | k == l    = [(ys, [])]
  | otherwise = [(x:us, vs) | (us,vs) <- splitLen (k-1) (l-1) xs]
             ++ [(us, x:vs) | (us,vs) <- splitLen k (l-1) xs]
does that splitting if called appropriately. Then
partitions :: [a] -> [([a],[a])]
partitions [] = [([],[])]
partitions (x:xs)
  | even len  = error "Original list with odd length"
  | otherwise = [(x:us, vs) | (us,vs) <- splitLen half len xs]
  where
    len  = length xs
    half = len `quot` 2
generates all the partitions without redundantly computing duplicates.
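A quick check of what this should produce for the example from the question:
*Main> partitions [1,2,3,4]
[([1,2],[3,4]),([1,3],[2,4]),([1,4],[2,3])]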
luqui raises a good point. I haven't taken into account the possibility that you'd want to split lists with repeated elements. With those, it gets a little more complicated, but not much. First, we group the list into runs of equal elements (done here with an Ord constraint; with only Eq it could still be done in O(length²)). The idea is then similar: to avoid repetitions, we posit that the first half contains more elements of the first group than the second (or, if there is an even number in the first group, equally many, and similar restrictions hold for the next group, etc.).
import Data.List (group, sort)

repartitions :: Ord a => [a] -> [([a],[a])]
repartitions = map flatten2 . halves . prepare
  where
    flatten2 (u,v) = (flatten u, flatten v)

    prepare :: Ord a => [a] -> [(a,Int)]
    prepare = map (\xs -> (head xs, length xs)) . group . sort

    halves :: [(a,Int)] -> [([(a,Int)],[(a,Int)])]
    halves [] = [([],[])]
    halves ((a,k):more)
      | odd total = error "Odd number of elements"
      | even k    = [((a,low):us, (a,low):vs) | (us,vs) <- halves more]
                 ++ [normalise ((a,c):us, (a,k-c):vs)
                        | c <- [low + 1 .. min half k], (us,vs) <- choose (half-c) remaining more]
      | otherwise = [normalise ((a,c):us, (a,k-c):vs)
                        | c <- [low + 1 .. min half k], (us,vs) <- choose (half-c) remaining more]
      where
        remaining = sum $ map snd more
        total     = k + remaining
        half      = total `quot` 2
        low       = k `quot` 2
        normalise (u,v) = (nz u, nz v)
        nz = filter ((/= 0) . snd)

    choose :: Int -> Int -> [(a,Int)] -> [([(a,Int)],[(a,Int)])]
    choose 0 _ xs = [([],xs)]
    choose _ _ [] = error "Oops"
    choose need have ((a,k):more) =
        [((a,c):us, (a,k-c):vs) | c <- [least .. most], (us,vs) <- choose (need-c) (have-k) more]
      where
        least = max 0 (need + k - have)
        most  = min need k

    flatten :: [(a,Int)] -> [a]
    flatten xs = xs >>= uncurry (flip replicate)
Daniel Fischer's answer is a good way to solve the problem. I offer a worse (more inefficient) way, but one which more obviously (to me) corresponds to the problem description. I will generate all partitions of the list into two equal length sublists, then filter out equivalent ones according to your definition of equivalence. The way I usually solve problems is by starting like this -- create a solution that is as obvious as possible, then gradually transform it into a more efficient one (if necessary).
import Data.List (sort, nubBy, permutations)

type Partition a = ([a],[a])

-- Your notion of equivalence (sort to ignore the order)
equiv :: (Ord a) => Partition a -> Partition a -> Bool
equiv p q = canon p == canon q
  where
    canon (xs,ys) = sort [sort xs, sort ys]

-- All ordered partitions
partitions :: [a] -> [Partition a]
partitions xs = map (splitAt l) (permutations xs)
  where
    l = length xs `div` 2

-- All partitions filtered out by the equivalence
equivPartitions :: (Ord a) => [a] -> [Partition a]
equivPartitions = nubBy equiv . partitions
Testing
>>> equivPartitions [1,2,3,4]
[([1,2],[3,4]),([3,2],[1,4]),([3,1],[2,4])]
Note
After using QuickCheck to test the equivalence of this implementation with Daniel's, I found an important difference. Clearly, mine requires an (Ord a) constraint and his does not, and this hints at what the difference would be. In particular, if you give his [0,0,0,0], you will get a list with three copies of ([0,0],[0,0]), whereas mine will give only one copy. Which of these is correct was not specified; Daniel's is natural when considering the two output lists to be ordered sequences (which is what that type is usually considered to be), mine is natural when considering them as sets or bags (which is how this question seemed to be treating them).
Splitting The Difference
It is possible to get from an implementation that requires Ord to one that doesn't, by operating on the positions rather than the values in a list. I came up with this transformation -- an idea which I believe originates with Benjamin Pierce in his work on bidirectional programming.
{-# LANGUAGE RankNTypes #-}   -- needed for the higher-rank type of onIndices

import Data.Traversable
import Control.Monad.Trans.State

data Labelled a = Labelled { label :: Integer, value :: a }

instance Eq (Labelled a) where
  a == b = compare a b == EQ

instance Ord (Labelled a) where
  compare a b = compare (label a) (label b)

labels :: (Traversable t) => t a -> t (Labelled a)
labels t = evalState (traverse trav t) 0
  where
    trav x = state (\i -> i `seq` (Labelled i x, i + 1))

onIndices :: (Traversable t, Functor u)
          => (forall a. Ord a => t a -> u a)
          -> forall b. t b -> u b
onIndices f = fmap value . f . labels
Using onIndices on equivPartitions wouldn't speed it up at all, but it would allow it to have the same semantics as Daniel's (up to equiv of the results) without the constraint, and with my more naive and obvious way of expressing it -- and I just thought it was an interesting way to get rid of the constraint.
My own generalized version, added much later, inspired by Will's answer:
import Data.Map (adjust, fromList, toList)
import Data.List (groupBy, sort)

divide xs n evenly = divide' xs (zip [0..] (replicate n []))
  where
    evenPSize = div (length xs) n

    divide' [] result = [result]
    divide' (x:xs) result = do
        index <- indexes
        divide' xs (toList $ adjust (x :) index (fromList result))
      where
        notEmptyBins = filter (not . null . snd) $ result
        partlyFullBins
          | evenly == "evenly" = map fst . filter ((< evenPSize) . length . snd) $ notEmptyBins
          | otherwise          = map fst notEmptyBins
        indexes = partlyFullBins
               ++ if any (null . snd) result
                    then map fst . take 1 . filter (null . snd) $ result
                    else if null partlyFullBins
                           then map fst . head . groupBy (\a b -> length (snd a) == length (snd b)) . sort $ result
                           else []