Time complexity of :: and @ (OCaml)

I was reading this and was wondering if :: is always more efficient than @, or if that is only the case in this particular implementation of rev:
let rec rev = function
  | [] -> []
  | h :: t -> rev t @ [h]

let rev l =
  let rec aux accu = function
    | [] -> accu
    | h :: t -> aux (h :: accu) t
  in
  aux [] l
For example, if I wanted to append an element to a queue, would there be a difference between these two methods:
let append x q = q @ [x]
and
type 'a queue = {front:'a list; back:'a list}
let norm = function
| {front=[]; back} -> {front=List.rev back; back=[]}
| q -> q
let append x q = norm {q with back=x::q.back}

@ is O(n) in the size of its left operand. :: is O(1), and List.rev (as defined in the standard library) is O(n).
So if you apply @ n times, you get an O(n^2) algorithm. But if you apply :: n times, that's only O(n), and if you then reverse the result once at the end, it's still O(n). This is true in general, and for that reason any algorithm that appends to the end of a list multiple times should instead prepend to the beginning of the list multiple times and then reverse the result.
However, your example is different. Here you're replacing one single use of @ with one possible use of rev. Since both are O(n), you end up with the same complexity in the case where you use rev.
However, the case where you use rev won't happen often, so the complexity of enqueuing n elements should end up amortized O(n) (and if you don't dequeue anything in between, it's just plain O(n)). Whereas your version using @ would lead to O(n^2).
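As an illustration of the general point (not of the queue case), here is a minimal sketch contrasting the two list-building strategies; the names build_append and build_cons are made up for this example:

(* Builds [0; 1; ...; n-1] by appending to the end on each step: O(n^2) overall,
   because each @ copies the whole accumulator. *)
let build_append n =
  let rec go acc i = if i = n then acc else go (acc @ [i]) (i + 1) in
  go [] 0

(* Builds the same list by prepending (O(1) per step) and reversing once: O(n). *)
let build_cons n =
  let rec go acc i = if i = n then List.rev acc else go (i :: acc) (i + 1) in
  go [] 0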

I was reading this and was wondering if :: is always more efficient than @
Basically yes, but the question is vague enough that one can construct special cases where both are equally efficient.
The efficiency or complexity of an operation is typically expressed as an asymptotic equivalent of the computing cost of this operation in terms of the size of its input. That said, we can compare the complexities of :: and @ precisely by stating:
The complexity of computing x :: lst is O(1); that is, it is bounded by a constant cost independent of the inputs.
The complexity of computing a @ b is O(List.length a).
(The notation used is called big-O notation or Landau notation and should be described in most computer science textbooks.)
For example, if I wanted to append an element to a queue, would there be a difference between these two methods:
These two methods have equivalent complexity, running in O(length q). The complexity of the second operation is carried by the List.rev operation.
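For completeness, a hypothetical pop for the same two-list queue (not part of the question) shows why the occasional reversal is cheap on average: norm only reverses back once front has been exhausted, so each element is reversed at most once between being appended and being popped.

(* Assumes the invariant maintained by append and by pop itself:
   front is empty only when the whole queue is empty. *)
let pop q =
  match q with
  | { front = []; _ } -> None
  | { front = x :: rest; back } -> Some (x, norm { front = rest; back })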

Related

Sort a list of tuples by their second element without higher order functions or recursion

I have a list of (String, Int) pairs and am struggling to figure out how to sort the list by the snd field (Int). I am not allowed to use higher order functions or recursion, which makes it more difficult.
For example, I have
[("aaaaa", 5),("bghdfe", 6),("dddr",4)]
and would like to sort it into
[("dddr",4),("aaaaa", 5),("bghdfe", 6)].
Edit:
I understand that the sort may not be possible without higher order functions; what I really need is to find the element with the minimum value in the snd field. So is there a way to find the minimum number and then take the fst field of the list element at that index? If that way works better, I am unsure how to find the index of that minimum number, however.
The task seems to be impossible, since you can't write a sort without recursion in Haskell. This means you must use sort, which is usually defined as something like sortBy compare, and thus you are relying on a higher order function anyway.
But if you are allowed to use sort, you can do it by first reversing all the tuples, sorting the resulting list, and then reversing the tuples in the result again. This can be done with a few nested list comprehensions, so technically no additional higher order functions are needed.
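A minimal sketch of that swap-sort-swap idea (assuming sort from Data.List is allowed; the name sortBySnd is made up here):

import Data.List (sort)

-- Swap each pair, sort (which compares the Int first), then swap back.
-- Note that ties on the Int are broken by the String, unlike a stable sortOn snd.
sortBySnd :: (Ord a, Ord b) => [(a, b)] -> [(a, b)]
sortBySnd xs = [ (a, b) | (b, a) <- sort [ (y, x) | (x, y) <- xs ] ]

For the example in the question this gives [("dddr",4),("aaaaa",5),("bghdfe",6)].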
After you have given more details, I'd do
homework list = snd (minimum [ (s,f) | (f,s) <- list ])
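For the example from the question, this picks out the fst of the pair with the smallest snd:

> homework [("aaaaa", 5), ("bghdfe", 6), ("dddr", 4)]
"dddr"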
Without higher order functions or recursion all you have left, technically, is list comprehensions. Thus we define
-- sortBy (comparing snd) >>> take 1 >>> listToMaybe >>> fmap fst
-- ~= minimumBy (comparing snd) >>> fst
foo :: Ord b => [(a,b)] -> Maybe a
foo xs = case [ a | (a,b) <- xs
                  , null [ () | (_c,d) <- xs, d < b ] ]
         of (a:_) -> Just a
            []    -> Nothing
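A quick check against the question's example; note that foo returns Nothing for an empty list, and the first of the tied elements if several share the minimal snd:

> foo [("aaaaa", 5), ("bghdfe", 6), ("dddr", 4)]
Just "dddr"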

How recursion met the base case Haskell

I am trying to understand this piece of code, which returns all possible combinations of the [a] passed to it:
-- Infinite list of all combinations for a given value domain
allCombinations :: [a] -> [[a]]
allCombinations [] = [[]]
allCombinations values = [] : concatMap (\w -> map (:w) values)
                                        (allCombinations values)
Here I tried this sample input:
ghci> take 7 (allCombinations [True,False])
[[],[True],[False],[True,True],[False,True],[True,False],[False,False]]
What I don't understand is how the recursion will eventually stop and return [[]], because the allCombinations function certainly doesn't have any pointer that moves through the list on each call until it meets the base case [] and returns [[]]. As I see it, it will call allCombinations infinitely and will never stop on its own. Or maybe I am missing something?
On the other hand, take only returns the first 7 elements of the final list, after all the calculation has been carried out by returning from the completed recursive calls. So how does the recursion actually meet the base case here?
Secondly, what is the purpose of concatMap here? Couldn't we also use map just to apply the function to the list, and arrange the list inside the function? What is concatMap actually doing here? Its definition tells us that it first maps the function and then concatenates the lists, whereas it seems to me we are already doing that inside the function.
Any valuable input would be appreciated.
Short answer: it will never meet the base case.
However, it does not need to. The base case is most often needed to stop a recursion, however here you want to return an infinite list, so no need to stop it.
On the other hand, this function would break if you try to take more than 1 element of allCombinations [] -- have a look at @robin's answer to understand better why. That is the only reason you see a base case here.
The way the main function works is that it starts with an empty list and then prepends each element of the argument list at the front; (:w) does that. However, mapping this lambda alone would return an ever more deeply nested list, i.e. [], [[True],[False]], [[[True,True],[True,False]], ...] and so on. concatMap removes the outer list at each step, and as it is called recursively this only returns one flat list of lists at the end. This can be a complicated concept to grasp, so look for other examples of the use of concatMap and try to understand how they work and why map alone wouldn't be enough.
This obviously only works because of Haskell's lazy evaluation. Similarly, you know that in a foldr you need to pass it a base case; however, when your function is supposed to only take infinite lists, you can have undefined as the base case to make it clearer that finite lists should not be used. For example, foldr f undefined could be used instead of foldr f [].
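A small, made-up illustration of that last point: because (||) never looks at its second argument once the first is True, the undefined seed is never forced when the input is an infinite list that eventually contains a True.

anyTrue :: [Bool] -> Bool
anyTrue = foldr (\x r -> x || r) undefined

-- anyTrue (cycle [False, False, True]) == True, without ever touching undefined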
@Lorenzo has already explained the key point - that the recursion in fact never ends, and therefore this generates an infinite list, which you can still take any finite number of elements from because of Haskell's laziness. But I think it will be helpful to give a bit more detail about this particular function and how it works.
Firstly, the [] : at the start of the definition tells you that the first element will always be []. That of course is the one and only way to make a 0-element list from elements of values. The rest of the list is concatMap (\w -> map (:w) values) (allCombinations values).
concatMap f is as you observe simply the composition concat . (map f): it applies the given function to every element of the list, and concatenates the results together. Here the function (\w -> map (:w) values) takes a list, and produces the list of lists given by prepending each element of values to that list. For example, if values == [1,2], then:
(\w -> map (:w) values) [1,2] == [[1,1,2], [2,1,2]]
if we map that function over a list of lists, such as
[[], [1], [2]]
then we get (still with values as [1,2]):
[[[1], [2]], [[1,1], [2,1]], [[1,2], [2,2]]]
That is of course a list of lists of lists - but then the concat part of concatMap comes to our rescue, flattening the outermost layer, and resulting in a list of lists as follows:
[[1], [2], [1,1], [2,1], [1,2], [2,2]]
One thing that I hope you might have noticed about this is that the list of lists I started with was not arbitrary. [[], [1], [2]] is the list of all combinations of size 0 or 1 from the starting list [1,2]. This is in fact the first three elements of allCombinations [1,2].
Recall that all we know "for sure" when looking at the definition is that the first element of this list will be []. And the rest of the list is concatMap (\w -> map (:w) [1,2]) (allCombinations [1,2]). The next step is to expand the recursive part of this as [] : concatMap (\w -> map (:w) [1,2]) (allCombinations [1,2]). The outer concatMap
then can see that the head of the list it's mapping over is [] - producing a list starting [1], [2] and continuing with the results of prepending 1 and then 2 to the other elements - whatever they are. But we've just seen that the next 2 elements are in fact [1] and [2]. We end up with
allCombinations [1,2] == [] : [1] : [2] : concatMap (\w -> map (:w) [1,2]) (tail (allCombinations [1,2]))
(tail isn't strictly called in the evaluation process, it's done by pattern-matching instead - I'm trying to explain more by words than explicit plodding through equalities).
And looking at that we know the tail is [1] : [2] : concatMap .... The key point is that, at each stage of the process, we know for sure what the first few elements of the list are - and they happen to be all 0-element lists with values taken from values, followed by all 1-element lists with these values, then all 2-element lists, and so on. Once you've got started, the process must continue, because the function passed to concatMap ensures that we just get the lists obtained by taking every list generated so far and prepending each element of values to the front of it.
If you're still confused by this, look up how to compute the Fibonacci numbers in Haskell. The classic way to get an infinite list of all Fibonacci numbers is:
fib = 1 : 1 : zipWith (+) fib (tail fib)
This is a bit easier to understand than the allCombinations example, but it relies on essentially the same thing - defining a list purely in terms of itself, using lazy evaluation to progressively generate as much of the list as you want, according to a simple rule.
It is not a base case but a special case, and this is not recursion but corecursion,(*) which never stops.
Maybe the following re-formulation will be easier to follow:
allCombs :: [t] -> [[t]]
-- [1,2] -> [[]] ++ [1:[],2:[]] ++ [1:[1],2:[1],1:[2],2:[2]] ++ ...
allCombs vals = concat . iterate (cons vals) $ [[]]
  where
  cons :: [t] -> [[t]] -> [[t]]
  cons vals combs = concat [ [v : comb | v <- vals]
                           | comb <- combs ]

-- iterate             :: (a -> a) -> a -> [a]
-- cons vals           :: [[t]] -> [[t]]
-- iterate (cons vals) :: [[t]] -> [[[t]]]
-- concat              :: [[a]] -> [a]
-- concat . iterate (cons vals) :: [[t]] -> [[t]]
Looks different, does the same thing. Not just produces the same results, but actually is doing the same thing to produce them.(*) The concat is the same concat, you just need to tilt your head a little to see it.
This also shows why the concat is needed here. Each step = cons vals is producing a new batch of combinations, with length increasing by 1 on each step application, and concat glues them all together into one list of results.
The length of each batch is the previous batch's length multiplied by n, where n is the length of vals. This also shows the need to special-case the vals == [] case, i.e. the n == 0 case: 0*x == 0, so the length of each new batch is 0, and an attempt to get one more value from the results would never produce a result, i.e. it would enter an infinite loop. The function is said to become non-productive at that point.
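To see the batch sizes concretely (assuming cons is lifted to the top level so it can be tried in GHCi):

> map length (take 5 (iterate (cons [1,2]) [[]]))
[1,2,4,8,16]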
Incidentally, cons is almost the same as
   == concat [ [v : comb | comb <- combs]
             | v <- vals ]
   == liftA2 (:) vals combs

liftA2 :: Applicative f => (a -> b -> r) -> f a -> f b -> f r
So if the internal order of each step's results is unimportant to you (but see an important caveat at the bottom of the post), this can just be coded as
allCombsA :: [t] -> [[t]]
-- [1,2] -> [[]] ++ [1:[],2:[]] ++ [1:[1],1:[2],2:[1],2:[2]] ++ ...
allCombsA [] = [[]]
allCombsA vals = concat . iterate (liftA2 (:) vals) $ [[]]
(*) well actually, this refers to a bit modified version of it,
allCombsRes vals = res
  where res = [] : concatMap (\w -> map (: w) vals)
                             res
-- or:
allCombsRes vals = fix $ ([] :) . concatMap (\w -> map (: w) vals)
-- where
--   fix g = x where x = g x     -- in Data.Function
Or in pseudocode:
Produce a sequence of values `res` by
FIRST producing `[]`, AND THEN
from each produced value `w` in `res`,
produce a batch of new values `[v : w | v <- vals]`
and splice them into the output sequence
(by using `concat`)
So the res list is produced corecursively, starting from its starting point, [], producing the next elements of it based on the previous one(s) -- either in batches, as in the iterate-based version, or one-by-one as here, taking the input via a back pointer into the results previously produced (taking its output as its input, as the saying goes -- which is a bit deceptive of course, as we take it at a slower pace than we're producing it, or otherwise the process would stop being productive, as was already mentioned above).
But. Sometimes it can be advantageous to produce the input via recursive calls, creating at run time a sequence of functions, each passing its output up the chain, to its caller. Still, the dataflow is upwards, unlike regular recursion which first goes downward towards the base case.
The advantage just mentioned has to do with memory retention. The corecursive allCombsRes as if keeps a back-pointer into the sequence that it itself is producing, and so the sequence can not be garbage-collected on the fly.
But the chain of the stream-producers implicitly created by your original version at run time means each of them can be garbage-collected on the fly as n = length vals new elements are produced from each downstream element, so the overall process becomes equivalent to just k = ceiling $ logBase n i nested loops each with O(1) space state, to produce the ith element of the sequence.
This is much much better than the O(n) memory requirement of the corecursive/value-recursive allCombsRes which in effect keeps a back pointer into its output at the i/n position. And in practice a logarithmic space requirement is most likely to be seen as a more or less O(1) space requirement.
This advantage only happens with the order of generation as in your version, i.e. as in cons vals, not liftA2 (:) vals which has to go back to the start of its input sequence combs (for each new v in vals) which thus must be preserved, so we can safely say that the formulation in your question is rather ingenious.
And if we're after a pointfree re-formulation -- as pointfree can at times be illuminating -- it is
allCombsY values = _Y $ ([] :) . concatMap (\w -> map (: w) values)
where
_Y g = g (_Y g) -- no-sharing fixpoint combinator
So the code is much easier understood in a fix-using formulation, and then we just switch fix with the semantically equivalent _Y, for efficiency, getting the (equivalent of the) original code from the question.
The above claims about space requirements behavior are easily tested. I haven't done so, yet.
See also:
Why does GHC make fix so confounding?
Sharing vs. non-sharing fixed-point combinator

Haskell: Best way to search a list of large size

I have a list of 100K+ elements; this is a finite list. Currently I am using the Data.List function elem. When looking at the Data.List information page there are also find and filter. Would one of these be faster than the elem function?
Just in case we haven't beat the dead horse quite enough...
There is a huge performance difference with different set representations. As an example (which might or might not match your use case) consider taking a list of 200K random elements and a computation to determine the membership of 200 random elements.
I've implemented three obvious ways to do this - using elem over the list, converting to a HashSet and checking for membership, and performing a hybrid of Bloom Filters and a Hash Set. The benchmark shows the list solution is 3 orders of magnitude slower than the hash set, which is about 2x slower than the hybrid.
benchmarking list
mean: 460.7106 ms, lb 459.2952 ms, ub 462.8491 ms, ci 0.950
std dev: 8.741096 ms, lb 6.293703 ms, ub 12.23082 ms, ci 0.950
benchmarking hashset
mean: 175.2730 us, lb 173.9140 us, ub 177.0802 us, ci 0.950
std dev: 7.966790 us, lb 6.391454 us, ub 10.25774 us, ci 0.950
benchmarking bloom+hashset
mean: 88.22402 us, lb 87.35856 us, ub 89.66884 us, ci 0.950
std dev: 5.642663 us, lb 3.793715 us, ub 8.264222 us, ci 0.950
And the code:
import qualified Data.HashSet as Set
import Data.HashSet (Set)
import qualified Data.BloomFilter as BF
import qualified Data.BloomFilter.Easy as BF
import Data.BloomFilter (Bloom)
import Data.BloomFilter.Hash as H2
import Data.Hashable as H1
import Criterion.Main
import System.Random
data MySet a = MS (Set a) (Bloom a)
fromList :: (H2.Hashable a, H1.Hashable a, Ord a) => [a] -> MySet a
fromList as =
  let hs = Set.fromList as
      bf = BF.easyList 0.2 as
  in hs `seq` bf `seq` MS hs bf

member :: (H2.Hashable a, H1.Hashable a, Ord a) => a -> MySet a -> Bool
member e (MS hs bf)
  | BF.elemB e bf = Set.member e hs
  | otherwise     = False

main = do
  list <- take 200000 `fmap` randomsIO :: IO [Int]
  xs   <- take 200 `fmap` randomsIO
  let hs  = Set.fromList list
      bhs = fromList list
  defaultMain
    [ bench "list"          $ nf (map (`elem` list)) xs
    , bench "hashset"       $ nf (map (`Set.member` hs)) xs
    , bench "bloom+hashset" $ nf (map (`member` bhs)) xs
    ]
randomsIO = randoms `fmap` newStdGen
In each case you're going to require a linear traversal of the list. If you're going to be checking for containment repeatedly you should change to a more efficient structure. If you just need to do a single lookup, then O(n) worst case is the best you can get; just look for your element as you create the list.
If your types are ordered (instantiate Ord) then you should use Set from the containers package (it's part of the Haskell Platform).
import qualified Data.Set as Set

mySet :: Set.Set Elem
mySet = Set.fromList bigList -- expensive, but requires only a one-time traversal

-- cheaper!
checkElems :: [Elem] -> Set.Set Elem -> [Bool]
checkElems es s = map (\e -> Set.member e s) es
If Ord isn't possible, you may be able to use hashing instead via the data structures in unordered-containers. In that package we have Data.HashSet, which is effectively identical to Data.Set except that it requires the (sometimes more liberal, sometimes faster) Hashable instance instead of Ord.
If your Elem type is actually Int then Data.IntSet is also a great choice.
Finally, it's worth noting that while Set is an optimized structure for checking membership, it does throw away repeats. If repeats are valuable you will want to examine other data types or some kinds of preprocessing. Sets with repeats are often called Bags and can be simulated (with similar performance characteristics) by using the Data.Map, Data.HashMap, and Data.IntMap modules. In this case, you store your list as a Data.Map.Map Elem Count and check for membership by seeing if a particular key is being used in the result map.
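A minimal sketch of that bag idea (the names Bag, toBag and memberBag are just illustrative):

import qualified Data.Map.Strict as Map

-- Store each element with its multiplicity: repeats are preserved,
-- and membership is still an O(log n) key lookup.
type Bag a = Map.Map a Int

toBag :: Ord a => [a] -> Bag a
toBag xs = Map.fromListWith (+) [ (x, 1) | x <- xs ]

memberBag :: Ord a => a -> Bag a -> Bool
memberBag = Map.member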
Let's look at the definitions:
elem :: Eq a => a -> [a] -> Bool
elem _ [] = False
elem x (y:ys) = x == y || elem x ys
find :: (a -> Bool) -> [a] -> Maybe a
find p = listToMaybe . filter p
filter :: (a -> Bool) -> [a] -> [a]
filter p [] = []
filter p (x:xs) = if p x then x : filter p xs else filter p xs
Quite clearly, find and filter have the same complexity. The elem function has the same basic recursion pattern as filter, so it also has the same complexity. So really, it doesn't matter much which one you use, all of these have worst case O(n) complexity. If you're just testing membership, then elem should be your function of choice. If you're doing a lot more than just that, you might want to consider switching to a Vector, Set, or other data structure better optimized for what you're doing. Lists in Haskell are great for nondeterminism and working with small amounts of data, but when you have a significant number of data points their inefficiencies become very noticeable.
For that many elements, you probably want to use a data structure that does sub-linear searching. My go-to library for Haskell data structures is Edison. GHC ships with Data.Set, which will still be sub-linear, and the Platform has unordered-containers, which is supposed to be quite fast.
They do different things. filter simply removes elements according to a predicate. This is going to be slower than elem in all cases, since it must traverse the entire list and check the predicate even if your element is at the head of the list.
find is just going to return an element, so it's going to be identical in performance for all intents and purposes.
So elem/find is probably around the local maximum for efficiency of searching a list. But it's quite a pitiful local maximum.
On the other hand, if you're manipulating a lot of data, [] is probably the wrong choice. It's absolutely horrible from a cache perspective, and almost all operations are O(n). It is after all, just a dumb singly-linked list. If you're doing a lot of membership checks, consider switching to Data.Set, a very painless transition.

Grouping a list into lists of n elements in Haskell

Is there an operation on lists in the library that makes groups of n elements? For example: n=3
groupInto 3 [1,2,3,4,5,6,7,8,9] = [[1,2,3],[4,5,6],[7,8,9]]
If not, how do I do it?
A quick search on Hoogle showed that there is no such function. On the other hand, as others have replied, there is one in the split package, called chunksOf.
However, you can also write it on your own:
group :: Int -> [a] -> [[a]]
group _ [] = []
group n l
  | n > 0 = (take n l) : (group n (drop n l))
  | otherwise = error "Negative or zero n"
Of course, some parentheses can be removed; I left them here to make clear what the code does:
The base case is simple: whenever the list is empty, simply return the empty list.
The recursive case tests first if n is positive. If n is 0 or lower we would enter an infinite loop and we don't want that. Then we split the list into two parts using take and drop: take gives back the first n elements while drop returns the other ones. Then, we add the first n elements to the list obtained by applying our function to the other elements in the original list.
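For example, a quick check in GHCi (note that the last chunk may be shorter than n):

> group 3 [1,2,3,4,5,6,7,8,9,10]
[[1,2,3],[4,5,6],[7,8,9],[10]]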
This function, among other similar ones, can be found in the popular split package.
> import Data.List.Split
> chunksOf 3 [1,2,3,4,5,6,7,8,9]
[[1,2,3],[4,5,6],[7,8,9]]
You can write one yourself, as Mihai pointed out. But I would use the splitAt function since it doesn't require two passes on the input list like the take-drop combination does:
chunks :: Int -> [a] -> [[a]]
chunks _ [] = []
chunks n xs =
  let (ys, zs) = splitAt n xs
  in  ys : chunks n zs
This is a common pattern - generating a list from a seed value (which in this case is your input list) by repeated iteration. This pattern is captured in the unfoldr function. We can use it with a slightly modified version of splitAt (thanks Will Ness for the more concise version):
chunks n = takeWhile (not . null) . unfoldr (Just . splitAt n)
That is, using unfoldr we generate chunks of n elements while at the same time we shorten the input list by n elements, and we generate these chunks until we get the empty list -- at this point the initial input is completely consumed.
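A quick check, assuming Data.List is imported for unfoldr; the takeWhile is what cuts off the infinite tail of empty chunks that unfoldr with Just would otherwise keep producing:

> takeWhile (not . null) . unfoldr (Just . splitAt 3) $ [1,2,3,4,5,6,7]
[[1,2,3],[4,5,6],[7]]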
Of course, as the others have pointed out, you should use the already existing function from the split module. But it's always good to accustom yourself with the list processing functions in the standard Haskell libraries.
This is often called "chunk" and is one of the most frequently mentioned list operations that is not in base. The split package provides such an operation though; copying and pasting from the Haddock documentation:
> chunksOf 3 ['a'..'z']
["abc","def","ghi","jkl","mno","pqr","stu","vwx","yz"]
Additionally, against my wishes, Hoogle only searches a small set of libraries (those provided with GHC or perhaps the Haskell Platform), but you can explicitly add packages to the search using +PKG_NAME - searching for Int -> [a] -> [[a]] +split gets what you want. Some people use Hayoo for this reason.

Erlang Iterating through list removing one element

I have the following Erlang code:
lists:all(fun(Element) -> somefunction(TestCase -- [Element]) end, TestCase).
Where TestCase is an array. I'm trying to iterate over the list/array with one element missing.
The problem is that this code takes O(N^2) time in the worst case because of the copies of the TestCase array made every time -- is called. There is a clear O(N) solution in a non-functional language.
saved = TestCase[0]
temp = 0
NewTestCase = TestCase[1:]
for a in range(length(NewTestCase)):
somefunction(NewTestCase)
temp = NewTestCase[a]
NewTestCase[a] = saved
saved = temp
... or something like that.
Is there an O(N) solution in erlang?
Of course there is, but it's a little bit more complicated. I am assuming that somefunction/1 is indeed a boolean function and that you want to test whether it returns true for every sub-list.
test_on_all_but_one([], _Acc) -> true;
test_on_all_but_one([E|Rest], Acc) ->
    case somefunction(lists:reverse(Acc, Rest)) of
        true -> test_on_all_but_one(Rest, [E|Acc]);
        false -> false
    end.
This implementation is still O(length(List)^2), as the lists:reverse/2 call will still need O(length(Acc)). If you can modify somefunction/1 to do its calculation on a list split into two parts, then you can replace the previous call somefunction(lists:reverse(Acc, Rest)) with somefunction(Acc, Rest) or something similar and avoid the reconstruction.
The modification depends on the inner workings of somefunction/1. If you want more help with that, give some code!
You can split the list into 2 sublists, if it's acceptable of course.
witerate(Fun, [_Last], Acc) ->
    Fun([], Acc);
witerate(Fun, [Head | Tail], Acc) ->
    Fun(Tail, Acc),
    witerate(Fun, Tail, [Head | Acc]).