I'm trying to learn Erlang using the Karate Chop Kata. I translated the runit test supplied in the kata to an eunit test and coded up a small function to perform the task at hand.
-module(chop).
-export([chop/2]).
-import(lists).
-include_lib("eunit/include/eunit.hrl").
-ifdef(TEST).
chop_test_() -> [
    ?_assertMatch(-1, chop(3, [])),
    ?_assertMatch(-1, chop(3, [1])),
    ?_assertMatch(0, chop(1, [1])),
    %% ...several asserts deleted for brevity...
].
-endif.
chop(N,L) -> chop(N,L,0);
chop(_,[]) -> -1.

chop(_, [], _) -> -1;
chop(N, L, M) ->
    MidIndex = length(L) div 2,
    MidPoint = lists:nth(MidIndex, L),
    {Left, Right} = lists:split(MidIndex, L),
    case MidPoint of
        _ when MidPoint < N -> chop(N, Right, M+MidIndex);
        _ when MidPoint =:= N -> M+MidIndex;
        _ when MidPoint > N -> chop(N, Left, M)
    end.
It compiles OK. Running the test, however, gives (amongst others) the following failure:
::error:badarg
in function erlang:length/1
called as length(1)
in call from chop:chop/3
I've tried different permutations of declaring chop(N,[L],M) .... and using length([L]), but I have not been able to resolve this issue. Any suggestions are welcome.
PS: As you might have guessed, I'm a newbie when it comes to Erlang.
So I'm pressed for time at the moment, but the first problem I see is that
chop(N,L) -> chop(N,L,0);
chop(_,[]) -> -1.
is wrong, because chop(N,L) will always match. Reverse the clauses and see where that gets you.
Beyond that, in the case of the one-element list, nth(0, [1]) will fail: lists:nth/2 is 1-indexed.
The most significant thing to realize is that binary search over a list is the wrong idea in Erlang, because lists:nth/2 is an O(N) operation, not O(1). Try list_to_tuple/1 and do the search on the tuple instead; it is much better suited to the job.
It is also worth trying the array module.
The function erlang:length/1 returns the length of a list.
You called length(1) and 1 isn't a list.
length([1]) would return 1
length([1,2,3,4]) would return 4
etc, etc...
It appears that combining the remarks from Ben Hughes solves the problem. Just for completeness, I'm pasting the test-passing implementation of my binary search below.
chop(_,[]) -> -1;
chop(N,L) ->
    Array = array:from_list(L),
    chop(N, Array, 0, array:size(Array)-1).

chop(N, L, K, K) ->
    Element = array:get(K, L),
    if
        Element == N -> K;
        true -> -1
    end;
chop(_, _, K, M) when M < K -> -1;
chop(N, L, K, M) ->
    MidIndex = K + ((M - K) div 2),
    MidPoint = array:get(MidIndex, L),
    case MidPoint of
        N -> MidIndex;
        _ when MidPoint < N -> chop(N, L, MidIndex+1, M);
        _ -> chop(N, L, K, MidIndex-1)
    end.
I am trying to write a program in SML that takes in the length of a list, the maximum number that will appear in the list, and of course the list itself. It then calculates the length of the smallest "sub-list" that contains all the numbers.
I have tried to use the sliding window approach, with two indexes, front and tail. The front scans first, and when it finds a number it writes into a map how many times it has already seen this number. If the program has found all the numbers, it then calls the tail. The tail scans the list, and if it finds that a number has been seen more than once it takes it off.
The code I have tried so far is the following:
structure Key=
struct
type ord_key=int
val compare=Int.compare
end
fun min x y = if x>y then y else x;
structure mymap = BinaryMapFn ( Key );
fun smallest_sub (n, t, listall, map) =
    let
        val k = 0
        val front = 0
        val tail = 0
        val minimum = n;
        val list1 = listall;
        val list2 = listall;
        fun increase (list1, front, k, ourmap) =
            let
                val number = hd list1
                val elem = mymap.find (ourmap, number)
                val per = getOpt (elem, 0) + 1
                fun decrease (list2, tail, k, ourmap, minimum) =
                    let
                        val number = hd list2
                        val elem = mymap.find (ourmap, number)
                        val per = getOpt (elem, 0) - 1
                        val per1 = getOpt (elem, 0)
                    in
                        if k > t then
                            if (per1 = 1)
                            then decrease (tl list2, tail+1, k-1, mymap.insert (ourmap, number, per), min minimum (front-tail))
                            else decrease (tl list2, tail+1, k, mymap.insert (ourmap, number, per), min minimum (front-tail))
                        else increase (list1, front, k, ourmap)
                    end
            in
                if t > k then
                    if (elem <> NONE)
                    then increase (tl list1, front+1, k, mymap.insert (ourmap, number, per))
                    else increase (tl list1, front+1, k+1, mymap.insert (ourmap, number, per))
                else (if (n > front) then decrease (list2, tail, k, ourmap, minimum) else minimum)
            end
    in
        increase (list1, front, k, map)
    end
fun solve (n,t,acc)= smallest_sub(n,t,acc,mymap.empty)
But when I call it with smallest_sub(10,3,[1,3,1,3,1,3,3,2,2,1]); it does not work. What have I done wrong?
Example: if the input is 1,3,1,3,1,3,3,2,2,1, the program should recognize that the smallest parts of the list that contain all the numbers are 1,3,3,2 and 3,2,2,1, so the output should be 4.
This problem of "smallest sub-list that contains all values" seems to recur in
new questions without a successful answer. This is because it's not a minimal,
complete, and verifiable example.
Because you use a "sliding window" approach, indexing the front and the back
of your input, a list (which takes O(n) time to index an element) is not ideal. You
really do want to use arrays here. If your function must take a list as input, you
can convert it to an array for the purpose of the algorithm.
I'd like to perform a cleanup of the code before answering, because running your
current code by hand is a bit hard when it's so condensed. Here's an
example of how you could abstract out the book-keeping of whether a given
sub-list contains at least one copy of each element in the original list:
Edit: I changed the code below after originally posting it.
structure CountMap = struct
    structure IntMap = BinaryMapFn(struct
        type ord_key = int
        val compare = Int.compare
    end)

    fun count (m, x) =
        Option.getOpt (IntMap.find (m, x), 0)

    fun increment (m, x) =
        IntMap.insert (m, x, count (m, x) + 1)

    fun decrement (m, x) =
        let val c' = count (m, x)
        in if c' <= 1
           then NONE
           else SOME (IntMap.insert (m, x, c' - 1))
        end

    fun flip f (x, y) = f (y, x)

    val fromList = List.foldl (flip increment) IntMap.empty
end
That is, a CountMap is an int IntMap.map, where IntMap fixes the key type of the
map to int, and the int parameter in front of it is the value type of the map: a
count of how many times this value occurred.
When building the initialCountMap below, you use CountMap.increment, and
when you use the "sliding window" approach, you use CountMap.decrement to
produce a new countMap that you can test on recursively.
If you decrement the occurrence below 1, you're looking at a sub-list that
doesn't contain every element at least once; we rule out any solution by
letting CountMap.decrement return NONE.
With all of this machinery abstracted out, the algorithm itself becomes much
easier to express. First, I'd like to convert the list to an array so that
indexing becomes O(1), because we'll be doing a lot of indexing.
fun smallest_sublist_length [] = 0
  | smallest_sublist_length (xs : int list) =
    let val arr = Array.fromList xs
        val initialCountMap = CountMap.fromList xs
        fun go countMap i j =
            let val xi = Array.sub (arr, i)
                val xj = Array.sub (arr, j)
                val decrementLeft = CountMap.decrement (countMap, xi)
                val decrementRight = CountMap.decrement (countMap, xj)
            in
                case (decrementLeft, decrementRight) of
                    (SOME leftCountMap, SOME rightCountMap) =>
                        Int.min (
                            go leftCountMap (i+1) j,
                            go rightCountMap i (j-1)
                        )
                  | (SOME leftCountMap, NONE) => go leftCountMap (i+1) j
                  | (NONE, SOME rightCountMap) => go rightCountMap i (j-1)
                  | (NONE, NONE) => j - i + 1
            end
    in
        go initialCountMap 0 (Array.length arr - 1)
    end
This appears to work, but...
Doing Int.min (go left..., go right...) incurs a cost of O(n^2) stack
memory (in the case where you cannot rule out either being optimal). This is a
good use-case for dynamic programming because your recursive sub-problems have a
common sub-structure, i.e.
go initialCountMap 0 10
|- go leftCountMap 1 10
| |- ...
| `- go rightCountMap 1 9 <-.
`- go rightCountMap 0 9 | possibly same sub-problem!
|- go leftCountMap 1 9 <-'
`- ...
So maybe there's a way to store the result of each recursive sub-problem in a memo table and skip the recursive call when you already know the result for that sub-problem. How to
do memoization in SML is a good question in and of itself. How to do purely
functional memoization in a non-lazy language is an even better one.
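As an aside, here is a minimal sketch of why laziness makes purely functional memoization pleasant. It is written in Haskell (which appears elsewhere on this page, not SML) and memoizes a toy recurrence rather than the sub-list problem above; the table is an ordinary immutable array whose cells refer back to earlier cells, so each sub-problem is evaluated at most once, on demand.
import Data.Array

-- Toy example only: memoizing the Fibonacci recurrence with a lazy array.
memoFib :: Int -> Integer
memoFib n = table ! n
  where
    table = listArray (0, n) (map f [0 .. n])
    f 0 = 0
    f 1 = 1
    f i = table ! (i - 1) + table ! (i - 2)
In a strict language like SML you would instead thread an explicit table (mutable or persistent) through the recursion, which is exactly the difficulty alluded to above.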
Another optimization you could make is that if you ever find a sub-list the
size of the number of unique elements, you need to look no further. This number
is incidentally the number of elements in initialCountMap, and IntMap
probably has a function for finding it.
TL;DR: I want the exact behavior of filter ((== 4) . length) . subsequences. Just using subsequences also creates lists of varying length, which takes a lot of time to process. Since in the end only lists of length 4 are needed, I was thinking there must be a faster way.
I have a list of functions. The list has the type [Wor -> Wor]
The list looks something like this
[f1, f2, f3 .. fn]
What I want is a list of lists of n functions while preserving order like this
input : [f1, f2, f3 .. fn]
argument : 4 functions
output : A list of lists of 4 functions.
The expected output is such that if there's an f1 in the sublist, it'll always be at the head of the list.
If there's an f2 in the sublist and the sublist doesn't have f1, f2 would be at the head. If fn is in the sublist, it'll be last.
In general, if there's an fx in the sublist, it will never be in front of f(x - 1).
Basically preserving the main list's order when generating sublists.
It can be assumed that the length of the list will always be greater than the given argument.
I'm just starting to learn Haskell, so I haven't tried all that much, but so far this is what I have tried:
Generating permutations with the subsequences function and applying filter ((== 4) . length) on it seems to generate the correct permutations (it does preserve order; I was confusing it with my own function).
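For reference, here is what that baseline produces on a small input (length 3 over [1..4], purely as an illustration); note that each sub-list keeps the original order:
> filter ((== 3) . length) (Data.List.subsequences [1,2,3,4])
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]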
So what should I do?
Also if possible, is there a function or a combination of functions present in Hackage or Stackage which can do this? Because I would like to understand the source.
You describe a nondeterministic take:
ndtake :: Int -> [a] -> [[a]]
ndtake 0 _ = [[]]
ndtake n [] = []
ndtake n (x:xs) = map (x:) (ndtake (n-1) xs) ++ ndtake n xs
Either we take an x, and have n-1 more to take from xs; or we don't take the x and have n more elements to take from xs.
Running:
> ndtake 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Update: you wanted efficiency. If we're sure the input list is finite, we can aim at stopping as soon as possible:
ndetake n xs = go (length xs) n xs
  where
    go spare n _  | n > spare  = []
    go spare n xs | n == spare = [xs]
    go spare 0 _  = [[]]
    go spare n [] = []
    go spare n (x:xs) = map (x:) (go (spare-1) (n-1) xs)
                        ++ go (spare-1) n xs
Trying it:
> length $ ndetake 443 [1..444]
444
The former version seems to be stuck on this input, but the latter one returns immediately.
But, it measures the length of the whole list, and needlessly so, as pointed out by @dfeuer in the comments. We can achieve the same improvement in efficiency while retaining a bit more laziness:
ndzetake :: Int -> [a] -> [[a]]
ndzetake n xs | n > 0 =
    go n (length (take n xs) == n) (drop n xs) xs
  where
    go n b p ~(x:xs)
        | n == 0    = [[]]
        | not b     = []
        | null p    = [(x:xs)]
        | otherwise = map (x:) (go (n-1) b p xs)
                      ++ go n b (tail p) xs
Now the last test works instantly with this code as well.
There's still room for improvement here. Just as with the library function subsequences, the search space could be explored even more lazily. Right now we have
> take 9 $ ndzetake 3 [1..]
[[1,2,3],[1,2,4],[1,2,5],[1,2,6],[1,2,7],[1,2,8],[1,2,9],[1,2,10],[1,2,11]]
but it could be finding [2,3,4] before forcing the 5 out of the input list. Shall we leave it as an exercise?
Here's the best I've been able to come up with. It answers the challenge Will Ness laid down to be as lazy as possible in the input. In particular, ndtake m ([1..n]++undefined) will produce as many entries as possible before throwing an exception. Furthermore, it strives to maximize sharing among the result lists (note the treatment of end in ndtakeEnding'). It avoids problems with badly balanced list appends using a difference list. This sequence-based version is considerably faster than any pure-list version I've come up with, but I haven't teased apart just why that is. I have the feeling it may be possible to do even better with a better understanding of just what's going on, but this seems to work pretty well.
Here's the general idea. Suppose we ask for ndtake 3 [1..5]. We first produce all the results ending in 3 (of which there is one). Then we produce all the results ending in 4. We do this by (essentially) calling ndtake 2 [1..3] and adding the 4 onto each result. We continue in this manner until we have no more elements.
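Concretely, with the full definition given below, ndtake 3 [1..5] comes out grouped by last element, exactly as described:
> ndtake 3 [1..5]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4],[1,2,5],[1,3,5],[2,3,5],[1,4,5],[2,4,5],[3,4,5]]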
import qualified Data.Sequence as S
import Data.Sequence (Seq, (|>))
import Data.Foldable (toList)
We will use the following simple utility function. It's almost the same as splitAtExactMay from the 'safe' package, but hopefully a bit easier to understand. For reasons I haven't investigated, letting this produce a result when its argument is negative leads to ndtake with a negative argument being equivalent to subsequences. If you want, you can easily change ndtake to do something else for negative arguments.
-- You could tweak the n <= 0 clause below if you want ndtake
-- to return an empty list in the negative case.
splitAtMay :: Int -> [a] -> Maybe ([a], [a])
splitAtMay n xs
  | n <= 0 = Just ([], xs)
splitAtMay _ [] = Nothing
splitAtMay n (x : xs) = flip fmap (splitAtMay (n - 1) xs) $
  \(front, rear) -> (x : front, rear)
Now we really get started. ndtake is implemented using ndtakeEnding, which produces a sort of "difference list", allowing all the partial results to be concatenated cheaply.
ndtake :: Int -> [t] -> [[t]]
ndtake n xs = ndtakeEnding n xs []

ndtakeEnding :: Int -> [t] -> ([[t]] -> [[t]])
ndtakeEnding 0 _xs = ([]:)
ndtakeEnding n xs = case splitAtMay n xs of
    Nothing -> id -- Not enough elements
    Just (front, rear) ->
      (front :) . go rear (S.fromList front)
  where
    -- For each element, produce a list of all combinations
    -- *ending* with that element.
    go [] _front = id
    go (r : rs) front =
      ndtakeEnding' [r] (n - 1) front
      . go rs (front |> r)
ndtakeEnding doesn't call itself recursively. Rather, it calls ndtakeEnding' to calculate the combinations of the front part. ndtakeEnding' is very much like ndtakeEnding, but with a few differences:
We use a Seq rather than a list to represent the input sequence. This lets us split and snoc cheaply, but I'm not yet sure why that seems to give amortized performance that is so much better in this case.
We already know that the input sequence is long enough, so we don't need to check.
We're passed a tail (end) to add to each result. This lets us share tails when possible. There are lots of opportunities for sharing tails, so this can be expected to be a substantial optimization.
We use foldr rather than pattern matching. Doing this manually with pattern matching gives clearer code, but worse constant factors. That's because the :<|, and :|> patterns exported from Data.Sequence are non-trivial pattern synonyms that perform a bit of calculation, including amortized O(1) allocation, to build the tail or initial segment, whereas folds don't need to build those.
NB: this implementation of ndtakeEnding' works well for recent GHC and containers; it seems less efficient for earlier versions. That might be the work of Donnacha Kidney on foldr for Data.Sequence. In earlier versions, it might be more efficient to pattern match by hand, using viewl for versions that don't offer the pattern synonyms.
ndtakeEnding' :: [t] -> Int -> Seq t -> ([[t]] -> [[t]])
ndtakeEnding' end 0 _xs = (end:)
ndtakeEnding' end n xs = case S.splitAt n xs of
    (front, rear) ->
      ((toList front ++ end) :) . go rear front
  where
    go = foldr go' (const id) where
      go' r k !front = ndtakeEnding' (r : end) (n - 1) front . k (front |> r)
    -- With patterns, a bit less efficiently:
    -- go Empty _front = id
    -- go (r :<| rs) !front =
    --   ndtakeEnding' (r : end) (n - 1) front
    --   . go rs (front :|> r)
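As a quick check of the laziness claim above (the particular list and sizes are just an example), every complete result comes out before the undefined tail is ever touched; only asking for a fourth element would force it:
> take 3 (ndtake 2 ([1,2,3] ++ undefined))
[[1,2],[1,3],[2,3]]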
I am new to OCaml and functional programming as a whole. I am working on a part of an assignment where I must simply return the first n elements of a list. I am not allowed to use List.length.
I feel that what I have written is probably overly complicated for what I'm trying to accomplish. What my code attempts to do is concatenate the front of the list to the end until n is decremented to 1, at which point the head moves a further n-1 spots toward the tail of the list, and then the tail is returned. Again, I realize that there is probably a much simpler way to do this, but I am stumped and probably showing my inability to grasp functional programming.
let rec take n l =
let stopNum = 0 - (n - 1) in
let rec subList n lst =
match lst with
| hd::tl -> if n = stopNum then (tl)
else if (0 - n) = 0 then (subList (n - 1 ) tl )
else subList (n - 1) (tl # [hd])
| [] -> [] ;;
My compiler tells me that I have a syntax error on the last line. I get the same result regardless of whether "| [] -> []" is the last line or the one above it. The syntax error does not exist when I take out the nested subList let. Clearly there is something about nested lets that I am just not understanding.
Thanks.
let rec firstk k xs = match xs with
| [] -> failwith "firstk"
| x::xs -> if k=1 then [x] else x::firstk (k-1) xs;;
You might have been looking for this one.
What you have to do here is to iterate over your initial list l and add elements of this list to an accumulator until n is 0.
let take n l =
let rec sub_list n accu l =
match l with
| [] -> accu (* here the list is now empty, return the partial result *)
| hd :: tl ->
if n = 0 then accu (* if you reach your limit, return your result *)
else (* make the call to the recursive sub_list function:
- decrement n,
- add hd to the accumulator,
- call with the rest of the list (tl)*)
in
sub_list n [] l
Since you're just starting with FP, I suggest you look for the simplest and most elegant solution. What you're looking for is a way to solve the problem for n by building it up from a solution for a smaller problem.
So the key question is: how could you produce the first n elements of your list if you already had a function that could produce the first (n - 1) elements of a list?
Then you need to solve the "base" cases, the cases that are so simple that the answer is obvious. For this problem I'd say there are two base cases: when n is 0, the answer is obvious; when the list is empty, the answer is obvious.
If you work this through you get a fairly elegant definition.
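(For comparison only, since this thread is about OCaml: the same decomposition, with the two base cases and the n-from-(n-1) step, looks like this in Haskell, which is used elsewhere on this page. The name take' is just for illustration.)
take' :: Int -> [a] -> [a]
take' 0 _      = []                     -- base case: nothing more to take
take' _ []     = []                     -- base case: the list ran out
take' n (x:xs) = x : take' (n - 1) xs   -- build the answer for n from the answer for n-1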
Scenario:
If there is an array of integers, I want to get back an array of integers whose total does not exceed 10.
I am a beginner in Haskell and tried the code below. If anyone could correct me, it would be greatly appreciated.
numbers :: [Int]
numbers = [1,2,3,4,5,6,7,8,9,10, 11, 12]
getUpTo :: [Int] -> Int -> [Int]
getUpTo (x:xs) max =
if max <= 10
then
max = max + x
getUpTo xs max
else
x
Input
getUpTo numbers 0
Expected output
[1,2,3,4]
BEWARE: This is not a solution to the knapsack problem :)
A very fast solution I came up with is the following one. Of course solving the full knapsack problem would be harder, but if you only need a quick solution this should work:
import Data.List (sort)

getUpTo :: Int -> [Int] -> [Int]
getUpTo max xs = go (sort xs) 0 []
  where
    go [] sum acc = acc
    go (x:xs) sum acc
      | x + sum <= max = go xs (x + sum) (x:acc)
      | otherwise = acc
By sorting the list before everything else, I can take items from the front one after another, until the maximum would be exceeded; the list built up to that point is then returned.
Edit: as a side note, I swapped the order of the first two arguments because this way it should be more useful for partial application.
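For example, with the numbers list from the question (note that, because elements are pushed onto the accumulator, the result comes back in reverse order):
> getUpTo 10 [1,2,3,4,5,6,7,8,9,10,11,12]
[4,3,2,1]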
For educational purposes (and since I felt like explaining something :-), here's a different version, which uses more standard functions. As written it is slower, because it computes a number of sums, and doesn't keep a running total. On the other hand, I think it expresses quite well how to break the problem down.
getUpTo :: [Int] -> [Int]
getUpTo = last . filter (\xs -> sum xs <= 10) . Data.List.inits
I've written the solution as a 'pipeline' of functions; if you apply getUpTo to a list of numbers, Data.List.inits gets applied to the list first, then filter (\xs -> sum xs <= 10) gets applied to the result, and finally last gets applied to the result of that.
So, let's see what each of those three functions do. First off, Data.List.inits returns the initial segments of a list, in increasing order of length. For example, Data.List.inits [2,3,4,5,6] returns [[],[2],[2,3],[2,3,4],[2,3,4,5],[2,3,4,5,6]]. As you can see, this is a list of lists of integers.
Next up, filter (\xs -> sum xs <= 10) goes through these lists of integers in order, keeping them if their sum is at most 10, and discarding them otherwise. The first argument of filter is a predicate which, given a list xs, returns True if the sum of xs is at most 10. This may be a bit confusing at first, so an example with a simpler predicate is in order, I think. filter even [1,2,3,4,5,6,7] returns [2,4,6] because those are the even values in the original list. In the earlier example, the lists [], [2], [2,3], and [2,3,4] all have a sum of at most 10, but [2,3,4,5] and [2,3,4,5,6] don't, so the result of filter (\xs -> sum xs <= 10) . Data.List.inits applied to [2,3,4,5,6] is [[],[2],[2,3],[2,3,4]], again a list of lists of integers.
The last step is the easiest: we just return the last element of the list of lists of integers. This is in principle unsafe, because what should the last element of an empty list be? In our case, we are good to go, since inits always returns the empty list [] first, which has sum 0, which is less than ten - so there's always at least one element in the list of lists we're taking the last element of. We apply last to a list which contains the initial segments of the original list which sum to at most 10, ordered by length. In other words: we return the longest initial segment which sums to at most 10 - which is what you wanted!
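Trying the composed definition on the numbers list from the question, as a quick check:
> getUpTo [1,2,3,4,5,6,7,8,9,10,11,12]
[1,2,3,4]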
If there are negative numbers in your numbers list, this way of doing things can return something you don't expect: getUpTo [10,4,-5,20] returns [10,4,-5] because that is the longest initial segment of [10,4,-5,20] which sums to at most 10, even though [10,4] is above 10. If this is not the behaviour you want, and expect [10], then you must replace filter by takeWhile - that essentially stops the filtering as soon as the first element for which the predicate returns False is encountered. E.g. takeWhile even [2,4,1,3,6,8,5,7] evaluates to [2,4]. So in our case, using takeWhile stops the moment the sum goes over 10, not trying longer segments.
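To make the difference concrete, here is the takeWhile variant applied to that example (just a sketch of the substitution described above):
> last . takeWhile (\xs -> sum xs <= 10) . Data.List.inits $ [10,4,-5,20]
[10]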
By writing getUpTo as a composition of functions, it becomes easy to change parts of your algorithm: if you want the longest initial segment that sums exactly to 10, you can use last . filter (\xs -> sum xs == 10) . Data.List.inits. Or if you want to look at the tail segments instead, use head . filter (\xs -> sum xs <= 10) . Data.List.tails; or to take all the possible sublists into account (i.e. an inefficient knapsack solution!): last . filter (\xs -> sum xs <= 10) . Data.List.sortBy (\xs ys -> length xs `compare` length ys) . Control.Monad.filterM (const [False,True]) - but I'm not going to explain that here, I've been rambling long enough!
There is an answer with a fast version; however, I thought it might also be instructive to see the minimal change necessary to your code to make it work the way you expect.
numbers :: [Int]
numbers = [1,2,3,4,5,6,7,8,9,10, 11, 12]
getUpTo :: [Int] -> Int -> [Int]
getUpTo (x:xs) max =
  if max < 10 -- (<), not (<=)
  then
    -- return a list that still contains x;
    -- can't reassign to max, but can send a
    -- different value on to the next
    -- iteration of getUpTo
    x : getUpTo xs (max + x)
  else
    [] -- don't want to return any more values here
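With these changes, the call from the question gives the expected result (using the numbers list defined above):
> getUpTo numbers 0
[1,2,3,4]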
I am fairly new to Haskell. I just started with it a few hours ago, and as such I see in every question a challenge that helps me get out of the imperative way of thinking and an opportunity to practice my recursion thinking :)
I gave some thought to the question and I came up with this, perhaps, naive solution:
upToBound :: (Integral a) => [a] -> a -> [a]
upToBound (x:xs) bound =
  let
    summation _ [] = []
    summation n (m:ms)
      | n + m <= bound = m:summation (n + m) ms
      | otherwise = []
  in
    summation 0 (x:xs)
I know there is already a better answer, I just did it for the fun of it.
You may notice that I changed the signature of the original invocation: I thought it was pointless to provide an initial zero to the outer function invocation, since I can only assume it will be zero at first. As such, in my implementation I hid the seed from the caller and provided, instead, the maximum bound, which is more likely to change.
upToBound [1,2,3,4,5,6,7,8,9,0] 10
Which outputs: [1,2,3,4]
I'm new to Erlang. I wonder how to write a function which returns the first N elements of a list?
I've tried:
take([],_) -> [];
take([H|T],N) when N > 0 -> take([H,hd(L)|tl(T)], N-1);
take([H|T],N) when N == 0 -> ... (I'm stuck here...)
Any hints? Thanks.
Update: I know there's a function called "sublist", but I need to figure out how to write that function on my own.
I finally figured out the answer:
-module(list).
-export([take/2]).
take(List,N) -> take(List,N,[]).
take([],_,[]) -> [];
take([],_,List) -> List;
take([H|T], N, List) when N > 0 -> take(T, N-1, lists:append(List,[H]));
take([H|T], N, List) when N == 0 -> List.
In Erlang, take is spelled lists:sublist:
L = [1, 2, 3, 4],
lists:sublist(L, 3). % -> [1, 2, 3]
A simple solution is:
take([H|T], N) when N > 0 ->
    [H|take(T, N-1)];
take(_, 0) -> [].
This will generate an error if there are not enough elements in the list.
When you use an accumulator as you are doing, you do not usually append elements to the end of it, as this is very inefficient (you copy the whole list each time). You would normally push elements onto it with [H|List]. The accumulator will then be in reverse order, but you then do a lists:reverse(List) to return the elements in the right order.
take(List, N) -> take(List, N, []).

take([H|T], N, Acc) when N > 0 ->
    take(T, N-1, [H|Acc]);
take(_, 0, Acc) -> lists:reverse(Acc).
The accumulator version is tail recursive, which is a Good Thing, but you need to do an extra reverse, which removes some of the benefit. I think the first version is clearer; there is no clear case for either.