Lazy vs. eager evaluation and doubly-linked list building

I can't sleep! :)
I've written a small program that builds a doubly-linked list in Haskell. The key language property that made it possible was lazy evaluation (see the code below). My question is: can I do the same in a pure functional language with eager evaluation or not? In any case, what properties must an eager functional language have to be able to build such a structure (impurity?)?
import Data.List

data DLList a = DLNull
              | DLNode { prev :: DLList a
                       , x    :: a
                       , next :: DLList a
                       }
  deriving (Show)

walkDLList :: (DLList a -> DLList a) -> DLList a -> [a]
walkDLList _ DLNull           = []
walkDLList f n@(DLNode _ x _) = x : walkDLList f (f n)
-- Returns first and last items.
makeDLList :: [a] -> (DLList a, DLList a)
makeDLList xs = let (first, last) = step DLNull xs in (first, last)
  where
    step prev []       = (DLNull, prev)
    -- Here I use laziness: 'next' is not built yet, it's a thunk.
    step prev (x : xs) = let this         = DLNode prev x next
                             (next, last) = step this xs
                         in (this, last)
testList :: [Int] -> IO ()
testList l = let (first, last) = makeDLList l
                 byNext        = walkDLList next first
                 byPrev        = walkDLList prev last
             in do
                  putStrLn $ "Testing: " ++ show l
                  print byNext
                  print byPrev

main :: IO ()
main = do
  testList []
  testList [1, 2, 3, 4]
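For reference, the expected output (assuming the code above, compiled as given):

Testing: []
[]
[]
Testing: [1,2,3,4]
[1,2,3,4]
[4,3,2,1]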

A doubly-linked list can be implemented in a purely functional way in an eager language as a zipper on a singly-linked list. See, for example, Rosetta Code > Doubly-linked list > OCaml > Functional.
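To give a flavor of the idea, here is a minimal Haskell sketch of such a zipper (illustrative names, not the Rosetta Code OCaml): the elements before the focus are stored in reverse, so moving in either direction is an O(1) operation.

data Zipper a = Zipper [a] a [a] deriving (Show)

left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (p:ps) x ns) = Just (Zipper ps p (x:ns))  -- shift focus one step back
left  _                    = Nothing
right (Zipper ps x (n:ns)) = Just (Zipper (x:ps) n ns)  -- shift focus one step forward
right _                    = Nothing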

As long as a language has something like closures or lambdas, you can always simulate laziness. You could rewrite that code even in Java (without mutating variables, etc.); you just need to wrap every "lazy" operation in something like

interface Thunk<A> {
    A eval();
}

Of course this would look terrible, but it is possible.

In the non-backtracking subset of Prolog, which can be seen as an explicitly set-once, eager, pure functional language, you can build doubly-linked lists easily. It's referential transparency that makes this hard in Haskell: it forbids Prolog's explicit setting of named, explicitly not-yet-set logical variables, and instead forces Haskell to achieve the same effect in the warped way of "tying the knot". I think.
Plus, there really isn't much difference between Haskell's guarded recursion under lazy evaluation and Prolog's open-ended lists built in tail-recursion-modulo-cons fashion. IMO. Here, for instance, is an example of lazy lists in Prolog. The memoized shared storage is used as a universal access mediator, so the results of previous calculations can be arranged to be cached.
Come to think of it, you can use C in a restricted manner as an eager pure functional language, if you never reset any variable nor any pointer once it is set. You still have null pointers, just as Prolog has variables, so it too is explicitly set-once. And of course you can build doubly-linked lists with it.
So the only question that remains is: do you admit such set-once languages as pure?

Haskell self-referential List termination

EDIT: see this followup question that simplifies the problem I am trying to identify here, and asks for input on a GHC modification proposal.
So I was trying to write a generic breadth-first search function and came up with the following:
bfs :: (a -> Bool) -> (a -> [a]) -> [a] -> Maybe a
bfs predf expandf xs = find predf bfsList
  where bfsList = xs ++ concatMap expandf bfsList
which I thought was pretty elegant; however, in the does-not-exist case it blocks forever.
After all the terms have been expanded to [], concatMap will never return another item, so concatMap is blocking, waiting for another item from itself? Could Haskell be made smart enough to realize that the list generation is blocked reading the self-reference, and terminate the list?
The best replacement I've been able to come up with isn't quite as elegant, since I have to handle the termination case myself:
where bfsList = concat . takeWhile (not . null) $ iterate (concatMap expandf) xs
For concrete examples, the first search terminates with success, and the second one blocks:
bfs (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 3*2**8]
bfs (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 2**8]
Edited to add a note to explain my bfs' solution below.
The way your question is phrased ("could Haskell be made smart enough"), it sounds like you think the correct value for a computation like:
bfs (\x -> False) (\x -> []) []
given your original definition of bfs should be Nothing, and Haskell is just failing to find the correct answer.
However, the correct value for the above computation is bottom. Substituting the definition of bfs (and simplifying the [] ++ expression), the above computation is equal to:
find (\x -> False) bfsList
  where bfsList = concatMap (\x -> []) bfsList
Evaluating find requires determining if bfsList is empty or not, so it must be forced to weak head normal form. This forcing requires evaluating the concatMap expression, which also must determine if bfsList is empty or not, forcing it to WHNF. This forcing loop implies bfsList is bottom, and therefore so is find.
Haskell could be smarter in detecting the loop and giving an error, but it would be incorrect to return [].
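In fact, GHC's runtime can already catch the simplest form of this, where a thunk ends up demanding itself. A tiny sketch (the <<loop>> detection is best-effort, not guaranteed):

loopy :: [Int]
loopy = concatMap (\_ -> []) loopy
-- Forcing this to WHNF, e.g. with `null loopy`, either hangs
-- or aborts with "<<loop>>", depending on the runtime.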
Ultimately, this is the same thing that happens with:
foo = case foo of [] -> []
which also loops infinitely. Haskell's semantics imply that this case construct must force foo, and forcing foo requires forcing foo, so the result is bottom. It's true that if we considered this definition an equation, then substituting foo = [] would "satisfy" it, but that's not how Haskell semantics work, for the same reason that:
bar = bar
does not have value 1 or "awesome", even though these values satisfy it as an "equation".
So, the answer to your question is, no, this behavior couldn't be changed so as to return an empty list without fundamentally changing Haskell semantics.
Also, as an alternative that looks pretty slick -- even with its explicit termination condition -- maybe consider:
bfs' :: (a -> Bool) -> (a -> [a]) -> [a] -> Maybe a
bfs' predf expandf = look
  where look [] = Nothing
        look xs = find predf xs <|> look (concatMap expandf xs)
This uses the Alternative instance for Maybe, which is really very straightforward:
Just x  <|> ...      -- yields `Just x`
Nothing <|> Just y   -- yields `Just y`
Nothing <|> Nothing  -- yields `Nothing` (doesn't happen above)
so look checks the current set of values xs with find, and if it fails and returns Nothing, it recursively looks in their expansions.
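For reference, bfs' gives the expected answers on the question's two examples (a hypothetical GHCi session; find comes from Data.List and <|> from Control.Applicative):

~> bfs' (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 3*2**8]
Just 3.0
~> bfs' (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 2**8]
Nothing

The second call terminates because every value eventually drops below 1 and expands to [], so some level is empty and look returns Nothing.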
As a silly example that makes the termination condition look less explicit, here's its double-monad (Maybe in implicit Reader) version using listToMaybe as the terminator! (Not recommended in real code.)
bfs'' :: (a -> Bool) -> (a -> [a]) -> [a] -> Maybe a
bfs'' predf expandf = look
  where look  = listToMaybe *>* find predf *|* (look . concatMap expandf)
        (*>*) = liftM2 (>>)
        (*|*) = liftM2 (<|>)
        infixl 1 *>*
        infixl 3 *|*
How does this work? Well, it's a joke. As a hint, the definition of look is the same as:
where look xs = listToMaybe xs >>
                (find predf xs <|> look (concatMap expandf xs))
We produce the results list (queue) in steps. On each step we consume what we have produced on the previous step. When the last expansion step added nothing, we stop:
bfs :: (a -> Bool) -> (a -> [a]) -> [a] -> Maybe a
bfs predf expandf xs = find predf queue
  where
    queue   = xs ++ gen (length xs) queue               -- start the queue with `xs`, and
    gen 0 _ = []                                        -- when nothing's in the queue, stop;
    gen n q = let next = concatMap expandf (take n q)   -- take n elements from the queue,
              in next ++                                -- process them, enqueue the results,
                 gen (length next) (drop n q)           -- advance by `n` and continue
Thus we get
~> bfs (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 3*2**8]
Just 3.0
~> bfs (==3) (\x -> if x<1 then [] else [x/2, x/5]) [5, 2**8]
Nothing
One potentially serious flaw in this solution is that if any expandf step produces an infinite list of results, it will get stuck calculating its length, totally needlessly so.
In general, just introduce a counter and increment it by the length of the solutions produced at each expansion step (length . concatMap expandf or something), decrementing it by the amount that was consumed. When it reaches 0, do not attempt to consume anything anymore, because there is nothing to consume at that point; instead, terminate.
This counter serves in effect as a pointer back into the queue being constructed. A value of n indicates that the place where the next result will be placed is n notches ahead of the place in the list from which the input is taken. 1 thus means that the next result is placed directly after the input value.
The following code can be found in Wikipedia's article about corecursion (search for "corecursive queue"):
data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = queue
  where
    queue = tree : gen 1 queue                           -- have one value in the queue from the start
    gen 0 _                    = []
    gen len (Leaf _       : s) = gen (len-1) s           -- consumed one, produced none
    gen len (Branch _ l r : s) = l : r : gen (len+1) s   -- consumed one, produced two
This technique is natural in Prolog, with its top-down list instantiation and logical variables that can be explicitly in a not-yet-set state. See also tail recursion modulo cons.
gen in bfs can be re-written to be more incremental, which is usually a good thing to have:
gen 0 _      = []
gen n (y:ys) = let next = expandf y
               in next ++ gen (n - 1 + length next) ys
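For completeness, here is the incremental gen slotted back into the earlier definition (a sketch; bfsInc is just an illustrative name, and find comes from Data.List):

bfsInc :: (a -> Bool) -> (a -> [a]) -> [a] -> Maybe a
bfsInc predf expandf xs = find predf queue
  where
    queue        = xs ++ gen (length xs) queue   -- corecursive queue, as before
    gen 0 _      = []                            -- counter at 0: nothing left to consume
    gen n (y:ys) = let next = expandf y          -- expand one element at a time
                   in next ++ gen (n - 1 + length next) ys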
bfsList is defined recursively, which is not in itself a problem in Haskell. It does, however, produce an infinite list, which, again, isn't in itself a problem, because Haskell is lazily evaluated.
As long as find eventually finds what it's looking for, it's not an issue that there's still an infinity of elements, because at that point evaluation stops (or, rather, moves on to do other things).
AFAICT, the problem in the second case is that the predicate is never matched, so bfsList just keeps producing new elements, and find keeps on looking.
After all the terms have been expanded to [] concatMap will never return another item
Are you sure that's the correct diagnosis? As far as I can tell, with the lambda expressions supplied above, each input element always expands to two new elements, never to []. The list is, however, infinite, so if the predicate goes unmatched, the function will evaluate forever.
Could Haskell be made smart enough to realize the list generation is blocked reading the self-reference and terminate the list?
It'd be nice if there were a general-purpose algorithm to determine whether or not a computation will eventually complete. Alas, as both Turing and Church proved (independently of each other) in 1936, no such algorithm can exist. This is also known as the halting problem. I'm not a mathematician, though, so I may be wrong, but I think it applies here as well...
The best replacement I've been able to come up with isn't quite as elegant
Not sure about that one... If I try to use it instead of the other definition of bfsList, it doesn't compile... Still, I don't think the problem is the empty list.

How can I get a value out of a data type in Haskell

I have the following data:
data LinkedList a = Node a (LinkedList a) | Empty deriving (Show)
And I would like to know how to get a single value out of it, without pattern matching.
So in a C-based language: list.value
Jared Loomis, it sounds like you want access to the different parts of your LinkedList without having to write your own helper functions. In that light, there is an alternative way of writing data constructors (record syntax) that writes these helper functions for you:
data LinkedList a = Node { nodeHead :: a, rest :: LinkedList a } | Empty
  deriving (Show)
example usage:
*Main> let example = Node 1 (Node 2 (Node 3 Empty))
*Main> example
Node {nodeHead = 1, rest = Node {nodeHead = 2, rest = Node {nodeHead = 3, rest = Empty}}}
*Main> nodeHead example
1
*Main> nodeHead . rest $ example
2
*Main> nodeHead . rest . rest $ example
3
Careful, though: nodeHead and rest are partial functions, and throw an exception when used on Empty:
*Main> nodeHead Empty
*** Exception: No match in record selector nodeHead
*Main> rest Empty
*** Exception: No match in record selector rest
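If you prefer total functions, a small sketch of safe variants that wrap the result in Maybe (the names here are illustrative, not from any library):

safeHead :: LinkedList a -> Maybe a
safeHead Empty      = Nothing
safeHead (Node x _) = Just x

safeRest :: LinkedList a -> Maybe (LinkedList a)
safeRest Empty       = Nothing
safeRest (Node _ xs) = Just xs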
If you want something with postfix syntax, I would recommend the lens package.
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data LinkedList' a = Node' { _nodeHead' :: a, _rest' :: LinkedList' a } | Empty'
  deriving (Show)

makeLenses ''LinkedList'
*Main> example ^? rest'
Just (Node' {_nodeHead' = 2, _rest' = Node' {_nodeHead' = 3, _rest' = Empty'}})
*Main> example ^? rest' . nodeHead'
Just 2
Rather than present a Haskell solution to your question, I will present a more realistic comparison with C and suggest that you don't really want what you seem to be asking for:
struct list {
    int value;
    struct list *next;
};

int main(void) {
    struct list *list = NULL;
    int val;

    /* Goodbye, cruel world! */
    val = list->value;

    /* If I had "pattern-matched"... */
    if (list == NULL) {
        val = 0;
    } else {
        val = list->value;
    }

    return 0;
}
When you don't check for the NULL case in C (which corresponds to pattern matching in Haskell), you crash with a SEGFAULT at some point in the execution of your program, instead of getting a compile error.
In other words, you can't get a value out of a 'possibly empty' recursive data type without doing case analysis in C either! At least not if you value the stability of your program. Haskell not only insists that you do the right thing, but provides convenient syntax to help you do so!
As mentioned in other answers, record definition syntax provides you with convenient projections (i.e. accessor functions) automatically, which have a bit of a trade-off in comparison to accessing struct members in C: The projections are first-class, and can thus be used as parameters and return values; but they are in the same namespace as all other functions, which can lead to unfortunate name clashes.
So, in the case of straightforward data types (i.e. non-recursive) the syntax for accessing members is roughly at the same level of convenience: whole.part for C-like languages vs. part whole for Haskell.
For recursive types (like the one in your example) where one or more members reference possibly-empty instances of the same type, case analysis is necessary in either language before a value can be extracted. Here, you will either need to wrap the field access in your C-like language with a case analysis or possibly an exception handler. In Haskell, you have various forms of syntactic sugar for pattern matching that tend to be much more concise.
Furthermore, see the answer regarding Monads for ways of providing even more convenience for working with 'possibly empty' types in Haskell, hiding most of the intermediate pattern matching for multi-step computations inside library functions.
To wrap this up: My point is that as you take the time to learn the patterns and idioms of Haskell, you will likely find yourself missing the ways of doing things in C-like languages less and less.
Let's use monads to do what you'd like. Monads are great because when defining them, you get to redefine what ; and = mean for you. (This being Haskell, we use newlines and indentation to indicate where ; goes and <- to differentiate from the permanent definition =.)
I'll have to use pattern matching to make the instances, because I've got nothing else to go on yet:
instance Monad LinkedList where
  Empty       >>= f = Empty
  (Node a as) >>= f = f a `andthen` (as >>= f)
  return a          = Node a Empty
The binding operator >>= is the configurable plumbing behind the <- arrow. Here we've chosen ; to mean next element, using a helper function andthen in the works:
andthen :: LinkedList a -> LinkedList a -> LinkedList a
Empty         `andthen` list    = list
(Node a list) `andthen` therest = Node a (list `andthen` therest)
Now we can use monad notation to grab a value at a time. For example, let's apply a function to the elements in a linked list:
applyToElements :: (a -> b) -> LinkedList a -> LinkedList b
applyToElements f list = do
  val <- list
  return (f val)
ghci> applyToElements ( ++ ", yeah" ) (Node "Hello" (Node "there" Empty))
Node "Hello, yeah" (Node "there, yeah" Empty)
A simpler way
I simply wouldn't have defined it that way. I'd have used pattern matching directly:
applyToElements :: (a -> b) -> LinkedList a -> LinkedList b
applyToElements f Empty         = Empty
applyToElements f (Node a list) = Node (f a) (applyToElements f list)
and then declared
instance Functor LinkedList where
  fmap = applyToElements
because the usual name for a function that applies another function elementwise is fmap.
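One caveat for modern toolchains: since GHC 7.10, a Monad instance also requires an Applicative instance (and a Functor one, which we now have). A minimal sketch that makes the earlier Monad instance compile:

instance Applicative LinkedList where
  pure a    = Node a Empty              -- same as the Monad's return
  fs <*> xs = fs >>= \f -> fmap f xs    -- apply each function to each value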
More complicated
Monads can be good for other things though, and sometimes it's the best way of expressing something:
combinationsWith :: (a -> b -> c) -> LinkedList a -> LinkedList b -> LinkedList c
combinationsWith f list otherlist = do  -- do automatically traverses the structure
  val      <- list                      -- works like val = list.value
  otherval <- otherlist                 -- otherval = otherlist.value
  return (f val otherval)               -- once for each value/othervalue
Because we chose to use andthen when we defined <- for LinkedList, using two lists works through the first list and then the second, in a sort of nested-subloop way: the otherval values change more frequently than the val values, so we get:
ghci> combinationsWith (+) (Node 3 (Node 4 Empty)) (Node 10 (Node 100 Empty))
Node 13 (Node 103 (Node 14 (Node 104 Empty)))
As a rule, you don't need to extract the values. But if you really wish to, use the Comonad class's extract function:
class Functor w => Comonad w where
  extract :: w a -> a
  ...
Often Foldable, Traversable, Monoid, Monad, or zippers are more useful.
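Note that LinkedList as defined can't itself be a Comonad, since extract has nothing to return for Empty. A sketch of what extract looks like for a non-empty variant (an illustrative type, not from any library):

data NEList a = NENode a (Maybe (NEList a)) deriving (Show)

extractNE :: NEList a -> a   -- plays the role of Comonad's extract
extractNE (NENode x _) = x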
For the sake of completeness, let me mention something I've heard called the "dot hack":
Prelude> data LinkedList a = Node { nodeHead :: a, nodeRest :: LinkedList a} | Empty deriving (Show)
Prelude> let example = Node 1 (Node 2 (Node 3 Empty)) :: LinkedList Int
Prelude> let (.) = flip ($)
Prelude> example.nodeRest.nodeHead
2
It's simply the realization that C-style accessing with . is the same as applying accessor functions to the object mentioned before it, which in Haskell means turning around the arguments of the application operator ($).
Of course, one probably wouldn't use this in real code, since it would confuse other people and you'd lose the composition operator.
I'd just pattern match.
llHead :: LinkedList a -> a
llHead Empty      = error "kaboom"
llHead (Node x _) = x
If you want the element at a specific index, try something like this (which also uses pattern matching):
llIdx :: LinkedList a -> Int -> a
llIdx l i = go l i
  where go Empty       _ = error "out of bounds"
        go (Node x _)  0 = x
        go (Node _ xs) j = go xs (j - 1)
Some assurance that this works:
import Test.QuickCheck

fromList [] = Empty
fromList (x:xs) = Node x (fromList xs)

allIsGood xs i = llIdx (fromList xs) i == xs !! i

llIdxWorksLikeItShould (NonEmpty xs) =
  let reasonableIndices = choose (0, length xs - 1) :: Gen Int
  in forAll reasonableIndices (allIsGood xs)

-- > quickCheck llIdxWorksLikeItShould
-- +++ OK, passed 100 tests.

SML: How can I pass a function a list and return the list with all negative reals removed?

Here's what I've got so far...
fun positive l1 = positive(l1, [], [])
  | positive (l1, p, n) =
      if hd(l1) < 0
      then positive(tl(l1), p, n @ [hd(l1)])
      else if hd(l1) >= 0
      then positive(tl(l1), p @ [hd(l1)], n)
      else if null (hd(l1))
      then p
Yes, this is for my educational purposes. I'm taking an ML class in college and we had to write a program that would return the biggest integer in a list and I want to go above and beyond that to see if I can remove the positives from it as well.
Also, if possible, can anyone point me to a decent ML book or primer? Our class text doesn't explain things well at all.
You fail to mention that your code doesn't typecheck.
Your first function clause just has the variable l1, which is used in the recursive call. However, there it is used as the first element of the triple that is given as the argument. This doesn't really go hand in hand with the Hindley–Milner type system that SML uses. This is perhaps better seen through the following informal reasoning:
Let's start by assuming that l1 has the type 'a, and thus the function must take arguments of that type and return something unknown: 'a -> .... However, on the right-hand side you create an argument (l1, [], []), which must have the type 'a * 'b list * 'c list. But since it is passed as an argument to the function, that must also mean that 'a equals 'a * 'b list * 'c list, which clearly cannot be the case.
Clearly this was not your original intent. It seems that your intent was to have a function that takes a list as its argument and, at the same time, a recursive helper function which takes two extra accumulator arguments, namely lists of the positive and negative numbers from the original list.
To do this, you at least need to give your helper function another name, so that its definition won't rebind the definition of the original function.
Then you have some options as to which scope this helper function should be in. In general, if it doesn't make any sense to call the helper function from anywhere other than the "main" function, then it should not be placed in a scope outside the "main" function. This can be done using a let binding like this:
fun positive xs =
    let
      fun positive' ys p n = ...
    in
      positive' xs [] []
    end
This way the helper function positive' can't be called from outside the positive function.
With this taken care of, there are some more issues with your original code:
Since you are only returning the list of positive integers, there is no need to keep track of the negative ones.

You should be using pattern matching to decompose the list elements. This way you eliminate the use of taking the head and tail of the list, and also the need to verify whether there actually is a head and a tail in the list:
fun foo [] = ...       (* input list is empty *)
  | foo (x::xs) = ...  (* x is now the head, and xs is the tail *)
You should not use the append operator (@) when you can avoid it (which you always can). The problem is that it has a terrible running time when there is a huge list on the left-hand side and a small list on the right-hand side (which is often the case for the right-hand side, as it is mostly used to append a single element). Thus it should in general be considered bad practice to use it.

However, there is a very simple solution to this: always cons (::) the element onto the front of the list (constructing the list in reverse order), and then just reverse the list when returning it as the last thing (putting it back in the expected order):
fun foo [] acc = rev acc
  | foo (x::xs) acc = foo xs (x::acc)
Given these small notes, we end up with a function that looks something like this:
fun positive xs =
    let
      fun positive' [] p = rev p
        | positive' (y::ys) p =
            if y < 0 then
              positive' ys p
            else
              positive' ys (y :: p)
    in
      positive' xs []
    end
Have you learned about List.filter? It might be appropriate here - it takes a function (which is a predicate) of type 'a -> bool and a list of type 'a list, and returns a list consisting of only the elements for which the predicate evaluates to true. For example:
List.filter (fn x => Real.>= (x, 0.0)) [1.0, 4.5, ~3.4, 42.0, ~9.0]
Your existing code won't work because you're comparing against integers, using the int version of <. The expression hd(l1) < 0 works over a list of int, not a list of real. Numeric literals are not automatically coerced by Standard ML; one must explicitly write 0.0, and use Real.< (hd(l1), 0.0) for the test.
If you don't want to use filter from the standard library, you could consider how one might implement filter yourself. Here's one way:
fun filter f [] = []
  | filter f (h::t) =
      if f h
      then h :: filter f t
      else filter f t

How to express a filter that relies on adjacent elements in a list, functionally

Several times I've wanted to traverse a list and pick out elements that have some property which also relies on, say, the next element in the list. For a simple example I have some code which counts how many times a function f changes sign over a specified interval [a,b]. This is fairly obvious in an imperative language like C:
for (double x = a; x <= b; x += (b-a)/n) {
    s*f(x) > 0 ? : printf("%e %e\n", x, f(x)), s = sgn(f(x));
}
In Haskell my first instinct was to zip the list with its tail and then apply the filter and extract the elements with fst or whatever. But that seems clumsy and inefficient, so I shoehorned it into being a fold:
signChanges f a b n = tail $
  foldl (\(x:xs) y -> if (f x * f y) < 0 then y:x:xs else x:xs) [a] [a, a+(b-a)/n .. b]
Either way I feel there is a "right" way to do this (as there so often is in Haskell) and that I don't know (or just haven't realised) what it is. Any help with how to express this in a more idiomatic or elegant way would be greatly appreciated, as would advice on how, in general, to find the "right" way to do things.
Zipping is efficient if you compile with -O2, as list fusion kicks in. Not having to resort to folds in cases like this is one of the essential advantages of Haskell, since it improves modularity.
So zipping is the right way to do it.
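For concreteness, a minimal sketch of that zip-based version, assuming the same shape as the question's signChanges (with n taken as an Int here): pair each sample with its successor and keep the points where the sign flips.

signChanges :: (Double -> Double) -> Double -> Double -> Int -> [Double]
signChanges f a b n = [ y | (x, y) <- zip xs (tail xs), f x * f y < 0 ]
  where xs = [a, a + (b - a) / fromIntegral n .. b]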
Here is a "version" using a paramorphism (not quite the same as the question - but it should illustrate a paramorphism usefully enough), first we need para as it is not in the standard libraries:
-- paramorphism (generalizes fold)
para :: (a -> ([a], b) -> b) -> b -> [a] -> b
para phi b = step
  where step []     = b
        step (x:xs) = phi x (xs, step xs)
Using a paramorphism is much like using a fold, but as well as the accumulator we can see the rest of the input:
countSignChanges :: [Int] -> Int
countSignChanges = para phi 0
  where
    phi x (y:_, st) = if signum x /= signum y then st+1 else st
    phi x ([],  st) = st

demo = countSignChanges [1,2,-3,4,-5,-6]
The nice thing about para compared to zipping against the tail is that we can peek as far as we want into the rest of the input.
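Running the demo confirms it (hypothetical GHCi session): [1,2,-3,4,-5,-6] has three sign changes, at 2/-3, -3/4 and 4/-5.

~> demo
3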
If you need to calculate the value for the i-th element depending on the j-th element of the list, it's better to convert the list to an Array, either mutable or immutable.
That way you can do arbitrary computations based on the index of the current element, either in a fold or in recursive calls.
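A sketch of that idea in Haskell, using the immutable Data.Array (the function name is illustrative): indexing is O(1), so an element's value can depend on any other index.

import Data.Array

countSignChangesArr :: [Int] -> Int
countSignChangesArr xs = length [ i | i <- [lo .. hi - 1]
                                    , signum (arr ! i) /= signum (arr ! (i + 1)) ]
  where arr      = listArray (0, length xs - 1) xs  -- 0-based array of the input
        (lo, hi) = bounds arr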

List comprehension vs high-order functions in F#

I come from an SML background and feel quite comfortable with higher-order functions. But I don't really get the idea of list comprehensions. Is there any situation where a list comprehension is more suitable than higher-order functions on List, and vice versa?
I heard somewhere that list comprehensions are slower than higher-order functions; should I avoid them when writing performance-critical functions?
For example's sake, take a look at Projecting a list of lists efficiently in F#, where @cfern's answer contains two versions using a list comprehension and higher-order functions respectively:
let rec cartesian = function
  | [] -> [[]]
  | L::Ls -> [for C in cartesian Ls do yield! [for x in L do yield x::C]]
and:
let rec cartesian2 = function
  | [] -> [[]]
  | L::Ls -> cartesian2 Ls |> List.collect (fun C -> L |> List.map (fun x -> x::C))
Choosing between comprehensions and higher-order functions is mostly a matter of style. I think that comprehensions are sometimes more readable, but that's just a personal preference. Note that the cartesian function could be written more elegantly like this:
let rec cartesian = function
  | [] -> [[]]
  | L::Ls ->
      [ for C in cartesian Ls do for x in L do yield x::C ]
The interesting case is when writing recursive functions. If you use sequences (and sequence comprehensions), they remove some unnecessary allocation of temporary lists and if you use yield! in a tail-call position, you can also avoid stack overflow exceptions:
let rec nums n =
  if n = 100000 then []
  else n :: (nums (n+1))

// throws StackOverflowException
nums 0

let rec nums n = seq {
  if n < 100000 then
    yield n
    yield! nums (n+1) }

// works just fine
nums 0 |> List.ofSeq
This is quite an interesting pattern, because it cannot be written in the same way using lists. When using lists, you cannot return some element and then make a recursive call, because it corresponds to n::(nums ...), which is not tail-recursive.
Looking at the generated code in ILSpy, you can see that list comprehensions are compiled to state machines (like methods using yield return in C#), then passed to something like List.ofSeq. Higher-order functions, on the other hand, are hand-coded, and frequently use mutable state or other imperative constructs to be as efficient as possible. As is often the case, the general-purpose mechanism is more expensive.
So, to answer your question, if performance is critical there is usually a higher-order function specific to your problem that should be preferred.
Adding to Tomas Petricek's answer: you can make the list version tail recursive.
let nums3 n =
  let rec nums3internal acc n =
    if n = 100000 then
      acc
    else
      nums3internal (n::acc) (n+1)  // tail call optimization possible
  nums3internal [] n |> List.rev

nums3 0
This comes with the added benefit of a considerable speedup, at least when I measured with the Stopwatch tool (nums2 being the Seq-based algorithm):
Nums2 takes 81.225500ms
Nums3 takes 4.948700ms
For higher numbers this advantage shrinks, because List.rev is inefficient. E.g. for 10000000 I get:
Nums2 takes 11054.023900ms
Nums3 takes 8256.693100ms