I have a list of nodes and edges, represented as tuples where the first element is a node, and the second element is a list of all nodes it has an edge to. I am trying to reverse the list like so:
ghci> snuN [("a",["b"]),("b",["c"]),("c",["a","d"]),("e",["d"])]
[("a",["c"]),("b",["a"]),("c",["b"]),("d",["c","e"]),("e",[])]
So far, I've written this code:
snuH :: Eq t => [(t,[t])] -> [(t,[t])]
snuH [] = []
snuH ps@((x, xs):rest) =
    if (length xs <= 1) && not (x `isInSublist` ps)
        then [(y,[x]) | y <- xs] ++ snuH rest ++ [(x, [])]
        else [(y,[x]) | y <- xs] ++ snuH rest
isInSublist :: Eq t => t -> [(t,[t])] -> Bool
isInSublist _ [] = False
isInSublist x ((y, ys):rest) = (x `elem` ys) || isInSublist x rest
combine :: Eq t => [(t,[t])] -> [(t,[t])]
combine ps@((x, xs):(y, ys):rest) = if x == y then (x, xs++ys):rest else (x, xs) : combine ((y, ys):rest)
snuN :: Eq t => [(t, [t])] -> [(t, [t])]
snuN ls = combine $ snuH ls
The first function gives me this output:
ghci> snuH [("a",["b"]),("b",["c"]),("c",["a","d"]),("e",["d"])]
[("b",["a"]),("c",["b"]),("a",["c"]),("d",["c"]),("d",["e"]),("e",[]),("b",[])]
This is not quite the result I wanted, because it creates two tuples with the same first element (("d",["c"]) and ("d",["e"])), and it has the extra element ("b",[]) that it shouldn't. I wrote the combine helper function to fix that problem, and it gives me this output:
ghci> snuN [("a",["b"]),("b",["c"]),("c",["a","d"]),("e",["d"])]
[("b",["a"]),("c",["b"]),("a",["c"]),("d",["c","e"]),("e",[]),("b",[])]
That fixes the problem with the two tuples with the same first element, but I still have the extra ("b",[]), which I can't figure out how to fix. I assume there's something wrong with my snuH, but I can't see where the problem is.
Can you tell me what I'm doing wrong here? I don't understand why I get the extra ("b",[]). All help is appreciated!
I'd argue that the following list comprehension gives you what you need:
type Graph node = [(node, [node])]
converse :: Eq node => Graph node -> Graph node
converse g = [(v, [e | (e, es) <- g, v `elem` es]) | (v, _) <- g]
However, if you try it out, you'll get:
> converse [("a",["b"]),("b",["c"]),("c",["a","d"]),("e",["d"])]
[("a",["c"]),("b",["a"]),("c",["b"]),("e",[])]
Compared to the example you gave, the entry for "d" is missing from the output. That's because the input did not mention an explicit entry ("d", []).
To compensate for this, we could put a bit more logic in retrieving the complete list of nodes from the graph, also accounting for the "implied" ones:
nodes :: Eq node => Graph node -> [node]
nodes g = nub $ concat [v : es | (v, es) <- g]
Note: this requires importing nub from Data.List.
Then, we can write:
converse' :: Eq node => Graph node -> Graph node
converse' g = [(v, [e | (e, es) <- g, v `elem` es]) | v <- nodes g]
And, indeed, we get:
> converse' [("a",["b"]),("b",["c"]),("c",["a","d"]),("e",["d"])]
[("a",["c"]),("b",["a"]),("c",["b"]),("d",["c","e"]),("e",[])]
You have [(a, [a])], which maps nodes to the nodes they have an edge to. One approach to "reversing" this is to first convert it to a list of all the edges. We can actually generalize the type a bit here, to distinguish from and to nodes.
allEdges :: [(a, [b])] -> [(a, b)]
allEdges g = [(a, b) | (a, bs) <- g, b <- bs]
Now it's just a matter of gathering up the nodes with an edge to each particular node:
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as M
gather :: Ord b => [(a,b)] -> Map b [a]
gather edges = M.fromListWith (++) [(b, [a]) | (a, b) <- edges]
Now we can just use M.assocs to convert that map to a list!
The above code will leave out nodes that have no edges going to them. We can patch that up with a bit of extra work.
reverseGraph :: Ord a => [(a, [a])] -> [(a, [a])]
reverseGraph = M.assocs . M.fromListWith (++) . gunk
  where
    gunk g = [q | (a, bs) <- g, q <- (a, []) : [(b, [a]) | b <- bs]]
The idea here is that when we see (a, bs), we insert the empty edge set (a, []) along with the nonempty ones (b, [a]) for each b in bs.
Related
I try to remove the Integer duplicates of a List of (String, Int), where I am guaranteed that there is no String duplicate.
Is it possible to evaluate something like this in Haskell:
I tried:
[(a,b) | (a,b) <- bs, (c,k) <- bs, ((k == b) <= (a == c))]
but this does not yet work.
Edit: I am well aware that you can achieve this using more complex syntax, for example by recursively searching the list for each element's duplicates...
(NB: this is a completely new version of this answer. Previous was totally off-base.)
To follow your mathematical set comprehension more closely, we can tweak the definition in your answer as
uniquesOnly :: (Eq a, Eq b) => [(a, b)] -> [(a, b)]
uniquesOnly bs =
    [(a,b) | (a,b) <- bs,
             [(c,d) | (c,d) <- bs, d == b] ==
             [(a,d) | (c,d) <- bs, d == b]]
"for all (c,d) in bs such that d==b it follows c==a".
uniquesOnly [(1,1),(2,2),(3,1)] returns [(2,2)].
This is a possible solution; for example, I have made up this equivalent statement:
removeDuplicates :: [(String, Int)] -> [(String, Int)]
removeDuplicates bs =
    [(a,b) | (a,b) <- bs,
             length [(c,d) | (c,d) <- bs, d == b] == 1]
But this is not the same statement, only an equivalent one.
The existing answers don't take advantage of the guarantee that the strings are unique or the fact that Int is ordered. Here's one that does.
import Data.List (sortBy, groupBy)
import Data.Function (on)
uniquesOnly :: Ord b => [(a, b)] -> [(a, b)]
uniquesOnly ps
    = [ p
      | [p] <- groupBy ((==) `on` snd) .
               sortBy (compare `on` snd) $ ps ]
Say you have a list of numbers, [1,2,3,5,6,7,8,9,11,12,15,16,17]
and you want a function that takes that as an input and returns something like
[[1,3],[5,9],[11,12],[15,17]] or alternatively maybe
[(1,3), (5,9), (11,12), (15,17)]
How would this be done? All of the solutions I've found online are very long and quite convoluted, when this seems like such an easy problem for a functional language like Haskell.
So we have a list of numbers,
xs = [1,2,3,5,6,7,8,9,11,12,14,16,17] -- 14 sic!
We turn it into a list of segments,
ys = [[x,x+1] | x <- xs]
-- [[1,2], [2,3], [3,4], [5,6], ..., [11,12], [12,13], [14,15], [16,17], [17,18] ]
we join the touching segments,
zs = foldr g [] ys
-- [[1,4], [5,10], [11,13], [14,15], [16,18]]
  where
    g [a,b] []                      = [[a,b]]
    g [a,b] r@([c,d]:t) | b==c      = [a,d]:t
                        | otherwise = [a,b]:r
and we subtract 1 from each segment's ending value,
ws = [[a,b-1] | [a,b] <- zs]
-- [[1,3], [5,9], [11,12], [14,14], [16,17]]
All in all we get
ranges :: (Num t, Eq t) => [t] -> [[t]]
ranges = map (\[a,b] -> [a,b-1]) . foldr g [] . map (\x -> [x,x+1])
  where
    g [a,b] []                      = [[a,b]]
    g [a,b] r@([c,d]:t) | b==c      = [a,d]:t
                        | otherwise = [a,b]:r
Simple and clear.
edit: or, to be properly lazy,
  where
    g [a,b] r = [a,x]:y
      where
        (x,y) = case r of ([c,d]:t) | b==c -> (d,t)   -- delay forcing
                          _                -> (b,r)
update: as dfeuer notes, (a,a) type is better than [a,a]. Wherever [P,Q] appears in this code, replace it with (P,Q). This will improve the code, with zero cost to readability.
I would definitely prefer the alternative representation to the first one you give.
ranges :: (Num a, Eq a) => [a] -> [(a,a)]
ranges [] = []
ranges (a : as) = ranges1 a as
-- | A version of 'ranges' for non-empty lists, where
-- the first element is supplied separately.
ranges1 :: (Num a, Eq a) => a -> [a] -> [(a,a)]
ranges1 a as = (a, b) : bs
  where
    -- Calculate the right endpoint and the rest of the
    -- result lazily, when needed.
    (b, bs) = finish a as
-- | This takes the left end of the current interval
-- and the rest of the list and produces the right endpoint of
-- that interval and the rest of the result.
finish :: (Num a, Eq a) => a -> [a] -> (a, [(a, a)])
finish l [] = (l, [])
finish l (x : xs)
    | x == l + 1 = finish x xs
    | otherwise  = (l, ranges1 x xs)
To solve the Rosetta Code problem linked in the comment above, this isn't really quite an optimal representation. I'll try to explain how to match the representation more precisely later.
So one might do it along the lines of Will Ness's idea on stateful folding, or mine under the same answer. All explanations are to be found there. Besides, if you get curious and want to read more about it, have a look at the Haskell Continuation Passing Style page. I am currently trying to generalize this in such a way that we can have a variant of foldr1 in a stateful manner, a foldS :: Foldable t => (a -> a -> b) -> t a -> b. However, this is still not general stateful folding; it's just tailored to this question.
ranges :: (Ord a, Num a) => [a] -> [[a]]
ranges xs = foldr go return xs $ []
  where
    go :: (Ord a, Num a) => a -> ([a] -> [[a]]) -> ([a] -> [[a]])
    go c f = \ps -> let rrs@(r:rs) = f [c]
                    in case ps of
                         []  -> [c]:r:rs
                         [p] -> if p + 1 == c then rrs else [p]:(c:r):rs
*Main> ranges [1,2,3,5,6,7,8,9,11,12,15,16,17]
[[1,3],[5,9],[11,12],[15,17]]
I haven't had time to test any edge cases. All advice is welcome.
I am trying to learn Haskell and I want to solve one task. I have a list of Integers and I need to add them to another list if they are bigger than both of their neighbors. For example:
I have a starting list of [0,1,5,2,3,7,8,4] and I need to print out a list which is [5, 8]
This is the code I came up but it returns an empty list:
largest :: [Integer]->[Integer]
largest n
    | head n > head (tail n) = head n : largest (tail n)
    | otherwise              = largest (tail n)
I would solve this as outlined by Thomas M. DuBuisson. Since we want the ends of the list to "count", we'll add negative infinities to each end before creating triples. The monoid-extras package provides a suitable type for this.
import Data.Monoid.Inf
pad :: [a] -> [NegInf a]
pad xs = [negInfty] ++ map negFinite xs ++ [negInfty]
triples :: [a] -> [(a, a, a)]
triples (x:rest@(y:z:_)) = (x,y,z) : triples rest
triples _ = []
isBig :: Ord a => (a,a,a) -> Bool
isBig (x,y,z) = y > x && y > z
scnd :: (a, b, c) -> b
scnd (a, b, c) = b
finites :: [Inf p a] -> [a]
finites xs = [x | Finite x <- xs]
largest :: Ord a => [a] -> [a]
largest = id
        . finites
        . map scnd
        . filter isBig
        . triples
        . pad
It seems to be working appropriately; in ghci:
> largest [0,1,5,2,3,7,8,4]
[5,8]
> largest [10,1,10]
[10,10]
> largest [3]
[3]
> largest []
[]
You might also consider merging finites, map scnd, and filter isBig in a single list comprehension (then eliminating the definitions of finites, scnd, and isBig):
largest :: Ord a => [a] -> [a]
largest xs = [x | (a, b@(Finite x), c) <- triples (pad xs), a < b, c < b]
But I like the decomposed version better; the finites, scnd, and isBig functions may turn out to be useful elsewhere in your development, especially if you plan to build a few variants of this for different needs.
One thing you might try is lookahead. (Thomas M. DuBuisson suggested a different one that will also work if you handle the final one or two elements correctly.) Since it sounds like this is a problem you want to solve on your own as a learning exercise, I’ll write a skeleton that you can take as a starting-point if you want:
largest :: [Integer] -> [Integer]
largest []  = _
largest [x] = _ -- What should this return?
largest [x1,x2] | x1 > x2   = _
                | x1 < x2   = _
                | otherwise = _
largest [x1,x2,x3] | x2 > x1 && x2 > x3 = _
                   | x3 > x2            = _
                   | otherwise          = _
largest (x1:x2:x3:xs) | x2 > x1 && x2 > x3 = _
                      | otherwise          = _
We need the special case of [x1,x2,x3] in addition to (x1:x2:x3:xs) because, according to the clarification in your comment, largest [3,3,2] should return [], but largest [3,2] should return [3]. Therefore, the final three elements require special handling and cannot simply recurse on the final two.
If you also want the result to include the head of the list if it is greater than the second element, you’d make this a helper function and your largest would be something like largest (x1:x2:xs) = (if x1>x2 then [x1] else []) ++ largest' (x1:x2:xs). That is, you want some special handling for the first elements of the original list, which you don’t want to apply to all the sublists when you recurse.
As suggested in the comments, one approach would be to first group the list into tuples of length 3 using Prelude's zip3 and tail:
*Main> let xs = [0,1,5,2,3,7,8,4]
*Main> zip3 xs (tail xs) (tail (tail xs))
[(0,1,5),(1,5,2),(5,2,3),(2,3,7),(3,7,8),(7,8,4)]
These are of type [a] -> [b] -> [c] -> [(a, b, c)] and [a] -> [a], respectively.
Next you need to find a way to filter out the tuples where the middle element is bigger than the first and last element. One way would be to use Prelude's filter function:
*Main> let xs = [(0,1,5),(1,5,2),(5,2,3),(2,3,7),(3,7,8),(7,8,4)]
*Main> filter (\(a, b, c) -> b > a && b > c) xs
[(1,5,2),(7,8,4)]
filter is of type (a -> Bool) -> [a] -> [a]: it keeps the elements of the list for which the predicate returns True.
Now for the final part, you need to extract the middle element from the filtered tuples above. You can do this easily with Prelude's map function:
*Main> let xs = [(1,5,2),(7,8,4)]
*Main> map (\(_, x, _) -> x) xs
[5,8]
map is of type (a -> b) -> [a] -> [b]: it applies a function to every element of a list, turning a list of as into a list of bs.
The above code stitched together would look like this:
largest :: (Ord a) => [a] -> [a]
largest xs = map (\(_, x, _) -> x) $ filter (\(a, b, c) -> b > a && b > c) $ zip3 xs (tail xs) (tail (tail xs))
Note here I used typeclass Ord, since the above code needs to compare with > and <. It's fine to keep it as Integer here though.
Is there a better and more concise way to write the following code in Haskell? I've tried using if..else but that is getting less readable than the following. I want to avoid traversing the xs list (which is huge!) 8 times to just separate the elements into 8 groups. groupBy from Data.List takes only one test condition function: (a -> a -> Bool) -> [a] -> [[a]].
x1 = filter (check condition1) xs
x2 = filter (check condition2) xs
x3 = filter (check condition3) xs
x4 = filter (check condition4) xs
x5 = filter (check condition5) xs
x6 = filter (check condition6) xs
x7 = filter (check condition7) xs
x8 = filter (check condition8) xs
results = [x1,x2,x3,x4,x5,x6,x7,x8]
This only traverses the list once:
import Control.Monad (guard)
import Data.List (transpose)
import Data.Maybe (catMaybes)
filterN :: [a -> Bool] -> [a] -> [[a]]
filterN ps =
    map catMaybes . transpose .
    map (\x -> map (\p -> x <$ guard (p x)) ps)
For each element of the list, the map produces a list of Maybes, each Maybe corresponding to one of the predicates; it is Nothing if the element does not satisfy the predicate, or Just x if it does satisfy the predicate. Then, the transpose shuffles all these lists so that the list is organised by predicate, rather than by element, and the map catMaybes discards the entries for elements that did not satisfy a predicate.
Some explanation: x <$ m is fmap (const x) m, and for Maybe, guard b is if b then Just () else Nothing, so x <$ guard b is if b then Just x else Nothing.
The map could also be written as map (\x -> [x <$ guard (p x) | p <- ps]).
If you insist on traversing the list only once, you can write
filterMulti :: [a -> Bool] -> [a] -> [[a]]
filterMulti fs xs = go (reverse xs) (repeat []) where
    go [] acc     = acc
    go (y:ys) acc = go ys $ zipWith (\f a -> if f y then y:a else a) fs acc
map (\ cond -> filter (check cond) xs) [condition1, condition2, ..., condition8]
I think you could use groupWith from GHC.Exts.
If you write the a -> b function to assign every element in xs its 'class', I believe groupWith would split xs just the way you want it to, traversing the list just once.
groupBy doesn't really do what you're wanting; even if it did accept multiple predicate functions, it doesn't do any filtering on the list. It just groups together contiguous runs of list elements that satisfy some condition. Even if your filter conditions, when combined, cover all of the elements in the supplied list, this is still a different operation. For instance, groupBy won't modify the order of the list elements, nor will it have the possibility of including a given element more than once in the result, while your operation can do both of those things.
This function will do what you're looking for:
import Control.Applicative
filterMulti :: [a -> Bool] -> [a] -> [[a]]
filterMulti ps as = filter <$> ps <*> pure as
As an example:
> filterMulti [(<2), (>=5)] [2, 5, 1, -2, 5, 1, 7, 3, -20, 76, 8]
[[1, -2, 1, -20], [5, 5, 7, 76, 8]]
As an addendum to nietaki's answer (this should be a comment but it's too long, so if his answer is correct, accept his!), the function a -> b could be written as a series of nested if ... then ... else, but that is not very idiomatic Haskell and not very extensible. This might be slightly better:
import Data.List (elemIndex)
import GHC.Exts (groupWith)
f xs = groupWith test xs
  where test x = elemIndex True . map ($ x) $ [condition1, ..., condition8]
It categorises each element by the first condition_ it satisfies (and puts those that don't satisfy any into their own category).
(The documentation for elemIndex is here.)
The first function returns a list of "updated" lists, and the second function goes through the whole input list and updates the result lists for each value:
myfilter :: a -> [a -> Bool] -> [[a]] -> [[a]]
myfilter _ [] [] = []
myfilter x (f:fs) (l:ls) | f x       = (x:l) : myfilter x fs ls
                         | otherwise = l : myfilter x fs ls

filterall :: [a] -> [a -> Bool] -> [[a]] -> [[a]]
filterall [] _ ls = ls
filterall (x:xs) fl ls = filterall xs fl (myfilter x fl ls)
This should be called with filterall xs [condition1,condition2...] [[],[]...]
Okay, this is probably going to be in the Prelude, but: is there a standard library function for finding the unique elements in a list? My (re)implementation, for clarification, is:
has :: (Eq a) => [a] -> a -> Bool
has [] _ = False
has (x:xs) a
    | x == a    = True
    | otherwise = has xs a
unique :: (Eq a) => [a] -> [a]
unique [] = []
unique (x:xs)
    | has xs x  = unique xs
    | otherwise = x : unique xs
I searched for (Eq a) => [a] -> [a] on Hoogle.
First result was nub (remove duplicate elements from a list).
Hoogle is awesome.
The nub function from Data.List (no, it's actually not in the Prelude) definitely does something like what you want, but it is not quite the same as your unique function. They both preserve the original order of the elements, but unique retains the last
occurrence of each element, while nub retains the first occurrence.
You can do this to make nub act exactly like unique, if that's important (though I have a feeling it's not):
unique = reverse . nub . reverse
Also, nub is only good for small lists.
Its complexity is quadratic, so it starts to get slow if your list can contain hundreds of elements.
If you limit your types to types having an Ord instance, you can make it scale better.
This variation on nub still preserves the order of the list elements, but its complexity is O(n * log n):
import qualified Data.Set as Set
nubOrd :: Ord a => [a] -> [a]
nubOrd xs = go Set.empty xs where
    go s (x:xs)
        | x `Set.member` s = go s xs
        | otherwise        = x : go (Set.insert x s) xs
    go _ _ = []
In fact, it has been proposed to add nubOrd to Data.Set.
import Data.Set (toList, fromList)
uniquify lst = toList $ fromList lst
I think that unique should return a list of elements that only appear once in the original list; that is, any elements of the original list that appear more than once should not be included in the result.
May I suggest an alternative definition, unique_alt:
unique_alt :: [Int] -> [Int]
unique_alt [] = []
unique_alt (x:xs)
    | elem x (unique_alt xs) = [ y | y <- unique_alt xs, y /= x ]
    | otherwise              = x : unique_alt xs
Here are some examples that highlight the differences between unique_alt and unique:
unique [1,2,1] = [2,1]
unique_alt [1,2,1] = [2]
unique [1,2,1,2] = [1,2]
unique_alt [1,2,1,2] = []
unique [4,2,1,3,2,3] = [4,1,2,3]
unique_alt [4,2,1,3,2,3] = [4,1]
I think this would do it.
unique [] = []
unique (x:xs) = x:unique (filter ((/=) x) xs)
Another way to remove duplicates:
unique :: [Int] -> [Int]
unique xs = [x | (x,y) <- zip xs [0..], x `notElem` (take y xs)]
Algorithm in Haskell to create a unique list:
data Foo = Foo { id_   :: Int
               , name_ :: String
               } deriving (Show)

alldata = [ Foo 1 "Name"
          , Foo 2 "Name"
          , Foo 3 "Karl"
          , Foo 4 "Karl"
          , Foo 5 "Karl"
          , Foo 7 "Tim"
          , Foo 8 "Tim"
          , Foo 9 "Gaby"
          , Foo 9 "Name"
          ]
isolate :: [Foo] -> [Foo]
isolate [] = []
isolate (x:xs) = (fst f) : isolate (snd f)
  where
    f = foldl helper (x,[]) xs
    helper (a,b) y = if name_ x == name_ y
                       then if id_ x >= id_ y
                              then (x,b)
                              else (y,b)
                       else (a,y:b)
main :: IO ()
main = mapM_ (putStrLn . show) (isolate alldata)
Output:
Foo {id_ = 9, name_ = "Name"}
Foo {id_ = 9, name_ = "Gaby"}
Foo {id_ = 5, name_ = "Karl"}
Foo {id_ = 8, name_ = "Tim"}
A library-based solution:
We can use that style of Haskell programming where all looping and recursion activities are pushed out of user code and into suitable library functions. Said library functions are often optimized in ways that are way beyond the skills of a Haskell beginner.
A way to decompose the problem into two passes goes like this:
produce a second list that is parallel to the input list, but with duplicate elements suitably marked
eliminate elements marked as duplicates from that second list
For the first step, duplicate elements don't need a value at all, so we can use [Maybe a] as the type of the second list. So we need a function of type:
pass1 :: Eq a => [a] -> [Maybe a]
Function pass1 is an example of stateful list traversal where the state is the list (or set) of distinct elements seen so far. For this sort of problem, the library provides the mapAccumL :: (s -> a -> (s, b)) -> s -> [a] -> (s, [b]) function.
Here the mapAccumL function requires, besides the initial state and the input list, a step function argument, of type s -> a -> (s, Maybe a).
If the current element x is not a duplicate, the output of the step function is Just x and x gets added to the current state. If x is a duplicate, the output of the step function is Nothing, and the state is passed unchanged.
Testing under the ghci interpreter:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> stepFn s x = if (elem x s) then (s, Nothing) else (x:s, Just x)
λ>
λ> import Data.List(mapAccumL)
λ>
λ> pass1 xs = mapAccumL stepFn [] xs
λ>
λ> xs2 = snd $ pass1 "abacrba"
λ> xs2
[Just 'a', Just 'b', Nothing, Just 'c', Just 'r', Nothing, Nothing]
λ>
Writing a pass2 function is even easier. To filter out Nothing non-values, we could use:
import Data.Maybe( fromJust, isJust)
pass2 = (map fromJust) . (filter isJust)
But why bother at all? This is precisely what the catMaybes library function does.
λ>
λ> import Data.Maybe(catMaybes)
λ>
λ> catMaybes xs2
"abcr"
λ>
Putting it all together:
Overall, the source code can be written as:
import Data.Maybe(catMaybes)
import Data.List(mapAccumL)
uniques :: (Eq a) => [a] -> [a]
uniques = let stepFn s x = if (elem x s) then (s, Nothing) else (x:s, Just x)
          in  catMaybes . snd . mapAccumL stepFn []
This code is reasonably compatible with infinite lists, something occasionally referred to as being “laziness-friendly”:
λ>
λ> take 5 $ uniques $ "abacrba" ++ (cycle "abcrf")
"abcrf"
λ>
Efficiency note:
If we anticipate that the input list may contain many distinct elements, and an Ord a instance is available, the state can be implemented as a Set rather than a plain list, without having to alter the overall structure of the solution.
Here's a solution that uses only Prelude functions:
uniqueList theList =
    if not (null theList)
        then head theList : filter (/= head theList) (uniqueList (tail theList))
        else []
I'm assuming this is equivalent to running two or three nested "for" loops (running through each element, then running through each element again to check for other elements with the same value, then removing those other elements), so I'd estimate this is O(n^2) or O(n^3).
Might even be better than reversing a list, nubbing it, then reversing it again, depending on your circumstances.