Given a list of tuples like this:
dic = [(1,"aa"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg"),(1,"bb")]
How can I group the items of dic to produce a list grp where
grp = [(1,["aa","bb","cc"]), (2, ["aa"]), (3, ["ff","gg"])]
I'm actually a newcomer to Haskell... and I seem to be falling in love with it.
Using group or groupBy in Data.List will only group similar adjacent items in a list.
I wrote an inefficient function for this, but it runs out of memory because I need to process a very large list of coded strings. I hope you can help me find a more efficient way.
Whenever possible, reuse library code.
import Data.Map
sortAndGroup assocs = fromListWith (++) [(k, [v]) | (k, v) <- assocs]
Try it out in ghci:
*Main> sortAndGroup [(1,"aa"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg"),(1,"bb")]
fromList [(1,["bb","cc","aa"]),(2,["aa"]),(3,["gg","ff"])]
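If you need the plain association list grp from the question rather than a Map, you can convert back with toList, sorting each value list since fromListWith does not preserve the original order of the values. A minimal sketch (grpFromMap is a made-up helper name here, and it assumes a qualified import of Data.Map):
import qualified Data.Map as M
import Data.List (sort)

-- convert the grouped Map back to the list form asked for in the question
grpFromMap :: Ord v => M.Map k [v] -> [(k, [v])]
grpFromMap = M.toList . fmap sort

*Main> grpFromMap (sortAndGroup [(1,"aa"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg"),(1,"bb")])
[(1,["aa","bb","cc"]),(2,["aa"]),(3,["ff","gg"])]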
EDIT In the comments, some folks are worried about whether (++) or flip (++) is the right choice. The documentation doesn't say which way things get associated; you can find out by experimenting, or you can sidestep the whole issue using difference lists:
sortAndGroup assocs = ($[]) <$> fromListWith (.) [(k, (v:)) | (k, v) <- assocs]
-- or, point-free (this version assumes import qualified Data.Map as M):
sortAndGroup = fmap ($[]) . M.fromListWith (.) . map (fmap (:))
These alternatives are about the same length as the original, but they're a bit less readable to me.
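For the record, the experiment mentioned above is easy to run in ghci; on a small example it should show which way fromListWith combines the values:
*Main> fromListWith (++) [(1,"x"),(1,"y"),(1,"z")]
fromList [(1,"zyx")]
The value inserted last ends up at the front, which is why the value lists above come out in reverse insertion order.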
Here's my solution:
import Data.Function (on)
import Data.List (sortBy, groupBy)
import Data.Ord (comparing)
myGroup :: (Eq a, Ord a) => [(a, b)] -> [(a, [b])]
myGroup = map (\l -> (fst . head $ l, map snd l)) . groupBy ((==) `on` fst)
        . sortBy (comparing fst)
This works by first sorting the list with sortBy:
[(1,"aa"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg"),(1,"bb")]
=> [(1,"aa"),(1,"bb"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg")]
then grouping the list elements by the associated key with groupBy:
[(1,"aa"),(1,"bb"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg")]
=> [[(1,"aa"),(1,"bb"),(1,"cc")],[(2,"aa")],[(3,"ff"),(3,"gg")]]
and then transforming the grouped items to tuples with map:
[[(1,"aa"),(1,"bb"),(1,"cc")],[(2,"aa")],[(3,"ff"),(3,"gg")]]
=> [(1,["aa","bb","cc"]), (2, ["aa"]), (3, ["ff","gg"])]
Testing:
> myGroup dic
[(1,["aa","bb","cc"]),(2,["aa"]),(3,["ff","gg"])]
You can also use the TransformListComp extension, for example:
Prelude> :set -XTransformListComp
Prelude> import GHC.Exts (groupWith, the)
Prelude GHC.Exts> let dic = [ (1, "aa"), (1, "bb"), (1, "cc") , (2, "aa"), (3, "ff"), (3, "gg")]
Prelude GHC.Exts> [(the key, value) | (key, value) <- dic, then group by key using groupWith]
[(1,["aa","bb","cc"]),(2,["aa"]),(3,["ff","gg"])]
If the list is not sorted on the first element, I don't think you can do better than O(n log n).
One simple way would be to sort first and then use any of the approaches from the second part below.
You can use a Map k [a] from Data.Map, using the first element of each tuple as the key and accumulating the second elements as the values.
You can write your own elaborate function, which even after all your attempts will still take O(n log n).
If the list is sorted on the first element, as is the case in your example, then the task is trivial with something like groupBy as given in the answer by @Mikhail, or with foldr, and there are numerous other ways.
An example of using foldr is here:
grp :: Eq a => [(a,b)] -> [(a,[b])]
grp = foldr f []
  where
    f (z,s) [] = [(z,[s])]
    f (z,s) a@((x,y):xs) | x == z    = (x, s:y) : xs
                         | otherwise = (z, [s]) : a
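For example, on a list that is already sorted on the first element (the precondition discussed above), a quick check in ghci should give:
*Main> grp [(1,"aa"),(1,"bb"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg")]
[(1,["aa","bb","cc"]),(2,["aa"]),(3,["ff","gg"])]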
{-# LANGUAGE TransformListComp #-}
import GHC.Exts
import Data.List
import Data.Function (on)
process :: [(Integer, String)] -> [(Integer, [String])]
process list = [ (the a, b)
               | let info = [ (x, y) | (x, y) <- list, then sortWith by y ]
               , (a, b) <- info
               , then group by a using groupWith
               ]
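With the pragma and imports above in place, this should reproduce the grouping from the question (the values also end up sorted within each group because of the inner sortWith):
*Main> process [(1,"aa"),(1,"cc"),(2,"aa"),(3,"ff"),(3,"gg"),(1,"bb")]
[(1,["aa","bb","cc"]),(2,["aa"]),(3,["ff","gg"])]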
Related
I'm quite new to Haskell and I'm not sure how to solve or approach this problem: I want a function with the type signature [((Double, Double), Bool)] -> [[(Double, Double)]]. The function should only add a (Double, Double) to the list of lists if its Bool is True. If the Bool is False, I want the (Double, Double) associated with the next True Bool to be added to a new list. Consecutive (Double, Double)s paired with a True Bool should be added to the same list. For example, an input of [((1,1),True),((2,2), False),((3,3), False),((4,4),True),((5,5),True)] should return [[(1,1)],[(4,4),(5,5)]]. I did a little research and it seems like the groupBy function might be useful in solving the problem, but I'm not quite sure how to use it properly. As I'm new to Haskell I'd prefer a simpler solution or explanation, but any suggestions would help.
So far my code just creates a new list for every (Double, Double) associated with a True Bool. I'm not quite sure how to add to an existing list within a list.
consecTrue :: [((Double, Double),Bool)] -> [[(Double,Double)]]
consecTrue xs = case xs of
  []   -> []
  x:xs
    | snd x == True  -> [fst x] : consecTrue xs
    | snd x == False -> consecTrue xs
Yes, groupBy can be used. You will need the comparison function to feed groupBy; let's call it grf. The easiest path is probably to build the solution gradually within the interactive interpreter, ghci.
$ ghci
Prelude>
Prelude> import Data.List
Prelude Data.List> grf ((x1,y1),p1) ((x2,y2),p2) = (p1==p2)
Prelude Data.List> let lsa = [((1,1),True),((2,2), False),((3,3), False), ((4,4),True),((5,5),True)]
Prelude Data.List>
Prelude Data.List> lsb = groupBy grf lsa
Prelude Data.List> lsb
[[((1,1),True)],[((2,2),False),((3,3),False)],[((4,4),True),((5,5),True)]]
Prelude Data.List>
That's just a start. Then you need to get rid of the false ones, and then to get rid of the boolean values themselves.
Prelude Data.List>
Prelude Data.List> lsc = filter (snd . head) lsb
Prelude Data.List> lsc
[[((1,1),True)],[((4,4),True),((5,5),True)]]
Prelude Data.List>
Prelude Data.List> lsd = map (map fst) lsc
Prelude Data.List> lsd
[[(1,1)],[(4,4),(5,5)]]
Prelude Data.List>
Prelude Data.List>
Putting it all together:
import Data.List

consecTrue :: [((Double, Double),Bool)] -> [[(Double,Double)]]
consecTrue xs = let grf ((x1,y1),p1) ((x2,y2),p2) = (p1 == p2)
                in  map (map fst) (filter (snd . head) (groupBy grf xs))

main = do
  let lsa = [((1,1),True),((2,2), False),((3,3), False),
             ((4,4),True),((5,5),True)]
  let res = consecTrue lsa
  putStrLn $ "input = " ++ show lsa
  putStrLn $ "output = " ++ show res
That seems to do what you wanted:
input = [((1.0,1.0),True),((2.0,2.0),False),((3.0,3.0),False),((4.0,4.0),True),((5.0,5.0),True)]
output = [[(1.0,1.0)],[(4.0,4.0),(5.0,5.0)]]
As a newcomer learning Haskell, perhaps not even knowing what map is yet, it makes total sense to do this first of all by simple, direct recursion, creating a definition which stands on its own and doesn't use any library functions.
And definition by simple, direct recursion is just an enumeration of the possible cases we can be presented with:
consecTrue :: [(t, Bool)] -> [[t]]
-- empty list:
consecTrue [] = []
-- list with exactly one entry in it:
consecTrue [(a,True)]  = [[a]]
consecTrue [(a,False)] = []
-- list with two or more entries in it:
consecTrue ((a1,True)  : (a2,True)  : more2) = (a1:r) : q
    where (r:q) = consecTrue ((a2,True) : more2)
consecTrue ((a1,True)  : (a2,False) : more2) = [a1] : consecTrue more2
consecTrue ((a1,False) : more1)              = consecTrue more1
The (Double, Double) is immaterial, extraneous detail. Just t is enough, meaning anything can go there.
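A quick check in ghci against the example from the question should give:
*Main> consecTrue [((1,1),True),((2,2),False),((3,3),False),((4,4),True),((5,5),True)]
[[(1,1)],[(4,4),(5,5)]]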
Say for example that I have a list like this
list = ["AC", "BA"]
I would like to turn every unique combination of elements of this list into a tuple, so the result is like this:
[("AC", "AC"),("AC","BA"),("BA", "BA")]
where ("BA","AC") is excluded.
My first approach was to use a list comprehension like this:
ya = [(x,y) | x <- list, y <- list]
But I couldn't manage to get it to work. Is there any way to achieve my result using list comprehensions?
My preferred solution uses a list comprehension
f :: [t] -> [(t, t)]
f list = [ (a,b) | theTail@(a:_) <- tails list , b <- theTail ]
I find this to be quite readable: first you choose (non-deterministically) a suffix theTail, starting with a, and then you choose (non-deterministically) an element b of the suffix. Finally, the pair (a,b) is produced, which clearly ranges over the wanted pairs.
It should also be optimally efficient: every time you demand an element from it, it is produced in constant time.
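On the list from the question it should produce exactly the wanted pairs:
*Main> f ["AC", "BA"]
[("AC","AC"),("AC","BA"),("BA","BA")]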
ThreeFx's answer will work, but it adds the constraint that your elements must be orderable. Instead, you can get away with functions in Prelude and Data.List to implement this more efficiently and more generically:
import Data.List (tails)
permutations2 :: [a] -> [(a, a)]
permutations2 list
    = concat
    $ zipWith (zip . repeat) list
    $ tails list
It doesn't use list comprehensions, but it works without having to perform potentially expensive comparisons and without any constraints on what kind of values you can put through it.
To see how this works, consider that if you had the list [1, 2, 3], you'd have the groups
[(1, 1), (1, 2), (1, 3),
(2, 2), (2, 3),
(3, 3)]
This is equivalent to
[(1, [1, 2, 3]),
(2, [2, 3]),
(3, [3])]
since it doesn't contain any extra or any less information. The transformation from this form to our desired output is to map the function f (x, ys) = map (\y -> (x, y)) ys over each tuple, then concat the results together. Now we just need to figure out how to get the second element of those tuples. Quite clearly, we can see that all it's doing is dropping successive elements off the front of the list. Luckily, this is already implemented for us by the tails function in Data.List. The first elements of these tuples just make up the original list, so we know we can use a zip. Initially, you could implement this with
> concatMap (\(x, ys) -> map (\y -> (x, y)) ys) $ zip list $ tails list
But I personally prefer zips, so I'd turn the inner function into one that doesn't use lambdas more than necessary:
> concatMap (\(x, ys) -> zip (repeat x) ys) $ zip list $ tails list
And since I prefer zipWith f over map (uncurry f) . zip, I'd turn this into
> concat $ zipWith (\x ys -> zip (repeat x) ys) list $ tails list
Now, we can reduce this further:
> concat $ zipWith (\x -> zip (repeat x)) list $ tails list
> concat $ zipWith (zip . repeat) list $ tails list
thanks to eta reduction and function composition. We could make this entirely point-free:
> permutations2 = concat . ap (zipWith (zip . repeat)) tails
But I find this pretty hard to read and understand, so I think I'll stick with the previous version.
Just use a list comprehension:
f :: (Ord a) => [a] -> [(a, a)]
f list = [ (a, b) | a <- list, b <- list, a <= b ]
Since Haskell's String is in the Ord typeclass, which means it can be ordered, you first tell Haskell to get all possible combinations and then exclude every combination where a is greater than b, which removes all "duplicate" combinations.
Example output:
> f [1,2,3,4]
[(1,1),(1,2),(1,3),(1,4),(2,2),(2,3),(2,4),(3,3),(3,4),(4,4)]
I'm trying to write a function that given a list of numbers, returns a list where every 2nd number is doubled in value, starting from the last element. So if the list elements are 1..n, n-th is going to be left as-is, (n-1)-th is going to be doubled in value, (n-2)-th is going to be left as-is, etc.
So here's how I solved it:
myFunc :: [Integer] -> [Integer]
myFunc xs = reverse (myFuncHelper (reverse xs))

myFuncHelper :: [Integer] -> [Integer]
myFuncHelper [] = []
myFuncHelper (x:[]) = [x]
myFuncHelper (x:y:zs) = [x,y*2] ++ myFuncHelper zs
And it works:
myFunc [1,1,1,1] = [2,1,2,1]
myFunc [1,1,1] = [1,2,1]
However, I can't help but think there has to be a simpler solution than reversing the list, processing it and then reversing it again. Could I simply iterate the list backwards? If yes, how?
The under reversed f xs idiom from the lens library will apply f to xs in reverse order:
under reversed (take 5) [1..100] => [96,97,98,99,100]
When you need to process the list from the end, usually foldr works pretty well. Here is a solution for you without reversing the whole list twice:
doubleOdd :: Num a => [a] -> [a]
doubleOdd = fst . foldr multiplyCond ([], False)
  where multiplyCond x (rest, flag) = ((if flag then (x * 2) else x) : rest, not flag)
The multiplyCond function takes the current element together with a tuple holding the accumulator list and a flag. The flag constantly toggles on and off to track whether we should multiply the element or not. The accumulator list simply gathers the resulting numbers. This solution may not be as concise, but it avoids extra work and doesn't use anything but Prelude functions.
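A quick check in ghci against the examples from the question:
*Main> doubleOdd [1,1,1,1]
[2,1,2,1]
*Main> doubleOdd [1,2,3]
[1,4,3]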
myFunc = reverse
       . map (\(b,x) -> if b then x*2 else x)
       . zip (cycle [False,True])
       . reverse
But this isn't much better. Your implementation is sufficiently elegant.
The simplest way to iterate the list backwards is to reverse the list. I don't think you can really do much better than that; I suspect that if you have to traverse the whole list to find the end, and remember how to get back up, you might as well just reverse it. If this is a big deal, maybe you should be using some other data structure instead of lists—Vector or Seq might be good choices.
Another way to write your helper function is to use Traversable:
import Control.Monad.State
import Data.Traversable (Traversable, traverse)
toggle :: (Bool -> a -> b) -> a -> State Bool b
toggle f a =
  do active <- get
     put (not active)
     return (f active a)

doubleEvens :: (Num a, Traversable t) => t a -> t a
doubleEvens xs = evalState (traverse (toggle step) xs) False
  where step True  x = 2*x
        step False x = x
yourFunc :: Num a => [a] -> [a]
yourFunc = reverse . doubleEvens . reverse
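Again, a quick check in ghci against the examples from the question:
*Main> yourFunc [1,1,1,1]
[2,1,2,1]
*Main> yourFunc [1,2,3]
[1,4,3]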
Or if we go a bit crazy with Foldable and Traversable, we can try this:
Use Foldable's foldl to extract a reverse-order list from any of its instances. For some types this will be more efficient than reversing a list.
Then we can use traverse and State to map each element of the original structure to its counterpart in the reversed order.
Here's how to do it:
import Control.Monad.State
import Data.Foldable (Foldable)
import qualified Data.Foldable as F
import Data.Traversable (Traversable, traverse)
import Data.Map (Map)
import qualified Data.Map as Map
toReversedList :: Foldable t => t a -> [a]
toReversedList = F.foldl (flip (:)) []
reverse' :: Traversable t => t a -> t a
reverse' ta = evalState (traverse step ta) (toReversedList ta)
  where step _ = do (h:t) <- get
                    put t
                    return h
yourFunc' :: (Traversable t, Num a) => t a -> t a
yourFunc' = reverse' . doubleEvens . reverse'
-- >>> yourFunc' $ Map.fromList [(1, 1), (2, 1), (3, 1), (4, 1)]
-- fromList [(1,2),(2,1),(3,2),(4,1)]
-- >>> yourFunc' $ Map.fromList [(1, 1), (2, 1), (3, 1)]
-- fromList [(1,1),(2,2),(3,1)]
There's probably a better way to do this, though...
func xs = zipWith (*) xs $ reverse . (take $ length xs) $ cycle [1,2]
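It should give the same results, for example:
*Main> func [1,1,1,1]
[2,1,2,1]
*Main> func [1,2,3]
[1,4,3]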
So I've been practicing Haskell, and I was doing just fine, until I got stuck on this exercise. Basically I want a function that receives a list like this:
xs = [("a","b"),("a","c"),("b","e")]
and returns something like this:
xs = [("a",["b","c"]), ("b",["e"])]
I came up with this code:
list xs = [(a,[b])|(a,b) <- xs]
But the problem is that this doesn't do what I want. I guess it's close, but not right.
Here's what this returns:
xs = [("a",["b"]),("a",["c"]),("b",["e"])]
If you don't care about the order of the tuples in the final list, the most efficient way (that doesn't reinvent the wheel) would be to make use of the Map type from Data.Map in the containers package:
import Data.Map as Map
clump :: Ord a => [(a,b)] -> [(a, [b])]
clump xs = Map.toList $ Map.fromListWith (flip (++)) [(a, [b]) | (a,b) <- xs]
main = do print $ clump [("a","b"),("a","c"),("b","e")]
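Running this should print:
[("a",["b","c"]),("b",["e"])]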
If you do care about the result order, you'll probably have to do something ugly and O(n^2) like this:
import Data.List (nub)
clump' :: Eq a => [(a,b)] -> [(a, [b])]
clump' xs = [(a, [b | (a', b) <- xs, a' == a]) | a <- nub $ map fst xs]
main = do print $ clump' [("a","b"),("a","c"),("b","e")]
You could use right fold with Data.Map.insertWith:
import Data.Map as M hiding (foldr)
main :: IO ()
main = print . M.toList
     $ foldr (\(k, v) m -> M.insertWith (++) k [v] m)
             M.empty
             [("a","b"),("a","c"),("b","e")]
Output:
./main
[("a",["b","c"]),("b",["e"])]
The basic principle is that you want to group "similar" elements together.
Whenever you want to group elements together, you have the group functions in Data.List. In this case, you want to specify yourself what counts as similar, so you will need to use the groupBy version. Most functions in Data.List have a By-version that lets you specify more in detail what you want.
Step 1
In your case, you want to define "similarity" as "having the same first element". In Haskell, "having the same first element of a pair" means
(==) `on` fst
In other words, equality on the first element of a pair.
So to do the grouping, we supply that requirement to groupBy, like so:
groupBy ((==) `on` fst) xs
This will get us back, in your example, the two groups:
[[("a","b"),("a","c")]
,[("b","e")]]
Step 2
Now what remains is turning those lists into pairs. The basic principle behind that is, if we let
ys = [("a","b"),("a","c")]
as an example, to take the first element of the first pair, and then just smash the second elements of all the pairs together into a list. Taking the first element of the first pair is easy!
fst (head ys) == "a"
Taking all the second elements is fairly easy as well!
map snd ys == ["b", "c"]
Both of these operations together give us what we want.
(fst (head ys), map snd ys) == ("a", ["b", "c"])
Finished product
So if you want to, you can write your clumping function as
clump xs = map (\ys -> (fst (head ys), map snd ys))
               (groupBy ((==) `on` fst) xs)
i.e. the per-group construction from above, mapped over every group that groupBy produces.
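With import Data.List (groupBy) and import Data.Function (on) in scope, a quick check in ghci should give:
*Main> clump [("a","b"),("a","c"),("b","e")]
[("a",["b","c"]),("b",["e"])]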
Okay, this is probably going to be in the Prelude, but: is there a standard library function for finding the unique elements in a list? My (re)implementation, for clarification, is:
has :: (Eq a) => [a] -> a -> Bool
has [] _ = False
has (x:xs) a
  | x == a    = True
  | otherwise = has xs a

unique :: (Eq a) => [a] -> [a]
unique [] = []
unique (x:xs)
  | has xs x  = unique xs
  | otherwise = x : unique xs
I searched for (Eq a) => [a] -> [a] on Hoogle.
First result was nub (remove duplicate elements from a list).
Hoogle is awesome.
The nub function from Data.List (no, it's actually not in the Prelude) definitely does something like what you want, but it is not quite the same as your unique function. They both preserve the original order of the elements, but unique retains the last
occurrence of each element, while nub retains the first occurrence.
You can do this to make nub act exactly like unique, if that's important (though I have a feeling it's not):
unique = reverse . nub . reverse
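For example, a quick ghci comparison of your unique against nub from Data.List:
*Main> nub [1,2,1,3,2]
[1,2,3]
*Main> unique [1,2,1,3,2]
[1,3,2]
*Main> (reverse . nub . reverse) [1,2,1,3,2]
[1,3,2]
nub keeps the first 1 and the first 2, while unique (and the reversed nub) keep the last ones.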
Also, nub is only good for small lists.
Its complexity is quadratic, so it starts to get slow if your list can contain hundreds of elements.
If you limit your types to types having an Ord instance, you can make it scale better.
This variation on nub still preserves the order of the list elements, but its complexity is O(n * log n):
import qualified Data.Set as Set
nubOrd :: Ord a => [a] -> [a]
nubOrd xs = go Set.empty xs
  where
    go s (x:xs)
      | x `Set.member` s = go s xs
      | otherwise        = x : go (Set.insert x s) xs
    go _ _ = []
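For example, it keeps the first occurrence of each element, just like nub:
*Main> nubOrd "abacrba"
"abcr"
*Main> nubOrd [4,2,1,3,2,3]
[4,2,1,3]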
In fact, it has been proposed to add nubOrd to Data.Set.
import Data.Set (toList, fromList)
uniquify lst = toList $ fromList lst
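Note that this one does not preserve the original order of the elements, since toList returns them in ascending order:
*Main> uniquify [4,2,1,3,2,3]
[1,2,3,4]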
I think that unique should return a list of elements that only appear once in the original list; that is, any elements of the original list that appear more than once should not be included in the result.
May I suggest an alternative definition, unique_alt:
unique_alt :: [Int] -> [Int]
unique_alt [] = []
unique_alt (x:xs)
  | elem x (unique_alt xs) = [ y | y <- unique_alt xs, y /= x ]
  | otherwise              = x : unique_alt xs
Here are some examples that highlight the differences between unique_alt and unique:
unique [1,2,1] = [2,1]
unique_alt [1,2,1] = [2]
unique [1,2,1,2] = [1,2]
unique_alt [1,2,1,2] = []
unique [4,2,1,3,2,3] = [4,1,2,3]
unique_alt [4,2,1,3,2,3] = [4,1]
I think this would do it.
unique [] = []
unique (x:xs) = x:unique (filter ((/=) x) xs)
Another way to remove duplicates:
unique :: [Int] -> [Int]
unique xs = [x | (x,y) <- zip xs [0..], x `notElem` (take y xs)]
Algorithm in Haskell to create a unique list:
data Foo = Foo { id_   :: Int
               , name_ :: String
               } deriving (Show)

alldata = [ Foo 1 "Name"
          , Foo 2 "Name"
          , Foo 3 "Karl"
          , Foo 4 "Karl"
          , Foo 5 "Karl"
          , Foo 7 "Tim"
          , Foo 8 "Tim"
          , Foo 9 "Gaby"
          , Foo 9 "Name"
          ]
isolate :: [Foo] -> [Foo]
isolate [] = []
isolate (x:xs) = (fst f) : isolate (snd f)
  where
    f = foldl helper (x, []) xs
    helper (a,b) y = if name_ a == name_ y
                       then if id_ a >= id_ y
                              then (a, b)
                              else (y, b)
                       else (a, y:b)
main :: IO ()
main = mapM_ (putStrLn . show) (isolate alldata)
Output:
Foo {id_ = 9, name_ = "Name"}
Foo {id_ = 9, name_ = "Gaby"}
Foo {id_ = 5, name_ = "Karl"}
Foo {id_ = 8, name_ = "Tim"}
A library-based solution:
We can use that style of Haskell programming where all looping and recursion activities are pushed out of user code and into suitable library functions. Said library functions are often optimized in ways that are way beyond the skills of a Haskell beginner.
A way to decompose the problem into two passes goes like this:
produce a second list that is parallel to the input list, but with duplicate elements suitably marked
eliminate elements marked as duplicates from that second list
For the first step, duplicate elements don't need a value at all, so we can use [Maybe a] as the type of the second list. So we need a function of type:
pass1 :: Eq a => [a] -> [Maybe a]
Function pass1 is an example of stateful list traversal where the state is the list (or set) of distinct elements seen so far. For this sort of problem, the library provides the mapAccumL :: (s -> a -> (s, b)) -> s -> [a] -> (s, [b]) function.
Here the mapAccumL function requires, besides the initial state and the input list, a step function argument, of type s -> a -> (s, Maybe a).
If the current element x is not a duplicate, the output of the step function is Just x and x gets added to the current state. If x is a duplicate, the output of the step function is Nothing, and the state is passed unchanged.
Testing under the ghci interpreter:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> stepFn s x = if (elem x s) then (s, Nothing) else (x:s, Just x)
λ>
λ> import Data.List(mapAccumL)
λ>
λ> pass1 xs = mapAccumL stepFn [] xs
λ>
λ> xs2 = snd $ pass1 "abacrba"
λ> xs2
[Just 'a', Just 'b', Nothing, Just 'c', Just 'r', Nothing, Nothing]
λ>
Writing a pass2 function is even easier. To filter out Nothing non-values, we could use:
import Data.Maybe( fromJust, isJust)
pass2 = (map fromJust) . (filter isJust)
But why bother at all? This is precisely what the catMaybes library function does:
λ>
λ> import Data.Maybe(catMaybes)
λ>
λ> catMaybes xs2
"abcr"
λ>
Putting it all together:
Overall, the source code can be written as:
import Data.Maybe(catMaybes)
import Data.List(mapAccumL)
uniques :: (Eq a) => [a] -> [a]
uniques = let stepFn s x = if (elem x s) then (s, Nothing) else (x:s, Just x)
          in  catMaybes . snd . mapAccumL stepFn []
This code is reasonably compatible with infinite lists, something occasionally referred to as being “laziness-friendly”:
λ>
λ> take 5 $ uniques $ "abacrba" ++ (cycle "abcrf")
"abcrf"
λ>
Efficiency note:
If we anticipate that it is possible to find many distinct elements in the input list, and we have an Ord a instance, the state can be implemented as a Set object rather than a plain list, without having to alter the overall structure of the solution.
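A minimal sketch of that variation, assuming an Ord instance is available (uniquesOrd is a made-up name; the structure of the solution is unchanged, only the state type differs):
import Data.List (mapAccumL)
import Data.Maybe (catMaybes)
import qualified Data.Set as Set

uniquesOrd :: Ord a => [a] -> [a]
uniquesOrd = let stepFn s x = if Set.member x s
                                then (s, Nothing)
                                else (Set.insert x s, Just x)
             in  catMaybes . snd . mapAccumL stepFn Set.empty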
Here's a solution that uses only Prelude functions:
uniqueList theList =
  if not (null theList)
    then head theList : filter (/= head theList) (uniqueList (tail theList))
    else []
I'm assuming this is equivalent to running two or three nested "for" loops (running through each element, then running through each element again to check for other elements with the same value, then removing those other elements), so I'd estimate this is O(n^2) or O(n^3).
Might even be better than reversing a list, nubbing it, then reversing it again, depending on your circumstances.