If I have two lists, I want to define a positional equality (in a particular sense) between elements. For example if:
k = [[3,1,2,4],[1,4,2,3],[1,3,4,2]]
s = [["a","b","c","d"],["d","a","c","b"],["c","b","a","d"],["d","b","c","a"]]
I want to be able to say 2 ∼ "c" to a function and get back all tuples where 2 and "c" share the same position in their respective lists.
res= [([3,1,2,4],["a","b","c","d"])
,([3,1,2,4],["d","a","c","b"])
,([3,1,2,4],["d","b","c","a"])
,([1,4,2,3],["a","b","c","d"])
,([1,4,2,3],["d","a","c","b"])
,([1,4,2,3],["d","b","c","a"])
]
Something like this would be a matter of two loops in some other language, but I have spent the better part of a day trying to write this function in Haskell. My current attempt:
eqElem i1 (l1:ls1) i2 (l2:ls2) = helper1 i1 l1 i2 l2 0 where
  helper1 i1 (p:ps) i2 l2 ctr1
    | i1 == p   = helper2 i2 l2 ctr1 0
    | otherwise = helper1 i1 ps i2 l2 (ctr1+1)
  helper2 i2 (p:ps) ctr1 ctr2
    | i2 == p && ctr1 == ctr2 = (l1,l2) : eqElem i1 (l1:ls1) i2 ls2
    | otherwise               = helper2 i2 ps ctr1 (ctr2+1)
  helper2 i2 [] ctr1 ctr2 = eqElem i1 ls1 i2 (l2:ls2)
eqElem i1 [] i2 _ = []
Right now this gives:
*Main Lib> eqElem 2 k "c" s
[([3,1,2,4],["a","b","c","d"]),([3,1,2,4],["d","a","c","b"])]
which is only about half right; I can probably get it right if I keep at it but I just want to make sure that I am not reinventing the wheel or something.
So...what is the idiomatic Haskell way to do this? Is there one? I feel like I am forcing Haskell to be imperative and that there must be some higher order function or method that can be used to get this done.
The big issue is that I do not know the lists before hand. They can be of arbitrary data type, of differing lengths, and/or (nested) depths.
They are parsed from user input to a REPL and stored in an ADT that can at best be made a Functor, Monad and Applicative. A list comprehension would require Alternative and MonadPlus, but I can't make those instances because then category theory would get mad.
Probably something like this would be very idiomatic (if not super efficient):
import Data.List
eqElem l r lss rss =
  [ (ls, rs)
  | ls <- lss
  , rs <- rss
  , findIndex (l==) ls == findIndex (r==) rs
  ]
In ghci:
> mapM_ print $ eqElem 2 "c" [[3,1,2,4],[1,4,2,3],[1,3,4,2]] [["a","b","c","d"],["d","a","c","b"],["c","b","a","d"],["d","b","c","a"]]
([3,1,2,4],["a","b","c","d"])
([3,1,2,4],["d","a","c","b"])
([3,1,2,4],["d","b","c","a"])
([1,4,2,3],["a","b","c","d"])
([1,4,2,3],["d","a","c","b"])
([1,4,2,3],["d","b","c","a"])
This has two efficiency problems: (1) it recomputes the location of the input elements in the input lists repeatedly, and (2) it iterates over all pairs of input lists. So this way is O(mnp), where m is the length of lss, n is the length of rss, and p is the length of the longest element of lss or rss. A more efficient version, which calls findIndex only once per input list and iterates over many fewer pairs of lists (O(mn + mp + np + m log m + n log n)), would look like this:
import Control.Applicative
import Data.List (findIndex)
import qualified Data.Map as M

eqElem l r lss rss
    = concat . M.elems
    $ M.intersectionWith (liftA2 (,)) (index l lss) (index r rss)
  where
    index v vss = M.fromListWith (++) [(findIndex (v==) vs, [vs]) | vs <- vss]
The basic idea is to build up Maps which tell which input lists have the given elements at which positions. Then the intersection of these two maps lines up input lists that have the given elements at the same positions, so we can just take the Cartesian product of the values there with liftA2 (,).
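For example, with k and s from the question, the two locally built maps contain roughly the following (the order of the lists inside the values depends on fromListWith):

index 2 k   ==  fromList [(Just 2, [[1,4,2,3],[3,1,2,4]]), (Just 3, [[1,3,4,2]])]
index "c" s ==  fromList [(Just 0, [["c","b","a","d"]]), (Just 2, [["d","b","c","a"],["d","a","c","b"],["a","b","c","d"]])]

Only the key Just 2 survives the intersection, and liftA2 (,) pairs up its two value lists, which is exactly the six pairs in the ghci output below.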
Again in ghci:
> mapM_ print $ eqElem 2 "c" [[3,1,2,4],[1,4,2,3],[1,3,4,2]] [["a","b","c","d"],["d","a","c","b"],["c","b","a","d"],["d","b","c","a"]]
([1,4,2,3],["d","b","c","a"])
([1,4,2,3],["d","a","c","b"])
([1,4,2,3],["a","b","c","d"])
([3,1,2,4],["d","b","c","a"])
([3,1,2,4],["d","a","c","b"])
([3,1,2,4],["a","b","c","d"])
Something like this would be a matter of two loops in some other language
List comprehension then, quite straightforward in fact:
eqElem a b ass bss =
  [ (as, bs) | as <- ass, bs <- bss, any (== (a,b)) $ zip as bs ]
Reading it aloud: for every sublist as from ass and (nested) for every sublist bs from bss, check whether, when as and bs are zipped together, there is any tuple equal to (a,b); if so, include (as,bs) in the result.
This should also handle the situation where the sublists contain duplicate elements.
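A quick check in ghci (my own run, using k and s from the question) should reproduce exactly the six pairs listed as res above:

> mapM_ print $ eqElem 2 "c" k s
([3,1,2,4],["a","b","c","d"])
([3,1,2,4],["d","a","c","b"])
([3,1,2,4],["d","b","c","a"])
([1,4,2,3],["a","b","c","d"])
([1,4,2,3],["d","a","c","b"])
([1,4,2,3],["d","b","c","a"])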
Related
I need to generate infinite sorted list of all coprimes.
The first element in each pair must be less than the second.
The sorting must be done in ascending order -- by the sum of pair's elements; and if two sums are equal, then by the pair's first element.
So, the resulting list must be
[(2,3),(2,5),(3,4),(3,5),(2,7),(4,5),(3,7),(2,9),(3,8),(4,7), ...]
Here's my solution.
coprimes :: [(Int, Int)]
coprimes = sortBy (\t1 t2 -> if uncurry (+) t1 <= uncurry (+) t2 then LT else GT) $ helper [2..]
  where helper xs = [(x,y) | x <- xs, y <- xs, x < y, gcd x y == 1]
The problem is that I can't take the first n pairs. I realize that sorting can't be done on infinite lists.
But how can I generate the same sequence in a lazy way?
While probably not the most optimal way, it should work if you first generate all possible pairs and then filter them.
So using your criteria:
pairs :: [(Integer,Integer)]
pairs = [ (i,l-i) | l <- [1..], i <- [1..l-1] ]
coprimes :: [(Integer,Integer)]
coprimes = [ (i,j) | (i,j) <- pairs, 1 < i, i < j, gcd i j == 1 ]
produces
λ> take 10 coprimes
[(2,3),(2,5),(3,4),(3,5),(2,7),(4,5),(3,7),(2,9),(3,8),(4,7)]
Now of course you can push some of the conditions (1 < i and i < j come to mind) into the pairs definition, or even merge the two comprehensions, but I think it's more obvious what's going on this way.
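For instance, one possible fused version (my own sketch, with the bounds chosen so that both conditions hold by construction) could look like this:

coprimes2 :: [(Integer, Integer)]
coprimes2 = [ (i, l - i) | l <- [5..], i <- [2 .. (l - 1) `div` 2], gcd i (l - i) == 1 ]

and take 10 coprimes2 should give the same ten pairs as above.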
Here's a possible solution following Chapter 9 of Richard Bird's Thinking Functionally in Haskell:
coprimes = mergeAll $ map coprimes' [2..]

coprimes' n = [(n, m) | m <- [n+1..], gcd m n == 1]

merge (x:xs) (y:ys)
  | s x < s y  = x : merge xs (y:ys)
  | s x == s y = x : y : merge xs ys
  | otherwise  = y : merge (x:xs) ys
  where s (x, y) = x + y

xmerge (x:xs) ys = x : merge xs ys

mergeAll = foldr1 xmerge
And the result is:
> take 10 $ coprimes
[(2,3),(2,5),(3,4),(3,5),(2,7),(4,5),(3,7),(2,9),(3,8),(4,7)]
Note that the natural definition of mergeAll would be foldr1 merge, but this doesn't work because it would try to find the minimum of the first elements of all the lists before returning the first element, and hence you end up in an infinite loop. However, since we know that the lists are in ascending order and the minimum is the first element of the first list, xmerge does the trick.
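A quick way to see the difference (my own check, using the definitions above):

> head (foldr1 xmerge (map coprimes' [2..]))
(2,3)
> head (foldr1 merge (map coprimes' [2..]))    -- never returns: merge forces both arguments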
Note: this solution appears to be significantly (about two orders of magnitude) slower than Carsten's "naive" answer, so I advise avoiding it if you are interested in performance. Yet it still is an interesting approach that might be effective in other situations.
As #Bakuriu suggests, merging an infinite list of infinite lists is a solution, but the devil is in the details.
The diagonal function from the universe-base package can do this, so you could write:
import Data.Universe.Helpers
coprimes = diagonal [ go n | n <- [2..] ]
  where go n = [ (n,k) | k <- [n+1..], gcd n k == 1 ]
Note - this doesn't satisfy your sorted criteria, but I mention it because the functions in that package are useful to know about, and implementing a function like diagonal correctly is not easy.
If you want to write your own, consider decomposing the infinite grid N x N (where N is the natural numbers) into diagonals:
[ (1,1) ] ++ [ (1,2), (2,1) ] ++ [ (1,3), (2,2), (3,1) ] ++ ...
and filtering this list.
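A rough hand-rolled sketch of that idea (my own, not the universe-base implementation, with made-up names):

grid :: [(Integer, Integer)]
grid = [ (i, d - i) | d <- [2..], i <- [1 .. d - 1] ]   -- (1,1), (1,2), (2,1), (1,3), ...

coprimesViaGrid :: [(Integer, Integer)]
coprimesViaGrid = [ (i, j) | (i, j) <- grid, 1 < i, i < j, gcd i j == 1 ]

which, as it happens, visits the diagonals in exactly the sum-then-first-element order the question asks for.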
I need to generate infinite sorted list of all coprimes. The first element in each pair must be less than the second. The sorting must be done in ascending order -- by the sum of pair's elements; and if two sums are equal, then by the pair's first element.
So, we generate ascending pairs of sum and first element, and keep only the coprimes. Easy cheesy!
[ (first, second)
| sum <- [3..]
, first <- [2..sum `div` 2]
, let second = sum-first
, gcd first second == 1
]
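If you bind the comprehension to a name (my wrapping; I also renamed sum to total so it doesn't shadow Prelude.sum), you can try it directly:

coprimesBySum :: [(Int, Int)]
coprimesBySum =
  [ (first, second)
  | total <- [3..]
  , first <- [2 .. total `div` 2]
  , let second = total - first
  , gcd first second == 1
  ]

> take 10 coprimesBySum
[(2,3),(2,5),(3,4),(3,5),(2,7),(4,5),(3,7),(2,9),(3,8),(4,7)]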
I have a list of doubles (myList), which I want to add to a new list (someList), but once the new list reaches a set size, i.e. 25, I want to stop adding to it. I have tried implementing this function using sum but was unsuccessful. Example code below.
someList = [(a)| a <- myList, sum someList < 30]
The way #DanielFischer phrased the question is compatible with the Haskell way of thinking.
Do you want someList to be the longest prefix of myList that has a sum < 30?
Here's how I'd approach it: let's say our list is
>>> let list = [1..20]
we can find the "cumulative sums" using:
>>> let sums = tail . scanl (+) 0
>>> sums list
[1,3,6,10,15,21,28,36,45,55,66,78,91,105,120,136,153,171,190,210]
Now zip that with the original list to get pairs of elements with the sum up to that point
>>> zip list (sums list)
[(1,1),(2,3),(3,6),(4,10),(5,15),(6,21),(7,28),(8,36),
(9,45),(10,55),(11,66),(12,78),(13,91),(14,105),(15,120),
(16,136),(17,153),(18,171),(19,190),(20,210)]
Then we can takeWhile this list to get the prefix we want:
>>> takeWhile (\x -> snd x < 30) (zip list (sums list))
[(1,1),(2,3),(3,6),(4,10),(5,15),(6,21),(7,28)]
Finally, we can get rid of the cumulative sums that we used to perform this calculation:
>>> map fst (takeWhile (\x -> snd x < 30) (zip list (sums list)))
[1,2,3,4,5,6,7]
Note that because of laziness, this is as efficient as the recursive solutions -- only the sums up to the point where they fail the test need to be calculated. This can be seen because the solution works on infinite lists (because if we needed to calculate all the sums, we would never finish).
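For instance, running the same pipeline on the infinite list [1..] still terminates (a quick check of that claim, using the sums defined above):

>>> map fst (takeWhile (\x -> snd x < 30) (zip [1..] (sums [1..])))
[1,2,3,4,5,6,7]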
I'd probably abstract this and take the limit as a parameter:
>>> :{
... let initial lim list =
... map fst (takeWhile (\x -> snd x < lim) (zip list (sums list)))
... :}
This function has an obvious property it should satisfy, namely that the sum of the returned list should always be less than the limit (as long as the limit is greater than 0). So we can use QuickCheck to make sure we did it right:
>>> import Test.QuickCheck
>>> quickCheck (\lim list -> lim > 0 ==> sum (initial lim list) < lim)
+++ OK, passed 100 tests.
someList = makeList myList [] 0 where
  makeList [] ys _ = ys                      -- ran out of input: keep what we have so far
  makeList (x:xs) ys total = let newTot = total + x
                             in if newTot >= 25
                                then ys
                                else makeList xs (ys ++ [x]) newTot
This takes elements from myList as long as their sum is less than 25.
The logic takes place in makeList. It takes the first element of the input list and adds it to the running total, to see if it's greater than 25. If it is, we shouldn't add it to the output list, and we finish recursing. Otherwise, we put x on the end of the output list (ys) and keep going with the rest of the input list.
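For a quick sanity check, assume a made-up myList = [10, 5, 8, 3, 1] :: [Double] (my example, not from the question); the running totals are 10, 15, 23, then 26 >= 25, so the recursion stops before adding 3:

ghci> someList
[10.0,5.0,8.0]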
The behaviour you want is
ghci> appendWhileUnder 25 [1..5] [1..5]
[1,2,3,4,5,1,2,3]
because that sums to 21 and adding the 4 would bring it to 25.
OK, one way to go about this is by just appending them with ++ then taking the initial segment that's under 25.
appendWhileUnder n xs ys = takeWhileUnder n (xs++ys)
I don't want to keep summing intermediate lists, so I'll keep track of how much I'm still allowed (n).
takeWhileUnder n [] = []
takeWhileUnder n (x:xs) | x < n = x:takeWhileUnder (n-x) xs
| otherwise = []
Here I allow x through if it doesn't take me beyond what's left of my allowance.
Possibly undesired side effect: it'll chop out bits of the original list if it sums to over 25. Workaround: use
appendWhileUnder' n xs ys = xs ++ takeWhileUnder (n - sum xs) ys
which keeps the entire xs whether it brings you over n or not.
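To see the difference between the two (my own hedged example): an oversized first list gets chopped by the first version but survives the second:

ghci> appendWhileUnder 25 [30] [1..5]
[]
ghci> appendWhileUnder' 25 [30] [1..5]
[30]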
What's the most direct/efficient way to create all possibilities of dividing one (even) list into two in Haskell? I toyed with splitting all permutations of the list but that would add many extras - all the instances where each half contains the same elements, just in a different order. For example,
[1,2,3,4] should produce something like:
[ [1,2], [3,4] ]
[ [1,3], [2,4] ]
[ [1,4], [2,3] ]
Edit: thank you for your comments -- the order of elements and the type of the result is less important to me than the concept - an expression of all two-groups from one group, where element order is unimportant.
Here's an implementation, closely following the definition.
The first element always goes into the left group. After that, we add the next head element into one group or the other. If one of the groups becomes too big, there is no choice anymore and we must add all the rest into the shorter group.
divide :: [a] -> [([a], [a])]
divide [] = [([],[])]
divide (x:xs) = go ([x],[], xs, 1, length xs) []
  where
    go (a,b, [], i,j) zs = (a,b) : zs           -- i == length a - length b
    go (a,b, s@(x:xs), i,j) zs                  -- j == length s
      | i >= j    = (a, b++s) : zs
      | (-i) >= j = (a++s, b) : zs
      | otherwise = go (x:a, b, xs, i+1, j-1) $ go (a, x:b, xs, i-1, j-1) zs
This produces
*Main> divide [1,2,3,4]
[([2,1],[3,4]),([3,1],[2,4]),([1,4],[3,2])]
The limitation of having an even length list is unnecessary:
*Main> divide [1,2,3]
[([2,1],[3]),([3,1],[2]),([1],[3,2])]
(the code was re-written in the "difference-list" style for efficiency: go2 A zs == go1 A ++ zs).
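For comparison, a direct-style version (roughly what go1 would look like; my own reconstruction, names assumed) is:

divide0 :: [a] -> [([a], [a])]
divide0 [] = [([],[])]
divide0 (x:xs) = go ([x], [], xs, 1, length xs)
  where
    go (a, b, [],       _, _) = [(a, b)]
    go (a, b, s@(y:ys), i, j)
      | i >= j    = [(a, b ++ s)]
      | (-i) >= j = [(a ++ s, b)]
      | otherwise = go (y:a, b, ys, i+1, j-1) ++ go (a, y:b, ys, i-1, j-1)

Here the ++ on result lists is exactly what the difference-list style avoids.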
edit: How does this work? Imagine yourself sitting at a pile of stones, dividing it into two. You put the first stone to a side, which one it doesn't matter (so, left, say). Then there's a choice where to put each next stone — unless one of the two piles becomes too small by comparison, and we thus must put all the remaining stones there at once.
To find all partitions of a non-empty list (of even length n) into two equal-sized parts, we can, to avoid repetitions, posit that the first element shall be in the first part. Then it remains to find all ways to split the tail of the list into one part of length n/2 - 1 and one of length n/2.
-- not to be exported
splitLen :: Int -> Int -> [a] -> [([a],[a])]
splitLen 0 _ xs = [([],xs)]
splitLen _ _ [] = error "Oops"
splitLen k l ys@(x:xs)
  | k == l    = [(ys,[])]
  | otherwise = [(x:us,vs) | (us,vs) <- splitLen (k-1) (l-1) xs]
             ++ [(us,x:vs) | (us,vs) <- splitLen k (l-1) xs]
does that splitting if called appropriately. Then
partitions :: [a] -> [([a],[a])]
partitions [] = [([],[])]
partitions (x:xs)
  | even len  = error "Original list with odd length"
  | otherwise = [(x:us,vs) | (us,vs) <- splitLen half len xs]
  where
    len  = length xs
    half = len `quot` 2
generates all the partitions without redundantly computing duplicates.
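A quick check against the example from the question (my own run; I believe the results come out in this order):

> partitions [1,2,3,4]
[([1,2],[3,4]),([1,3],[2,4]),([1,4],[2,3])]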
luqui raises a good point. I haven't taken into account the possibility that you'd want to split lists with repeated elements. With those, it gets a little more complicated, but not much. First, we group the list into equal elements (done here with an Ord constraint; with only Eq it could still be done in O(length²)). The idea is then similar: to avoid repetitions, we posit that the first half contains more elements of the first group than the second (or, if the first group has an even number of elements, equally many; similar restrictions hold for the next group, etc.).
import Data.List (group, sort)

repartitions :: Ord a => [a] -> [([a],[a])]
repartitions = map flatten2 . halves . prepare
  where
    flatten2 (u,v) = (flatten u, flatten v)

prepare :: Ord a => [a] -> [(a,Int)]
prepare = map (\xs -> (head xs, length xs)) . group . sort

halves :: [(a,Int)] -> [([(a,Int)],[(a,Int)])]
halves [] = [([],[])]
halves ((a,k):more)
  | odd total = error "Odd number of elements"
  | even k    = [ ((a,low):us, (a,low):vs) | (us,vs) <- halves more ]
                ++ [ normalise ((a,c):us, (a,k-c):vs)
                   | c <- [low + 1 .. min half k], (us,vs) <- choose (half-c) remaining more ]
  | otherwise = [ normalise ((a,c):us, (a,k-c):vs)
                | c <- [low + 1 .. min half k], (us,vs) <- choose (half-c) remaining more ]
  where
    remaining = sum $ map snd more
    total     = k + remaining
    half      = total `quot` 2
    low       = k `quot` 2
    normalise (u,v) = (nz u, nz v)
    nz = filter ((/= 0) . snd)

choose :: Int -> Int -> [(a,Int)] -> [([(a,Int)],[(a,Int)])]
choose 0 _ xs = [([],xs)]
choose _ _ [] = error "Oops"
choose need have ((a,k):more) =
    [ ((a,c):us, (a,k-c):vs) | c <- [least .. most], (us,vs) <- choose (need-c) (have-k) more ]
  where
    least = max 0 (need + k - have)
    most  = min need k

flatten :: [(a,Int)] -> [a]
flatten xs = xs >>= uncurry (flip replicate)
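A couple of quick checks (my own runs; the exact ordering of the results may differ):

> repartitions [1,2,3,4]
[([1,4],[2,3]),([1,3],[2,4]),([1,2],[3,4])]
> repartitions [0,0,0,0]
[([0,0],[0,0])]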
Daniel Fischer's answer is a good way to solve the problem. I offer a worse (more inefficient) way, but one which more obviously (to me) corresponds to the problem description. I will generate all partitions of the list into two equal length sublists, then filter out equivalent ones according to your definition of equivalence. The way I usually solve problems is by starting like this -- create a solution that is as obvious as possible, then gradually transform it into a more efficient one (if necessary).
import Data.List (sort, nubBy, permutations)
type Partition a = ([a],[a])
-- Your notion of equivalence (sort to ignore the order)
equiv :: (Ord a) => Partition a -> Partition a -> Bool
equiv p q = canon p == canon q
  where
    canon (xs,ys) = sort [sort xs, sort ys]

-- All ordered partitions
partitions :: [a] -> [Partition a]
partitions xs = map (splitAt l) (permutations xs)
  where
    l = length xs `div` 2

-- All partitions filtered out by the equivalence
equivPartitions :: (Ord a) => [a] -> [Partition a]
equivPartitions = nubBy equiv . partitions
Testing
>>> equivPartitions [1,2,3,4]
[([1,2],[3,4]),([3,2],[1,4]),([3,1],[2,4])]
Note
After using QuickCheck to test the equivalence of this implementation with Daniel's, I found an important difference. Clearly, mine requires an (Ord a) constraint and his does not, and this hints at what the difference would be. In particular, if you give his [0,0,0,0], you will get a list with three copies of ([0,0],[0,0]), whereas mine will give only one copy. Which of these is correct was not specified; Daniel's is natural when considering the two output lists to be ordered sequences (which is what that type is usually considered to be), mine is natural when considering them as sets or bags (which is how this question seemed to be treating them).
Splitting The Difference
It is possible to get from an implementation that requires Ord to one that doesn't, by operating on the positions rather than the values in a list. I came up with this transformation -- an idea which I believe originates with Benjamin Pierce in his work on bidirectional programming.
{-# LANGUAGE RankNTypes #-}

import Data.Traversable
import Control.Monad.Trans.State

data Labelled a = Labelled { label :: Integer, value :: a }

instance Eq (Labelled a) where
    a == b = compare a b == EQ

instance Ord (Labelled a) where
    compare a b = compare (label a) (label b)

labels :: (Traversable t) => t a -> t (Labelled a)
labels t = evalState (traverse trav t) 0
  where
    trav x = state (\i -> i `seq` (Labelled i x, i + 1))

onIndices :: (Traversable t, Functor u)
          => (forall a. Ord a => t a -> u a)
          -> forall b. t b -> u b
onIndices f = fmap value . f . labels
Using onIndices on equivPartitions wouldn't speed it up at all, but it would allow it to have the same semantics as Daniel's (up to equiv of the results) without the constraint, and with my more naive and obvious way of expressing it -- and I just thought it was an interesting way to get rid of the constraint.
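For concreteness, here is one way the wiring could look (my own sketch, with hypothetical names, assuming equivPartitions and onIndices from above are in scope; Compose and Product come from base's Data.Functor.Compose and Data.Functor.Product, and are only there to give equivPartitions the t a -> u a shape onIndices expects):

import Data.Functor.Compose (Compose(..))
import Data.Functor.Product (Product(..))

equivPartitionsAnyEq :: [b] -> [([b], [b])]
equivPartitionsAnyEq = map (\(Pair xs ys) -> (xs, ys)) . getCompose . onIndices wrapped
  where
    wrapped :: Ord a => [a] -> Compose [] (Product [] []) a
    wrapped = Compose . map (uncurry Pair) . equivPartitions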
My own generalized version, added much later, inspired by Will's answer:
import Data.Map (adjust, fromList, toList)
import Data.List (groupBy, sort)

divide xs n evenly = divide' xs (zip [0..] (replicate n [])) where
    evenPSize = div (length xs) n
    divide' [] result = [result]
    divide' (x:xs) result = do
        index <- indexes
        divide' xs (toList $ adjust (x :) index (fromList result))
      where
        notEmptyBins = filter (not . null . snd) $ result
        partlyFullBins
          | evenly == "evenly" = map fst . filter ((< evenPSize) . length . snd) $ notEmptyBins
          | otherwise          = map fst notEmptyBins
        indexes = partlyFullBins
                  ++ if any (null . snd) result
                     then map fst . take 1 . filter (null . snd) $ result
                     else if null partlyFullBins
                          then map fst . head . groupBy (\a b -> length (snd a) == length (snd b)) . sort $ result
                          else []
Consider the following code I wrote:
import Control.Monad
increasing :: Integer -> [Integer]
increasing n
  | n == 1    = [1..9]
  | otherwise = do let ps = increasing (n - 1)
                   let last = liftM2 mod ps [10]
                   let next = liftM2 (*) ps [10]
                   alternateEndings next last
  where alternateEndings xs ys = concat $ zipWith alts xs ys
        alts x y = liftM2 (+) [x] [y..9]
Here 'increasing n' should return a list of n-digit numbers whose digits increase (or stay the same) from left to right.
Is there a way to simplify this? The use of 'let' and 'liftM2' everywhere looks ugly to me. I think I'm missing something vital about the list monad, but I can't seem to get rid of them.
Well, as far as the liftM functions go, my preferred alternative is the combinators defined in Control.Applicative. Using those, you'd be able to write last = mod <$> ps <*> [10]. The ap function from Control.Monad does the same thing as (<*>), but I prefer the infix version.
What (<$>) and (<*>) do goes like this: liftM2 turns a function a -> b -> c into a function m a -> m b -> m c. Plain liftM is just (a -> b) -> (m a -> m b), which is the same as fmap and also (<$>).
What happens if you do that to a multi-argument function? It turns something like a -> b -> c -> d into m a -> m (b -> c -> d). This is where ap or (<*>) come in: what they do is turn something like m (a -> b) into m a -> m b. So you can keep stringing it along that way for as many arguments as you like.
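A tiny concrete instance in the list monad (my own example):

> (\a b c -> a + b * c) <$> [1,2] <*> [10] <*> [100]
[1001,1002]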
That said, Travis Brown is correct that, in this case, it seems you don't really need any of the above. In fact, you can simplify your function a great deal. For instance, both last and next can be written as single-argument functions mapped over the same list, ps, and zipWith is the same as a zip and a map. All of these maps can be combined and pushed down into the alts function. This makes alts a single-argument function, eliminating the zip as well. Finally, the concat can be combined with the map as concatMap or, if preferred, (>>=). Here's what it ends up as:
increasing' :: Integer -> [Integer]
increasing' 1 = [1..9]
increasing' n = increasing' (n - 1) >>= alts
  where alts x = map ((x * 10) +) [mod x 10..9]
Note that all refactoring I did to get to that version from yours was purely syntactic, only applying transformations that should have no impact on the result of the function. Equational reasoning and referential transparency are nice!
I think what you are trying to do is this:
increasing :: Integer -> [Integer]
increasing 1 = [1..9]
increasing n = do p <- increasing (n - 1)
                  let last = p `mod` 10
                      next = p * 10
                  alt <- [last .. 9]
                  return $ next + alt
Or, using a "list comprehension", which is just special monad syntax for lists:
increasing2 :: Integer -> [Integer]
increasing2 1 = [1..9]
increasing2 n = [next + alt | p <- increasing (n - 1),
                              let last = p `mod` 10
                                  next = p * 10,
                              alt <- [last .. 9]
                ]
The idea in the list monad is that you use "bind" (<-) to iterate over a list of values, and let to compute a single value based on what you have so far in the current iteration. When you use bind a second time, the iterations are nested from that point on.
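A quick check (my own run) that this does what the question asks:

> take 12 (increasing 2)
[11,12,13,14,15,16,17,18,19,22,23,24]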
It looks very unusual to me to use liftM2 (or <$> and <*>) when one of the arguments is always a singleton list. Why not just use map? The following does the same thing as your code:
increasing :: Integer -> [Integer]
increasing n
  | n == 1    = [1..9]
  | otherwise = do let ps = increasing (n - 1)
                   let last = map (flip mod 10) ps
                   let next = map (10 *) ps
                   alternateEndings next last
  where alternateEndings xs ys = concat $ zipWith alts xs ys
        alts x y = map (x +) [y..9]
Here's how I'd write your code:
increasing :: Integer -> [Integer]
increasing 1 = [1..9]
increasing n = let allEndings x = map (10*x +) [x `mod` 10 .. 9]
               in concatMap allEndings $ increasing (n - 1)
I arrived at this code as follows. The first thing I did was to use pattern matching instead of guards, since it's clearer here. The next thing I did was to eliminate the liftM2s. They're unnecessary here, because they're always called with one size-one list; in that case, it's the same as calling map. So liftM2 (*) ps [10] is just map (* 10) ps, and similarly for the other call sites. If you want a general replacement for liftM2, though, you can use Control.Applicative's <$> (which is just fmap) and <*> to replace liftMn for any n: liftMn f a b c ... z becomes f <$> a <*> b <*> c <*> ... <*> z. Whether or not it's nicer is a matter of taste; I happen to like it.1 But here, we can eliminate that entirely.
The next place I simplified the original code is the do .... You never actually take advantage of the fact that you're in a do-block, and so that code can become
let ps   = increasing (n - 1)
    last = map (`mod` 10) ps
    next = map (* 10) ps
in alternateEndings next last
From here, arriving at my code essentially involved fusing all of your maps together. One of the only remaining calls that wasn't a map was zipWith. But because you effectively have zipWith alts next last, you only work with 10*p and p `mod` 10 at the same time, so we can calculate them in the same function. This leads to
let ps = increasing (n - 1)
in concat $ map alts ps
where alts p = map (10*p +) [p `mod` 10 .. 9]
And this is basically my code: concat $ map ... should always become concatMap (which, incidentally, is =<< in the list monad), we only use ps once so we can fold it in, and I prefer let to where.
1: Technically, this only works for Applicatives, so if you happen to be using a monad which hasn't been made one, <$> is `liftM` and <*> is `ap`. All monads can be made applicative functors, though, and many of them have been.
I think it's cleaner to pass the last digit in a separate parameter and use lists.
f a 0 = [[]]
f a n = do x <- [a..9]
           k <- f x (n-1)
           return (x:k)

num = foldl (\x y -> 10*x + y) 0

increasing = map num . f 1
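A quick check (my own run) that this agrees with the other versions:

> take 5 (f 1 2)
[[1,1],[1,2],[1,3],[1,4],[1,5]]
> take 12 (increasing 2)
[11,12,13,14,15,16,17,18,19,22,23,24]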
I am an absolute newbie in Haskell, but I am trying to understand how it works.
I want to write my own lazy list of integers such as [1,2,3,4,5...].
For list of ones I have written
ones = 1 : ones
and when tried, works fine:
*Main> take 10 ones
[1,1,1,1,1,1,1,1,1,1]
How can I do the same for increasing integers ?
I have tried this but it indeed fails:
int = 1 : head[ int + 1]
And after that how can I make a method that multiplies two streams? such as:
mulstream s1 s2 = head[s1] * head[s2] : mulstream [tail s1] [tail s2]
The reasons that int = 1 : head [ int + 1] doesn't work are:
head returns a single element, but the second argument to : needs to be a list.
int + 1 tries to add a list and a number, which isn't possible.
The easiest way to create the list counting up from 1 to infinity is [1..]
To count in steps other than 1 you can use [firstElement, secondElement ..], e.g. to create a list of all positive odd integers: [1, 3 ..]
To get infinite lists of the form [x, f x, f (f x), f (f (f x)), ...] you can use iterate f x, e.g. iterate (*2) 1 will return the list [1, 2, 4, 8, 16, ...].
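So for the list you were after, either of these works (my own examples):

> take 10 [1..]
[1,2,3,4,5,6,7,8,9,10]
> take 10 (iterate (+1) 1)
[1,2,3,4,5,6,7,8,9,10]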
To apply an operation pairwise on each pair of elements of two list, use zipWith:
mulstream s1 s2 = zipWith (*) s1 s2
To make this definition more concise you can use the point-free form:
mulstream = zipWith (*)
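For example (my own check):

> take 5 (mulstream [1..] [1..])
[1,4,9,16,25]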
For natural numbers you have to use map:
num1 = 1 : map (+1) num1
Or comprehensions:
num2 = 1 : [x+1 | x <- num2]
Or of course:
num3 = [1..]
There is syntax for this in the language:
take 10 [1,2..]
=> [1,2,3,4,5,6,7,8,9,10]
You can even do different strides:
take 10 [1,3..]
=> [1,3,5,7,9,11,13,15,17,19]
I'm not sure if this is what you were asking, but it would seem to me that you wanted to build a list of increasing natural numbers, without relying on any other list. So, by that token, you can do things like
incr a = a : incr (a+1)
lst = incr 1
take 3 lst
=> [1,2,3]
That, technically, is called an accumulating function (I believe), and all we did then was make a special case of it that's easily usable as 'lst'.
You can go mad from there, doing things like:
lst = 1 : incr lst where incr a = (head a) + 1 : incr (tail a)
take 3 lst
=> [1,2,3]
and so on, though that probably relies on some stuff that you won't have learned yet (where) - judging by the OP - but it should still read pretty easily.
Oh, right, and then the list multiplication. Well, you can use zipWith (*) as mentioned above, or you could reinvent the wheel like this (it's more fun, trust me :)
lmul a b = (head a * head b) : lmul (tail a) (tail b)

safemul a b
  | null a || null b = []
  | otherwise        = (head a * head b) : safemul (tail a) (tail b)
The reason for safemul, I believe, you can find out by experimenting with the function lmul, but it has to do with 'tail' (and 'head' as well). The trouble is, there's no case for an empty list, mismatched lists, and so on in lmul, so you're either going to have to hack together various definitions (lmul _ [] = []), use guards and/or where and so on ... or stick with zipWith :)
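To see what goes wrong (my own example): with lists of different lengths, lmul eventually calls head/tail on an empty list, while safemul stops cleanly:

> lmul [1,2,3] [4,5]      -- prints 4 and 10, then crashes with a "head: empty list" error
> safemul [1,2,3] [4,5]
[4,10]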
You can define a list of ones up to a certain number and then sum the first element into the second while keeping the former intact (and so on), like this:
ones :: Integer -> [Integer]
ones n
  | n <= 0    = []
  | otherwise = one n []
  where
    one 1 a = (1:a)
    one n a = one (n-k) (one k a)
      where
        k = (n-1)

sumOf :: [Integer] -> [Integer]
sumOf l = sof l []
  where
    sof [] a = a
    sof (x:[]) a = (x:a)
    sof (x:y:zs) a = sof (x:a) (sof ((x+y):zs) a)
Since they're all ones, you can increment them in any way that you feel like, from left to right, to a middle point and so on, by changing the order of their sum. You can test this up to one hundred (or more) by using:
(sumOf . ones) 100
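Or, on a smaller input (my own check, using the definitions above), where the result is easy to see:

> (sumOf . ones) 5
[1,2,3,4,5]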
Edit: for a simplification of this, see the comments below by Will Ness.