clojure "look-and say" sequence - clojure

Working on some 2015 AoC problems to learn Clojure... The code below was quick enough for the 40th iteration, but crawls to a halt much after that. I compared it to a few other people's solutions and it's not immediately obvious to me why mine is so slow. I tried to use recur, believing it to be about as efficient as a loop (and to avoid stack consumption). One thing I'm not 100% clear on is whether there is a significant difference between just using recur versus using loop/recur. I tested it both ways and saw no difference.
(def data "3113322113")
(defn encode-string [data results count]
  (let [prev (first data)
        curr (second data)]
    (cond (empty? data) results
          (not= prev curr) (recur (rest data) (str results count prev) 1)
          :else (recur (rest data) results (inc count)))))

(count
  (nth (iterate #(encode-string % "" 1) data) 40 #_50))
An example of a solution I benchmarked against is Bruce Hauman's, which is really nice:
(defn count-encode [x]
  (apply str
    (mapcat
      (juxt count first)
      (partition-by identity x))))
I realize that in my solution I am iterating over very large strings repeatedly, but I don't see how Bruce's is so much faster, since although he is not explicitly iterating, partition-by is presumably iterating behind the scenes.

Your version is computing something like
(str "11" (str "22" (str "31" ...)))
which is building up a brand-new String object for every two characters. Since this involves iterating over every character in the input and output strings at each step, your operation is quadratic in the length of the string.
The solution you're comparing to is different: it builds up a lazy sequence of counts and characters, which is a linear-time process. Then, it does something like
(apply str [1 1 2 2 3 1])
which is the same as
(str 1 1 2 2 3 1 ...)
and str, when called with multiple arguments, uses a StringBuilder to efficiently build up the result incrementally, an optimization that is not available if you demand a full-fledged String object at every intermediate step. As a result, the whole process is linear-time, rather than quadratic.
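To make the contrast concrete, here is a hedged sketch (not the poster's code; encode-string-linear is a made-up name) that keeps the original single-pass structure but collects the pieces in a vector and joins them once at the end with apply str, so the whole pass stays linear:

;; Same algorithm, but accumulating pieces in a vector and joining once
;; at the end (apply str uses a single StringBuilder internally).
(defn encode-string-linear [s]
  (loop [s s, pieces [], n 1]
    (let [prev (first s)
          curr (second s)]
      (cond
        (empty? s)       (apply str pieces)
        (not= prev curr) (recur (rest s) (conj pieces n prev) 1)
        :else            (recur (rest s) pieces (inc n))))))

(count (nth (iterate encode-string-linear data) 40))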

Related

What scope should calls to lazy-seq have?

I'm writing a lazy implementation of Recamán's sequence, and ran into some confusion regarding where calls to lazy-seq should happen.
The first version I came up with this morning was:
(defn lazy-recamans-sequence []
  (let [f (fn rec [n seen last-s]
            (let [back (- last-s n)
                  new-s (if (and (pos? back) (not (seen back)))
                          back
                          (+ last-s n))]
              (lazy-seq ; Here
                (cons new-s (rec (inc n) (conj seen new-s) new-s)))))]
    (f 0 #{} 0)))
Then I realized that my placement of lazy-seq was kind of arbitrary, and that it could be placed higher to wrap more of the computations:
(defn lazy-recamans-sequence2 []
  (let [f (fn rec [n seen last-s]
            (lazy-seq ; Here
              (let [back (- last-s n)
                    new-s (if (and (pos? back) (not (seen back)))
                            back
                            (+ last-s n))]
                (cons new-s (rec (inc n) (conj seen new-s) new-s)))))]
    (f 0 #{} 0)))
Then I looked back on a review that someone gave me last night:
(defn recaman []
  (letfn [(tail [previous n seen]
            (let [nx (if (and (> previous n) (not (seen (- previous n))))
                       (- previous n)
                       (+ previous n))]
              ; Here, inside "cons"
              (cons nx (lazy-seq (tail nx (inc n) (conj seen nx))))))]
    (tail 0 0 #{})))
And they have theirs inside of the call to cons!
Thinking this over, it seems like it wouldn't make a difference. With a broader scope (like the second version), more code is inside the explicit function that's passed to LazySeq. With a narrower scope, however, the function itself may be smaller, but since the passed function involves a recursive call, it will execute the same code anyway.
They seem to perform nearly identically and give the same answers. Is there any reason to prefer placing lazy-seq in one place over another? Is this simply a stylistic choice, or can it have actual repercussions?
In the first two examples the lazy-seq wraps the cons call. This means that when you call the function you get back a lazy sequence immediately, without the first cons cell of the sequence being constructed.
In the first example the let expression is still outside of lazy-seq, so the value of the first item is calculated immediately, but the returned sequence is still lazy and not realized.
The second example is similar to the first. The lazy-seq wraps the cons cell and also the let block. This means that the function returns immediately and the value of the first item is calculated only when the caller starts to consume the lazy sequence.
In the third example the value of the first item in the list is calculated immediately and only the tail of the returned sequence is lazy.
Is there any reason to prefer placing lazy-seq in one place over another?
It depends on what you want to achieve. Do you want to return a sequence immediately without calculating any values? In this case make the scope of lazy-seq as broad as possible. Otherwise try to restrict the scope of lazy-seq to calculate only the tail part of the sequence.
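A quick way to see the difference at the REPL (a hedged sketch; the println calls are just markers for when the head value actually gets computed):

;; Three placements of lazy-seq, mirroring the question's three versions.
(defn narrow []                      ; let outside lazy-seq, like version 1
  (let [x (do (println "head computed") 1)]
    (lazy-seq (cons x nil))))

(defn broad []                       ; lazy-seq wraps everything, like version 2
  (lazy-seq
    (let [x (do (println "head computed") 1)]
      (cons x nil))))

(defn eager-head []                  ; lazy-seq only on the tail, like version 3
  (let [x (do (println "head computed") 1)]
    (cons x (lazy-seq nil))))

(def a (narrow))     ; prints "head computed" immediately
(def b (broad))      ; prints nothing yet
(def c (eager-head)) ; prints "head computed" immediately
(first b)            ; only now does b print "head computed"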
When I was first learning Clojure, I was a bit confused by the many possible choices of lazy-seq constructs, the lack of clarity in terms of which construct to choose, and the somewhat vague explanation for how lazy-seq creates laziness in the first place (it is implemented as a Java class of ~240 lines).
To reduce repetition and keep things as simple as possible, I created the lazy-cons macro. It is used like so:
(defn lazy-countdown [n]
  (when (<= 0 n)
    (lazy-cons n (lazy-countdown (dec n)))))

(deftest t-all
  (is= (lazy-countdown  5) [5 4 3 2 1 0])
  (is= (lazy-countdown  1) [1 0])
  (is= (lazy-countdown  0) [0])
  (is= (lazy-countdown -1) nil))
This version does realize the initial value n immediately.
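For reference, a minimal sketch of what such a macro might look like (an assumption on my part; the real lazy-cons lives in the tupelo library):

;; Sketch: sugar for wrapping a cons cell in lazy-seq.
(defmacro lazy-cons [head tail-expr]
  `(lazy-seq (cons ~head ~tail-expr)))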
I never worry about chunking (typically batches of 32) or trying to precisely control the number of elements realized in a lazy sequence. IMHO, if you need fine-grained control such as this, it is better to use an explicit loop than to make assumptions on the timing of realizations in a lazy sequence.
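For readers who haven't run into chunking before, it is easy to observe at the REPL; range produces chunked seqs, so asking for one element realizes a whole 32-element chunk:

;; Prints 0 through 31, then returns 0: map realized an entire chunk
;; even though only the first element was requested.
(first (map #(do (println %) %) (range 40)))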

How to forget the head (allow GC) of lazy sequences in Clojure?

Let's say I have a huge lazy seq and I want to iterate over it so I can process the data I get during the iteration.
The thing is, I want to lose the head of the lazy seq (i.e. let the already-processed part be GC'd) so that I can work on seqs with millions of items without an OutOfMemoryError.
I have 3 examples that I'm not sure about.
Could you provide best practices (examples) for that purpose?
Do these functions lose the head?
Example 1
(defn lose-head-fn
  [lazy-seq-coll]
  (when (seq (take 1 lazy-seq-coll))
    (do
      ;; do some processing... (take 10000 lazy-seq-coll)
      (recur (drop 10000 lazy-seq-coll)))))
Example 2
(defn lose-head-fn
  [lazy-seq-coll]
  (loop [i lazy-seq-coll]
    (when (seq (take 1 i))
      (do
        ;; do some processing... (take 10000 i)
        (recur (drop 10000 i))))))
Example 3
(doseq [i lazy-seq-coll]
  ;; do some processing...
  )
Update: there is also an explanation in this answer here.
copy of my above comments
As far as I know, all of the above lose the head (the first two are obvious, since you manually drop the head, while doseq's doc states that it does not retain the head).
That means that if the lazy-seq-coll you pass to the function isn't bound somewhere else with def or let and used later, there should be nothing to worry about. So (lose-head-fn (range)) won't eat all your memory, while
(def r (range))
(lose-head-fn r)
probably would.
And the only best practice I can think of is not to def possibly infinite (or just huge) sequences, because all of their realized items will live forever in the var.
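If you do need a named handle to a huge sequence, one common workaround (a sketch, not from the original answer) is to def a function that produces a fresh sequence rather than def'ing the sequence itself:

;; The var holds a producer function, not a seq, so nothing pins the head.
(defn nums [] (range))

(lose-head-fn (nums)) ; each call walks a fresh seq that can be GC'd as it goes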
In general, you must be careful not to retain a reference, either locally or globally, to an earlier part of a lazy seq when realizing a later part involves excessive computation.
For example:
(let [nums (range)
      first-ten (take 10 nums)]
  (+ (last first-ten) (nth nums 100000000)))
;; => 100000009
This takes about 2 seconds on a modern machine. How about this, though? The difference is in the last line, where the order of the arguments to + is swapped:
;; Don't do this!
(let [nums (range)
      first-ten (take 10 nums)]
  (+ (nth nums 100000000) (last first-ten)))
You'll hear your chassis/cpu fans come to life, and if you're running htop or similar, you'll see memory usage grow rather quickly (about 1G in the first several seconds for me).
What's going on?
Much like a linked list, elements in a lazy seq in Clojure reference the portion of the seq that comes next. In the second example above, first-ten is needed for the second argument to +. Thus, even though nth is happy to hold no references to anything (after all, it's just finding an index in a long list), first-ten refers to a portion of the sequence that, as stated above, must hold onto references to the rest of the sequence.
The first example, by contrast, computes (last first-ten), and after this, first-ten is no longer used. Now the only reference to any portion of the lazy sequence is nums. As nth does its work, each portion of the list that it's finished with is no longer needed, and since nothing else refers to the list in this block, as nth walks the list, the memory taken by the sequence that has been examined can be garbage collected.
Consider this:
;; Don't do this!
(let [nums (range)]
  (time (nth nums 1e8))
  (time (nth nums 1e8)))
Why does this have a similar result as the second example above? Because the sequence will be cached (held in memory) on the first realization of it (the first (time (nth nums 1e8))), because nums is being used on the next line. If, instead, we use a different sequence for the second nth, then there is no need to cache the first one, so it can be discarded as it's processed:
(let [nums (range)]
  (time (nth nums 1e8))
  (time (nth (range) 1e8)))
"Elapsed time: 2127.814253 msecs"
"Elapsed time: 2042.608043 msecs"
So as you work with large lazy seqs, consider whether anything still points to the list; if anything does (global vars being a common one), it will be held in memory.

Lazy partition-by

I have a source of items and want to separately process runs of them having the same value of a key function. In Python this would look like
for key_val, part in itertools.groupby(src, key_fn):
    process(key_val, part)
This solution is completely lazy, i.e. if process doesn't try to store the contents of the entire part, the code runs in O(1) memory.
Clojure solution
(doseq [part (partition-by key-fn src)]
  (process part))
is less lazy: it realizes each part completely. The problem is that src might have very long runs of items with the same key-fn value, and realizing them might lead to OOM.
I've found this discussion where it's claimed that the following function (slightly modified for naming consistency inside post) is lazy enough
(defn lazy-partition-by [key-fn coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (let [fst (first s)
            fv (key-fn fst)
            part (lazy-seq (cons fst (take-while #(= fv (key-fn %)) (next s))))]
        (cons part (lazy-partition-by key-fn (drop-while #(= fv (key-fn %)) s)))))))
However, I don't understand why it doesn't suffer from OOM: both parts of the cons cell hold a reference to s, so while process consumes part, s is being realized but not garbage collected. It would become eligible for GC only when drop-while traverses part.
So, my questions are:
Am I correct about lazy-partition-by not being lazy enough?
Is there an implementation of partition-by with guaranteed memory requirements, provided I don't hold any references to the previous part by the time I start realizing the next one?
EDIT:
Here's a lazy enough implementation in Haskell:
lazyPartitionBy :: Eq b => (a -> b) -> [a] -> [[a]]
lazyPartitionBy _ [] = []
lazyPartitionBy keyFn xl@(x:_) =
  let fv = keyFn x
      (part, rest) = span ((== fv) . keyFn) xl
  in part : lazyPartitionBy keyFn rest
As can be seen from the implementation of span, part and rest implicitly share state. I wonder if this method could be translated into Clojure.
The rule of thumb that I use in these scenarios (ie, those in which you want a single input sequence to produce multiple output sequences) is that, of the following three desirable properties, you can generally have only two:
Efficiency (traverse the input sequence only once, thus not hold its head)
Laziness (produce elements only on demand)
No shared mutable state
The version in clojure.core chooses (1,3), but gives up on (2) by producing an entire partition all at once. Python and Haskell both choose (1,2), although it's not immediately obvious: doesn't Haskell have no mutable state at all? Well, its lazy evaluation of everything (not just sequences) means that all expressions are thunks, which start out as blank slates and only get written to when their value is needed; the implementation of span, as you say, shares the same thunk of span p xs' in both of its output sequences, so that whichever one needs it first "sends" it to the result of the other sequence, effecting the action at a distance that's necessary to preserve the other nice properties.
The alternative Clojure implementation you linked to chooses (2,3), as you noted.
The problem is that for partition-by, declining either (1) or (2) means that you're holding the head of some sequence: either the input or one of the outputs. So if you want a solution where it's possible to handle arbitrarily large partitions of an arbitrarily large input, you need to choose (1,2). There are a few ways you could do this in Clojure:
Take the Python approach: return something more like an iterator than a seq - seqs make stronger guarantees about non-mutation, and promise that you can safely traverse them multiple times, etc etc. If instead of a seq of seqs you return an iterator of iterators, then consuming items from any one iterator can freely mutate or invalidate the other(s). This guarantees consumption happens in order and that memory can be freed up.
Take the Haskell approach: manually thunk everything, with lots of calls to delay, and require the client to call force as often as necessary to get data out. This will be a lot uglier in Clojure, and will greatly increase your stack depth (using this on a non-trivial input will probably blow the stack), but it is theoretically possible.
Write something more Clojure-flavored (but still quite unusual) by having a few mutable data objects that are coordinated between the output seqs, each updated as needed when something is requested from any of them.
I'm pretty sure any of these three approaches are possible, but to be honest they're all pretty hard and not at all natural. Clojure's sequence abstraction just doesn't make it easy to produce a data structure that's what you'd like. My advice is that if you need something like this and the partitions may be too large to fit comfortably, you just accept a slightly different format and do a little more bookkeeping yourself: dodge the (1,2,3) dilemma by not producing multiple output sequences at all!
So instead of ((2 4 6 8) (1 3 5) (10 12) (7)) being your output format for something like (partition-by even? [2 4 6 8 1 3 5 10 12 7]), you could accept a slightly uglier format: ([::key true] 2 4 6 8 [::key false] 1 3 5 [::key true] 10 12 [::key false] 7). This is neither hard to produce nor hard to consume, although it is a bit lengthy and tedious to write out.
Here is one reasonable implementation of the producing function:
(defn lazy-partition-by [f coll]
  (lazy-seq
    (when (seq coll)
      (let [x (first coll)
            k (f x)]
        (list* [::key k] x
               ((fn part [k xs]
                  (lazy-seq
                    (when (seq xs)
                      (let [x (first xs)
                            k' (f x)]
                        (if (= k k')
                          (cons x (part k (rest xs)))
                          (list* [::key k'] x (part k' (rest xs))))))))
                k (rest coll)))))))
And here's how to consume it, first defining a generic reduce-grouped that hides the details of the grouping format, and then an example function count-partition-sizes to output the key and size of each partition without keeping any sequences in memory:
(defn reduce-grouped [f init groups]
  (loop [k nil, acc init, coll groups]
    (if (empty? coll)
      acc
      (if (and (coll? (first coll)) (= ::key (ffirst coll)))
        (recur (second (first coll)) acc (rest coll))
        (recur k (f k acc (first coll)) (rest coll))))))

(defn count-partition-sizes [f coll]
  (reduce-grouped (fn [k acc _]
                    (if (and (seq acc) (= k (first (peek acc))))
                      (conj (pop acc) (update-in (peek acc) [1] inc))
                      (conj acc [k 1])))
                  [] (lazy-partition-by f coll)))
user> (lazy-partition-by even? [2 4 6 8 1 3 5 10 12 7])
([:user/key true] 2 4 6 8 [:user/key false] 1 3 5 [:user/key true] 10 12 [:user/key false] 7)
user> (count-partition-sizes even? [2 4 6 8 1 3 5 10 12 7])
[[true 4] [false 3] [true 2] [false 1]]
Edit: Looking at it again, I'm not really convinced my reduce-grouped is much more useful than (reduce f init (map g xs)), since it doesn't really give you any clear indication of when the key changes. So if you do need to know when a group changes, you'll want a smarter abstraction, or to use my original lazy-partition-by with nothing "clever" wrapping it.
Although this question evokes very interesting contemplation about language design, the practical problem is that you want to process partitions in constant memory. And the practical problem is resolvable with a little inversion.
Rather than processing over the result of a function that returns a sequence of partitions, pass the processing function into the function that produces the partitions. Then, you can control state in a contained manner.
First we'll provide a way to fuse together the consumption of the sequence with the state of the tail.
(defn fuse [coll wick]
  (lazy-seq
    (when-let [s (seq coll)]
      (swap! wick rest)
      (cons (first s) (fuse (rest s) wick)))))
Then a modified version of partition-by
(defn process-partition-by [processfn keyfn coll]
  (lazy-seq
    (when (seq coll)
      (let [tail (atom (cons nil coll))
            s (fuse coll tail)
            fst (first s)
            fv (keyfn fst)
            pred #(= fv (keyfn %))
            part (take-while pred s)
            more (lazy-seq (drop-while pred @tail))]
        (cons (processfn part)
              (process-partition-by processfn keyfn more))))))
Note: for O(1) memory consumption, processfn must be an eager consumer! So while (process-partition-by identity key-fn coll) returns the same result as (partition-by key-fn coll), memory consumption is not constant, because identity does not consume the partition.
Tests
(defn heavy-seq []
  ;; adjust payload for your JVM so only a few fit in memory
  (let [payload (fn [] (long-array 20000000))]
    (map #(vector % (payload)) (iterate inc 0))))

(defn my-process [s] (reduce + (map first s)))

(defn test1 []
  (doseq [part (partition-by #(quot (first %) 10) (take 50 (heavy-seq)))]
    (my-process part)))

(defn test2 []
  (process-partition-by
    my-process #(quot (first %) 20) (take 200 (heavy-seq))))
so.core=> (test1)
OutOfMemoryError Java heap space [trace missing]
so.core=> (test2)
(190 590 990 1390 1790 2190 2590 2990 3390 3790)
Am I correct about lazy-partition-by not being lazy enough?
Well, there's a difference between laziness and memory usage. A sequence can be lazy and still require lots of memory - see for instance the implementation of clojure.core/distinct, which uses a set to remember all the previously observed values in the sequence. But yes, your analysis of the memory requirements of lazy-partition-by is correct - the function call to compute the head of the second partition will retain the head of the first partition, which means that realizing the first partition causes it to be retained in-memory. This can be verified with the following code:
user> (doseq [part (lazy-partition-by :a
                                      (repeatedly
                                        (fn [] {:a 1 :b (long-array 10000000)})))]
        (dorun part))
; => OutOfMemoryError Java heap space
Since neither doseq nor dorun retains the head, this would simply run forever if lazy-partition-by were O(1) in memory.
Is there an implementation of partition-by with guaranteed memory requirements, provided I don't hold any references to the previous part by the time I start realizing the next one?
It would be very difficult, if not impossible, to write such an implementation in a purely functional manner that would work for the general case. Consider that a general lazy-partition-by implementation cannot make any assumptions about when (or if) a partition is realized. The only guaranteed correct way of finding the start of the second partition, short of introducing some nasty bit of statefulness to keep track of how much of the first partition has been realized, is to remember where the first partition began and scan forward when requested.
For the special case where you're processing records one at a time for side effects and want them grouped by key (as is implied by your use of doseq above), you might consider something along the lines of a loop/recur that maintains state and resets it when the key changes, as sketched below.
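A hedged sketch of that idea (all names hypothetical): fold over the records one at a time, keeping only the current key and a per-group accumulator, and flush whenever the key changes, so no group is ever held as a realized sequence:

;; Sketch: call (step acc x) on each record, and (flush-fn k acc) for
;; side effects whenever the key-fn value changes or the input ends.
(defn reduce-runs! [step flush-fn init key-fn coll]
  (loop [s (seq coll), k ::none, acc init]
    (if s
      (let [x  (first s)
            k' (key-fn x)]
        (if (or (= k ::none) (= k k'))
          (recur (next s) k' (step acc x))
          (do (flush-fn k acc)
              (recur (next s) k' (step init x)))))
      (when-not (= k ::none)
        (flush-fn k acc)))))

;; Hypothetical usage: print the size of each run of even/odd numbers.
(reduce-runs! (fn [n _] (inc n))
              (fn [k n] (println k "->" n))
              0 even? [2 4 6 8 1 3 5 10 12 7])
;; prints: true -> 4, false -> 3, true -> 2, false -> 1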

build set lazily in clojure

I've started to learn Clojure but I'm having trouble wrapping my mind around certain concepts. For instance, what I want to do here is take this function and convert it so that it calls get-origlabels lazily.
(defn get-all-origlabels []
  (set (flatten (map get-origlabels (range *song-count*)))))
My first attempt used recursion but blew up the stack (*song-count* is about 10,000). I couldn't figure out how to do it with tail recursion.
get-origlabels returns a set each time it is called, but values are often repeated between calls. What the get-origlabels function actually does is read a file (a different file for each value from 0 to song-count-1) and return the words stored in it in a set.
Any pointers would be greatly appreciated!
Thanks!
-Philip
You can get what you want by using mapcat. I believe putting it into an actual Clojure set is going to de-lazify it, as demonstrated by the fact that (take 10 (set (iterate inc 0))) tries to realize the whole set before taking 10.
(distinct (mapcat get-origlabels (range *song-count*)))
This will give you a lazy sequence. You can verify that by doing something like, starting with an infinite sequence:
(->> (iterate inc 0)
     (mapcat vector)
     (distinct)
     (take 10))
You'll end up with a seq, rather than a set, but since it sounds like you really want laziness here, I think that's for the best.
This may have better stack behavior
(require 'clojure.set)

(defn get-all-origlabels []
  (reduce (fn [s x] (clojure.set/union s (get-origlabels x))) #{} (range *song-count*)))
I'd probably use something like:
(into #{} (mapcat get-origlabels (range *song-count*)))
In general, "into" is very helpful when constructing Clojure data structures. I have this mental image of a conveyor belt (a sequence) dropping a bunch of random objects into a large bucket (the target collection).

Problem with Clojure function

Everyone, I started working on the Euler Project in Clojure yesterday, and I have a problem with one of my solutions that I cannot figure out.
I have this function:
(defn find-max-palindrom-in-range [beg end]
  (reduce max
          (loop [n beg result []]
            (if (>= n end)
              result
              (recur (inc n)
                     (concat result
                             (filter #(is-palindrom? %)
                                     (map #(* n %) (range beg end)))))))))
I try to run it like this:
(find-max-palindrom-in-range 100 1000)
and I get this exception:
java.lang.Integer cannot be cast to clojure.lang.IFn
[Thrown class java.lang.ClassCastException]
which I presume means that somewhere I'm trying to evaluate an Integer as a function. However, I cannot find that place, and what puzzles me more is that everything works if I simply evaluate it like this:
(reduce max
        (loop [n 100 result []]
          (if (>= n 1000)
            result
            (recur (inc n)
                   (concat result
                           (filter #(is-palindrom? %)
                                   (map #(* n %) (range 100 1000))))))))
(I've just stripped down the function definition and replaced the parameters with constants)
Thanks in advance for your help and sorry that I probably bother you with idiotic mistake on my part. Btw I'm using Clojure 1.1 and the newest SLIME from ELPA.
Edit: here is the code for is-palindrom?. I've implemented it as a textual property of the number, not a numeric one.
(defn is-palindrom? [n]
  (loop [num (String/valueOf n)]
    (cond (not (= (first num) (last num))) false
          (<= (.length num) 1) true
          :else (recur (.substring num 1 (dec (.length num)))))))
The code works at my REPL (1.1). I'd suggest that you paste it back at yours and try again -- perhaps you simply mistyped something?
Having said that, you could use this as an opportunity to make the code simpler and more obviously correct. Some low-hanging fruit (don't read if you think it could take away from your Project Euler fun, though with your logic already written down I think it shouldn't):
You don't need to wrap is-palindrom? in an anonymous function to pass it to filter. Just write (filter is-palindrom? ...) instead.
That loop in is-palindrom? is pretty complex. Moreover, it's not particularly efficient (first and last both make a seq out of the string first, and last then needs to traverse all of it). It would be simpler and faster to (require '[clojure.contrib.str-utils2 :as str]) and use (= num (str/reverse num)).
Since I mentioned efficiency, using concat in this manner is a tad dangerous -- it creates a lazy seq, which might blow up if you pile up too many levels of laziness (this will not matter in the context of Euler 4, but it's good to keep it in mind). If you really need to extend vectors to the right, prefer into.
To further simplify things, you could consider breaking them apart into a function to filter a given sequence so that only palindromes remain and a separate function to return all products of two three-digit numbers. The latter can be accomplished with e.g.
(for [f (range 100 1000)
      s (range 100 1000)
      :when (<= f s)] ; avoid duplication of effort
  (* f s))
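Putting those suggestions together, here is a hedged sketch of the whole Euler 4 computation for modern Clojure (clojure.contrib is long gone, so this uses clojure.string/reverse in place of str-utils2):

(require '[clojure.string :as string])

;; Palindrome test via string reversal, as suggested above.
(defn palindrome? [n]
  (let [s (str n)]
    (= s (string/reverse s))))

;; Largest palindromic product of two three-digit numbers.
(reduce max
        (for [f (range 100 1000)
              s (range 100 1000)
              :when (<= f s)          ; avoid duplication of effort
              :let [p (* f s)]
              :when (palindrome? p)]
          p))
;; => 906609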