SPOILER ALERT: This refers to a 4clojure.com problem. If you're a Clojure newbie like me, you probably want to try it yourself first.
I answered this question (#68) successfully, but only after making a mistake, and I'm still not sure why it was wrong.
Here's the question:
(= __
   (loop [x 5
          result []]
     (if (> x 0)
       (recur (dec x) (conj result (+ 2 x)))
       result)))
My initial answer was [6 5 4 3 2] while the accepted answer is [7 6 5 4 3].
I don't totally grok it, because (dec x) precedes (conj result (+ 2 x)), and they're both equally nested in the loop. I'd thought that because the decrement seems to happen before the conjoin, the result vector would begin with a decremented x plus two. But that's not what happens. This is clearly something very basic, but perhaps someone could explain what's going on?
Thanks.
Locals in Clojure are immutable (mostly, anyway). (dec x) does not actually decrease x in the current scope. It returns a new value, which is then passed as an argument to recur, which you can treat as another call of the loop "function". So x is not changed in this scope, and (conj result (+ 2 x)) uses the same x with the same old value.
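To make the evaluation order concrete, here is a trace of the first iteration (each comment shows what the expression returns; x itself is never mutated):

(dec x)       ;=> 4  x is still 5
(+ 2 x)       ;=> 7  uses that same x, still 5
(recur 4 [7]) ;     rebinds x to 4 and result to [7] for the next pass

That is why result starts with 7 rather than 6.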
I'm writing a lazy implementation of Recamán's sequence, and ran into some confusion regarding where calls to lazy-seq should happen.
The first version I came up with this morning was:
(defn lazy-recamans-sequence []
  (let [f (fn rec [n seen last-s]
            (let [back (- last-s n)
                  new-s (if (and (pos? back) (not (seen back)))
                          back
                          (+ last-s n))]
              (lazy-seq ; Here
                (cons new-s (rec (inc n) (conj seen new-s) new-s)))))]
    (f 0 #{} 0)))
Then I realized that my placement of lazy-seq was kind of arbitrary, and that it could be placed higher to wrap more of the computations:
(defn lazy-recamans-sequence2 []
  (let [f (fn rec [n seen last-s]
            (lazy-seq ; Here
              (let [back (- last-s n)
                    new-s (if (and (pos? back) (not (seen back)))
                            back
                            (+ last-s n))]
                (cons new-s (rec (inc n) (conj seen new-s) new-s)))))]
    (f 0 #{} 0)))
Then I looked back on a review that someone gave me last night:
(defn recaman []
  (letfn [(tail [previous n seen]
            (let [nx (if (and (> previous n) (not (seen (- previous n))))
                       (- previous n)
                       (+ previous n))]
              ; Here, inside "cons"
              (cons nx (lazy-seq (tail nx (inc n) (conj seen nx))))))]
    (tail 0 0 #{})))
And they have theirs inside of the call to cons!
Thinking this over, it seems like it wouldn't make a difference. With a broader scope (like the second version), more code is inside the explicit function that's passed to LazySeq. With a narrower scope, the function itself may be smaller, but since the passed function involves a recursive call, it will end up executing the same code anyway.
They seem to perform nearly identically and give the same answers. Is there any reason to prefer placing lazy-seq in one place over another? Is this simply a stylistic choice, or can this have actual repercussions?
In the first two examples the lazy-seq wraps the cons call. This means that when you call the function, you get back a lazy sequence immediately, without calculating the first item of the sequence.
In the first example the let expression is still outside of lazy-seq, so the value of the first item is calculated immediately, but the returned sequence is still lazy and unrealized.
The second example is similar to the first, except that the lazy-seq wraps the let block as well as the cons cell. This means that the function returns immediately and the value of the first item is calculated only when the caller starts to consume the lazy sequence.
In the third example the value of the first item in the list is calculated immediately and only the tail of the returned sequence is lazy.
Is there any reason to prefer placing lazy-seq in one place over another?
It depends on what you want to achieve. Do you want to return a sequence immediately without calculating any values? In this case make the scope of lazy-seq as broad as possible. Otherwise try to restrict the scope of lazy-seq to calculate only the tail part of the sequence.
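One way to see the difference concretely is to put a side effect in the computation of the head; the function names here are mine, purely for illustration:

(defn eager-head []
  (let [x (do (println "computing head") 1)]
    (lazy-seq (cons x nil))))

(defn lazy-head []
  (lazy-seq
    (let [x (do (println "computing head") 1)]
      (cons x nil))))

(def a (eager-head)) ; prints "computing head" immediately
(def b (lazy-head))  ; prints nothing yet
(first b)            ; only now prints "computing head"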
When I was first learning Clojure, I was a bit confused by the many possible choices of lazy-seq constructs, the lack of clarity in terms of which construct to choose, and the somewhat vague explanation for how lazy-seq creates laziness in the first place (it is implemented as a Java class of ~240 lines).
To reduce repetition and keep things as simple as possible, I created the lazy-cons macro. It is used like so:
(defn lazy-countdown [n]
  (when (<= 0 n)
    (lazy-cons n (lazy-countdown (dec n)))))
(deftest t-all
  (is= (lazy-countdown 5) [5 4 3 2 1 0])
  (is= (lazy-countdown 1) [1 0])
  (is= (lazy-countdown 0) [0])
  (is= (lazy-countdown -1) nil))
This version does realize the initial value n immediately.
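For reference, such a macro can be a one-liner; here is a sketch of a plausible definition (the actual tupelo implementation may differ):

(defmacro lazy-cons [x seq-expr]
  `(lazy-seq (cons ~x ~seq-expr)))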
I never worry about chunking (typically batches of 32) or trying to precisely control the number of elements realized in a lazy sequence. IMHO, if you need fine-grained control such as this, it is better to use an explicit loop than to make assumptions on the timing of realizations in a lazy sequence.
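Chunking itself is easy to observe: range produces chunked seqs and map preserves chunking, so realizing one element realizes the whole first chunk:

(first (map #(do (print % " ") %) (range 100)))
; prints 0 1 2 ... 31 before returning 0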
Newbie Clojure question. What are the pros and cons of the following two ways to implement/represent the Fibonacci sequence? (In particular, is there anything to completely rule out one or the other as a bad idea.)
(ns clxp.fib
  (:gen-class))
; On the one hand, it seems more natural in code to have a function that
; returns 'a' Fibonacci sequence.
(defn fib-1
  "Returns an infinite sequence of Fibonacci numbers."
  []
  (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))
; > (take 10 (fib-1))
; (0 1 1 2 3 5 8 13 21 34)
; On the other hand, it seems more mathematically natural to define 'the'
; Fibonacci sequence once, and just refer to it.
(def fib-2
  "The infinite sequence of Fibonacci numbers."
  (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))
; > (take 10 fib-2)
; (0 1 1 2 3 5 8 13 21 34)
a) What are the pros and cons of these two approaches to defining an infinite sequence? (I am aware that this is a somewhat special case in that this particular sequence requires no args to be provided - unlike, say, an infinite sequence of multiples of 'n', which I believe would require the first approach, in order to specify the value of 'n'.)
b) Is there any over-arching reason to prefer one of these implementations to the other? (Memory consumption, applicability when used as an argument, etc.)
fib-2 is preferable for time performance if its elements are looked up multiple times, because the elements of a lazy seq only need to be calculated once.
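For example (assuming the [0 1N] seed mentioned in a later answer, so that large indices don't overflow):

(time (nth fib-2 100000)) ; first access realizes (and caches) the prefix
(time (nth fib-2 100000)) ; repeat access is near-instant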
Due to the global binding, the seq is unlikely ever to become garbage collected. So if your program steps through a million Fibonaccis to make some calculation once (even more so if it doesn't need to hold the seq's head), invoking fib-1 in a local context is preferable for space performance.
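A hypothetical sketch of the space-friendly style: because the seq is bound to a local, nothing retains its head once the function returns:

(defn nth-fib [n]
  (let [fibs (fib-1)] ; fresh, locally scoped lazy seq
    (nth fibs n)))    ; after this returns, the realized prefix can be GC'd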
It depends on your usage, and how critical it is not to have to recalculate the fib seq multiple times. However, from my experiments below, I had issues with the def when using long sequences.
If you're going to be referring to a lot of elements, then you'll need to watch out for head retention, as Leon mentioned.
This can be illustrated as follows (these are extending a couple of examples from Clojure Programming):
(let [[t d] (split-with #(< % 10) (take 1e6 (fib-1)))]
  [(count d) (count t)])
=> OutOfMemoryError Java heap space
(let [[t d] (split-with #(< % 10) (take 1e6 (fib-1)))]
  [(count t) (count d)])
=> [7 999993]
Note: I had to change your implementation to use the initial vector [0 1N] to avoid an ArithmeticException (integer overflow) when taking large sequences of fib numbers.
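With that change, the iterate seed in fib-1 becomes:

(map first (iterate (fn [[a b]] [b (+ a b)]) [0 1N])) ; 1N promotes the sums to BigInt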
Interestingly, using fib-2 instead yields the same OOM error for the head-holding case, but the non-head-holding version also breaks:
(let [[t d] (split-with #(< % 10) (take 1e6 fib-2))]
  [(count t) (count d)])
=> [7 270036]
The latter figure should be 999993.
The reason for the OOM in both cases follows from what's stated in Clojure Programming (describing the ordering that avoids the problem):
Since the last reference of t occurs before the processing of d, no reference to the head of the range sequence is kept, and no memory issues arise.
I have a source of items and want to separately process runs of them having the same value of a key function. In Python this would look like
for key_val, part in itertools.groupby(src, key_fn):
    process(key_val, part)
This solution is completely lazy, i.e. if process doesn't try to store contents of entire part, the code would run in O(1) memory.
Clojure solution
(doseq [part (partition-by key-fn src)]
  (process part))
is less lazy: it realizes each part completely. The problem is, src might have very long runs of items with the same key-fn value and realizing them might lead to OOM.
I've found this discussion where it's claimed that the following function (slightly modified for naming consistency inside post) is lazy enough
(defn lazy-partition-by [key-fn coll]
  (lazy-seq
    (when-let [s (seq coll)]
      (let [fst (first s)
            fv (key-fn fst)
            ;; the current partition: lazily takes items while the key matches
            part (lazy-seq (cons fst (take-while #(= fv (key-fn %)) (next s))))]
        ;; both part and the recursive call close over s
        (cons part (lazy-partition-by key-fn (drop-while #(= fv (key-fn %)) s)))))))
However, I don't understand why it doesn't suffer from OOM: both parts of the cons cell hold a reference to s, so while process consumes part, s is being realized but not garbage collected. It would become eligible for GC only when drop-while traverses part.
So, my questions are:
Am I correct about lazy-partition-by not being lazy enough?
Is there an implementation of partition-by with guaranteed memory requirements, provided I don't hold any references to the previous part by the time I start realizing the next one?
EDIT:
Here's a lazy enough implementation in Haskell:
lazyPartitionBy :: Eq b => (a -> b) -> [a] -> [[a]]
lazyPartitionBy _ [] = []
lazyPartitionBy keyFn xl@(x:_) = let
    fv = keyFn x
    (part, rest) = span ((== fv) . keyFn) xl
  in part : lazyPartitionBy keyFn rest
As can be seen from span's implementation, part and rest implicitly share state. I wonder if this method could be translated into Clojure.
The rule of thumb that I use in these scenarios (ie, those in which you want a single input sequence to produce multiple output sequences) is that, of the following three desirable properties, you can generally have only two:
Efficiency (traverse the input sequence only once, thus not holding its head)
Laziness (produce elements only on demand)
No shared mutable state
The version in clojure.core chooses (1,3), but gives up on (2) by producing an entire partition all at once. Python and Haskell both choose (1,2), although it's not immediately obvious: doesn't Haskell have no mutable state at all? Well, its lazy evaluation of everything (not just sequences) means that all expressions are thunks, which start out as blank slates and only get written to when their value is needed. The implementation of span, as you say, shares the same thunk of span p xs' in both of its output sequences, so that whichever one needs it first "sends" it to the result of the other sequence, effecting the action at a distance that's necessary to preserve the other nice properties.
The alternative Clojure implementation you linked to chooses (2,3), as you noted.
The problem is that for partition-by, declining either (1) or (2) means that you're holding the head of some sequence: either the input or one of the outputs. So if you want a solution where it's possible to handle arbitrarily large partitions of an arbitrarily large input, you need to choose (1,2). There are a few ways you could do this in Clojure:
Take the Python approach: return something more like an iterator than a seq - seqs make stronger guarantees about non-mutation, and promise that you can safely traverse them multiple times, etc etc. If instead of a seq of seqs you return an iterator of iterators, then consuming items from any one iterator can freely mutate or invalidate the other(s). This guarantees consumption happens in order and that memory can be freed up.
Take the Haskell approach: manually thunk everything, with lots of calls to delay, and require the client to call force as often as necessary to get data out. This will be a lot uglier in Clojure, and will greatly increase your stack depth (using this on a non-trivial input will probably blow the stack), but it is theoretically possible.
Write something more Clojure-flavored (but still quite unusual) by having a few mutable data objects that are coordinated between the output seqs, each updated as needed when something is requested from any of them.
I'm pretty sure any of these three approaches are possible, but to be honest they're all pretty hard and not at all natural. Clojure's sequence abstraction just doesn't make it easy to produce a data structure that's what you'd like. My advice is that if you need something like this and the partitions may be too large to fit comfortably, you just accept a slightly different format and do a little more bookkeeping yourself: dodge the (1,2,3) dilemma by not producing multiple output sequences at all!
So instead of ((2 4 6 8) (1 3 5) (10 12) (7)) being your output format for something like (partition-by even? [2 4 6 8 1 3 5 10 12 7]), you could accept a slightly uglier format: ([::key true] 2 4 6 8 [::key false] 1 3 5 [::key true] 10 12 [::key false] 7). This is neither hard to produce nor hard to consume, although it is a bit lengthy and tedious to write out.
Here is one reasonable implementation of the producing function:
(defn lazy-partition-by [f coll]
  (lazy-seq
    (when (seq coll)
      (let [x (first coll)
            k (f x)]
        (list* [::key k] x
               ((fn part [k xs]
                  (lazy-seq
                    (when (seq xs)
                      (let [x (first xs)
                            k' (f x)]
                        (if (= k k')
                          (cons x (part k (rest xs)))
                          (list* [::key k'] x (part k' (rest xs))))))))
                k (rest coll)))))))
And here's how to consume it, first defining a generic reduce-grouped that hides the details of the grouping format, and then an example function count-partition-sizes to output the key and size of each partition without keeping any sequences in memory:
(defn reduce-grouped [f init groups]
  (loop [k nil, acc init, coll groups]
    (if (empty? coll)
      acc
      (if (and (coll? (first coll)) (= ::key (ffirst coll)))
        (recur (second (first coll)) acc (rest coll))
        (recur k (f k acc (first coll)) (rest coll))))))
(defn count-partition-sizes [f coll]
  (reduce-grouped (fn [k acc _]
                    (if (and (seq acc) (= k (first (peek acc))))
                      (conj (pop acc) (update-in (peek acc) [1] inc))
                      (conj acc [k 1])))
                  [] (lazy-partition-by f coll)))
user> (lazy-partition-by even? [2 4 6 8 1 3 5 10 12 7])
([:user/key true] 2 4 6 8 [:user/key false] 1 3 5 [:user/key true] 10 12 [:user/key false] 7)
user> (count-partition-sizes even? [2 4 6 8 1 3 5 10 12 7])
[[true 4] [false 3] [true 2] [false 1]]
Edit: Looking at it again, I'm not really convinced my reduce-grouped is much more useful than (reduce f init (map g xs)), since it doesn't really give you any clear indication of when the key changes. So if you do need to know when a group changes, you'll want a smarter abstraction, or to use my original lazy-partition-by with nothing "clever" wrapping it.
Although this question evokes interesting contemplation about language design, the practical problem is that you want to process partitions in constant memory. And that problem is resolvable with a little inversion.
Rather than processing over the result of a function that returns a sequence of partitions, pass the processing function into the function that produces the partitions. Then, you can control state in a contained manner.
First we'll provide a way to fuse together the consumption of the sequence with the state of the tail.
(defn fuse [coll wick]
  (lazy-seq
    (when-let [s (seq coll)]
      (swap! wick rest) ; advance the tail atom as the seq is consumed
      (cons (first s) (fuse (rest s) wick)))))
Then a modified version of partition-by:
(defn process-partition-by [processfn keyfn coll]
  (lazy-seq
    (when (seq coll)
      (let [tail (atom (cons nil coll)) ; starts one step behind; fuse advances it
            s (fuse coll tail)
            fst (first s)
            fv (keyfn fst)
            pred #(= fv (keyfn %))
            part (take-while pred s)
            more (lazy-seq (drop-while pred @tail))] ; deref the tail, not the head of s
        (cons (processfn part)
              (process-partition-by processfn keyfn more))))))
Note: for O(1) memory consumption, processfn must be an eager consumer! So while (process-partition-by identity key-fn coll) produces the same result as (partition-by key-fn coll), because identity does not consume the partition, the memory consumption is not constant.
Tests
(defn heavy-seq []
  ;; adjust payload for your JVM so only a few fit in memory
  (let [payload (fn [] (long-array 20000000))]
    (map #(vector % (payload)) (iterate inc 0))))
(defn my-process [s] (reduce + (map first s)))
(defn test1 []
  (doseq [part (partition-by #(quot (first %) 10) (take 50 (heavy-seq)))]
    (my-process part)))
(defn test2 []
  (process-partition-by
    my-process #(quot (first %) 20) (take 200 (heavy-seq))))
so.core=> (test1)
OutOfMemoryError Java heap space [trace missing]
so.core=> (test2)
(190 590 990 1390 1790 2190 2590 2990 3390 3790)
Am I correct about lazy-partition-by not being lazy enough?
Well, there's a difference between laziness and memory usage. A sequence can be lazy and still require lots of memory - see for instance the implementation of clojure.core/distinct, which uses a set to remember all the previously observed values in the sequence. But yes, your analysis of the memory requirements of lazy-partition-by is correct - the function call to compute the head of the second partition will retain the head of the first partition, which means that realizing the first partition causes it to be retained in-memory. This can be verified with the following code:
user> (doseq [part (lazy-partition-by :a
                     (repeatedly
                       (fn [] {:a 1 :b (long-array 10000000)})))]
        (dorun part))
; => OutOfMemoryError Java heap space
Since neither doseq nor dorun retains the head, this would simply run forever if lazy-partition-by were O(1) in memory.
Is there an implementation of partition-by with guaranteed memory requirements, provided I don't hold any references to the previous part by the time I start realizing the next one?
It would be very difficult, if not impossible, to write such an implementation in a purely functional manner that would work for the general case. Consider that a general lazy-partition-by implementation cannot make any assumptions about when (or if) a partition is realized. The only guaranteed correct way of finding the start of the second partition, short of introducing some nasty bit of statefulness to keep track of how much of the first partition has been realized, is to remember where the first partition began and scan forward when requested.
For the special case where you're processing records one at a time for side effects and want them grouped by key (as is implied by your use of doseq above), you might consider something along the lines of a loop/recur that maintains state and resets it when the key changes, as sketched below.
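A minimal sketch of that idea (the function and callback names are hypothetical):

(defn process-grouped!
  "Walk coll once in O(1) memory, calling (start-group! k) whenever the
  key changes and (process! x) for every record."
  [key-fn start-group! process! coll]
  (loop [prev ::none
         s (seq coll)]
    (when s
      (let [x (first s)
            k (key-fn x)]
        (when (not= k prev)
          (start-group! k)) ; the key just changed: a new run begins
        (process! x)        ; side-effecting, one record at a time
        (recur k (next s))))))

This walks the input once, never holds a whole partition in memory, and covers exactly the side-effecting doseq use case.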
I have a question regarding nested doseq loops. In the start function, once I find an answer I set the atom to true, so that the outer loop's :while check fails. However, it seems that this doesn't break the loop, and it keeps going. What's wrong?
I am also quite confused by the usage of atoms, refs, and agents. (Why do they have different names for the update functions when the mechanism is almost the same?)
Is it okay to use an atom in this situation as a flag? Obviously I need a variable-like object to store state.
(def pentagonal-list (map (fn [a] (/ (* a (dec (* 3 a))) 2)) (iterate inc 1)))
(def found (atom false))
(defn pentagonal? [a]
  (let [y (/ (inc (Math/sqrt (inc (* 24 a)))) 6)
        x (mod (* 10 y) 10)]
    (if (zero? x)
      true
      false)))
(defn both-pent? [a b]
  (let [sum (+ b a)
        diff (- a b)]
    (if (and (pentagonal? sum) (pentagonal? diff))
      true
      false)))
(defn start []
  (doseq [x pentagonal-list :while (false? @found)]
    (doseq [y pentagonal-list :while (<= y x)]
      (if (both-pent? x y)
        (do
          (reset! found true)
          (println (- x y)))))))
Even once the atom is set to true, your version can't stop running until the inner doseq finishes (until y > x). It will terminate the outer loop once the inner loop finishes though. It does terminate eventually when I run it. Not sure what you're seeing.
You don't need two doseqs to do this. One doseq can handle two seqs at once.
user> (doseq [x (range 0 2) y (range 3 6)] (prn [x y]))
[0 3]
[0 4]
[0 5]
[1 3]
[1 4]
[1 5]
(The same is true of for.) There is no mechanism for "breaking out" of nested doseqs that I know of, except throw/catch, but that's rather un-idiomatic. You don't need atoms or doseq for this at all though.
(def answers (filter (fn [[x y]] (both-pent? x y))
                     (for [x pentagonal-list
                           y pentagonal-list :while (<= y x)]
                       [x y])))
Your code is very imperative in style. "Loop over these lists, then test the values, then print something, then stop looping." Using atoms for control like this is not very idiomatic in Clojure.
A more functional way is to take a seq (pentagonal-list) and wrap it in functions that turn it into other seqs until you get a seq that gives you what you want. First I use for to turn two copies of this seq into one seq of pairs where y <= x. Then I use filter to turn that seq into one that filters out the values we don't care about.
filter and for are lazy, so this will stop running once it finds the first valid value, if one is all you want. This returns the two numbers you want, and then you can subtract them.
(apply - (first answers))
Or you can further wrap the function in another map to calculate the differences for you.
(def answers2 (map #(apply - %) answers))
(first answers2)
There are some advantages to programming functionally in this way. The seq is cached (if you hold onto the head like I do here), so once a value is calculated, it remembers it and you can access it instantly from then on. Your version can't be run again without resetting the atom, and then would have to re-calculate everything. With my version you can (take 5 answers) to get the first 5 results, or map over the result to do other things if you want. You can doseq over it and print the values. Etc. etc.
I'm sure there are other (maybe better) ways to do this still without using an atom. You should usually avoid mutating references unless it's 100% necessary in Clojure.
The function names for altering atoms/agents/refs are different probably because the mechanics are not the same. Refs are synchronous and coordinated via transactions. Agents are asynchronous. Atoms are synchronous and uncoordinated. They all kind of "change a reference", and probably some kind of super-function or macro could wrap them all in one name, but that would obscure the fact that they're doing drastically different things under the hood. Explaining the differences fully is probably beyond the scope of an SO post, but http://clojure.org fully explains all of the nuances.
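In a nutshell (these are all core Clojure functions):

(def a (atom 0))
(swap! a inc)          ; atom: synchronous, uncoordinated

(def r (ref 0))
(dosync (alter r inc)) ; ref: synchronous, coordinated via an STM transaction

(def ag (agent 0))
(send ag inc)          ; agent: asynchronous, returns immediately
(await ag)             ; block until the sent action has run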
OK. I've been tinkering with Clojure and I continually run into the same problem. Let's take this little fragment of code:
(let [x 128]
  (while (> x 1)
    (do
      (println x)
      (def x (/ x 2)))))
Now I expect this to print out a sequence starting with 128, like so:
128
64
32
16
8
4
2
Instead, it's an infinite loop, printing 128 over and over. Clearly my intended side effect isn't working.
So how am I supposed to redefine the value of x in a loop like this? I realize this may not be Lisp-like (I could use an anonymous function that recurses on itself, perhaps), but if I don't figure out how to set a variable like this, I'm going to go mad.
My other guess would be to use set!, but that gives "Invalid assignment target", since I'm not in a binding form.
Please, enlighten me on how this is supposed to work.
def defines a top-level var, even if you use it in a function or inner loop of some code. What you get in let are not vars. Per the documentation for let:
Locals created with let are not variables. Once created their values never change!
(Emphasis not mine.) You don't need mutable state for your example here; you could use loop and recur.
(loop [x 128]
  (when (> x 1)
    (println x)
    (recur (/ x 2))))
If you wanted to be fancy you could avoid the explicit loop entirely.
(let [xs (take-while #(> % 1) (iterate #(/ % 2) 128))]
  (doseq [x xs] (println x)))
If you really wanted to use mutable state, an atom might work.
(let [x (atom 128)]
  (while (> @x 1)
    (println @x)
    (swap! x #(/ %1 2))))
(You don't need a do; while wraps its body in an explicit one for you.) If you really, really wanted to do this with vars you'd have to do something horrible like this.
(with-local-vars [x 128]
  (while (> (var-get x) 1)
    (println (var-get x))
    (var-set x (/ (var-get x) 2))))
But this is very ugly and it's not idiomatic Clojure at all. To use Clojure effectively you should try to stop thinking in terms of mutable state. It will definitely drive you crazy trying to write Clojure code in a non-functional style. After a while you may find it to be a pleasant surprise how seldom you actually need mutable variables.
Vars (that's what you get when you "def" something) aren't meant to be reassigned (but can be):
user=> (def k 1)
#'user/k
user=> k
1
There's nothing stopping you from doing:
user=> (def k 2)
#'user/k
user=> k
2
If you want a thread-local settable "place" you can use "binding" and "set!":
user=> (def j) ; this var is still unbound (no value)
#'user/j
user=> j
java.lang.IllegalStateException: Var user/j is unbound. (NO_SOURCE_FILE:0)
user=> (binding [j 0] j)
0
So then you can write a loop like this:
user=> (binding [j 0]
         (while (< j 10)
           (println j)
           (set! j (inc j))))
0
1
2
3
4
5
6
7
8
9
nil
But I think this is pretty unidiomatic.
If you think that having mutable local variables in pure functions would be a nice convenient feature that does no harm, because the function still remains pure, you might be interested in this mailing list discussion where Rich Hickey explains his reasons for removing them from the language. Why not mutable locals?
Relevant part:
If locals were variables, i.e. mutable, then closures could close over mutable state, and, given that closures can escape (without some extra prohibition on same), the result would be thread-unsafe. And people would certainly do so, e.g. closure-based pseudo-objects. The result would be a huge hole in Clojure's approach.

Without mutable locals, people are forced to use recur, a functional looping construct. While this may seem strange at first, it is just as succinct as loops with mutation, and the resulting patterns can be reused elsewhere in Clojure, i.e. recur, reduce, alter, commute etc are all (logically) very similar. Even though I could detect and prevent mutating closures from escaping, I decided to keep it this way for consistency. Even in the smallest context, non-mutating loops are easier to understand and debug than mutating ones. In any case, Vars are available for use when appropriate.
The majority of the subsequent posts concerns implementing a with-local-vars macro ;)
You could more idiomatically use iterate and take-while instead:
user> (->> 128
           (iterate #(/ % 2))
           (take-while (partial < 1)))
(128 64 32 16 8 4 2)