In Clojure, this is valid:
(loop [a 5]
  (if (= a 0)
    "done"
    (recur (dec a))))
However, this is not:
(let [a 5]
  (if (= a 0)
    "done"
    (recur (dec a))))
So I'm wondering: why are loop and let separated, given that they both (at least conceptually) introduce lexical bindings? That is, why is loop a recur target while let is not?
EDIT: originally wrote "loop target" which I noticed is incorrect.
Consider the following example:
(defn pascal-step [v n]
  (if (pos? n)
    (let [l (concat v [0])
          r (cons 0 v)]
      (recur (map + l r) (dec n)))
    v))
This function calculates the (n+m)th line of Pascal's triangle, given the mth line.
Now imagine that let were a recur target. In that case I would not be able to recursively call the pascal-step function itself from inside the let binding using the recur operator.
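For comparison, here is a sketch (my own, not part of the original example) of what you would be left with if recur inside the let could not reach the function: an ordinary self-call, which is not turned into a jump and therefore consumes a stack frame per iteration.
;; hypothetical workaround if let were a recur target:
(defn pascal-step [v n]
  (if (pos? n)
    (let [l (concat v [0])
          r (cons 0 v)]
      (pascal-step (map + l r) (dec n)))   ; plain (stack-consuming) self-call
    v))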
Now let's make this example a little bit more complex:
(defn pascal-line [n]
  (loop [v [1]
         i n]
    (if (pos? i)
      (let [l (concat v [0])
            r (cons 0 v)]
        (recur (map + l r) (dec i)))
      v)))
Now we're calculating the nth line of Pascal's triangle. As you can see, I need both loop and let here.
This example is quite simple, so you may suggest removing the let binding by using (concat v [0]) and (cons 0 v) directly, but I'm just illustrating the concept. There may be more complex examples where a let inside a loop is unavoidable.
problem formulation
Informally speaking, I want to write a function which, taking as input a function that generates binary factorizations and an element (usually neutral), creates an arbitrary length factorization generator. To be more specific, let us first define the function nfoldr in Clojure.
(defn nfoldr [f e]
  (fn rec [n]
    (fn [s]
      (if (zero? n)
        (if (empty? s) e)
        (if (seq s)
          (if-some [x ((rec (dec n)) (rest s))]
            (f (list (first s) x))))))))
Here nil is used with the meaning "undefined output, input not in function's domain". Additionally, let us view the inverse relation of a function f as a set-valued function defining inv(f)(y) = {x | f(x) = y}.
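For a concrete feel of the definition, here is an illustrative REPL check of my own, using list concatenation as the binary function (f is (partial apply concat), e is ()): a factorization of the right arity folds back to the original sequence, while a wrong arity yields nil.
user=> (((nfoldr (partial apply concat) ()) 3) '((0) (1 2) (3)))
(0 1 2 3)
user=> (((nfoldr (partial apply concat) ()) 2) '((0) (1 2) (3)))
nil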
I want to define a function nunfoldr such that inv(nfoldr(f, e)(n)) = nunfoldr(inv(f), e)(n) for every binary function f, element e and natural number n, whenever inv(f)(y) is finite for every element y.
Moreover, I want the factorizations to be generated as lazily as possible, in a 2-dimensional sense of laziness. My goal is that, when some part of a factorization is retrieved for the first time, no (or very little) computation needed for later parts or later factorizations takes place. Similarly, when one factorization is retrieved for the first time, no computation needed for later ones takes place, whereas all the previous ones get, in effect, fully realized.
In an alternative formulation we can use the following longer version of nfoldr, which is equivalent to the shorter one when e is a neutral element.
(defn nfoldr [f e]
  (fn [n]
    (fn [s]
      (if (zero? n)
        (if (empty? s) e)
        (((fn rec [n]
            (fn [s]
              (if (= 1 n)
                (if (and (seq s) (empty? (rest s))) (first s))
                (if (seq s)
                  (if-some [x ((rec (dec n)) (rest s))]
                    (f (list (first s) x)))))))
          n)
         s)))))
a special case
This problem is a generalization of the problem of generating partitions described in that question. Let us see how the old problem can be reduced to the current one. We have for every natural number n:
npt(n) = inv(nconcat(n)) = inv(nfoldr(concat2 , ())(n)) = nunfoldr(inv(concat2) , ())(n) = nunfoldr(pt2 , ())(n)
where:
npt(n) generates n-ary partitions
nconcat(n) computes n-ary concatenation
concat2 computes binary concatenation
pt2 generates binary partitions
So the following definitions give a solution to that problem.
(defn generate [step start]
  (fn [x] (take-while some? (iterate step (start x)))))

(defn pt2-step [[x y]]
  (if (seq y) (list (concat x (list (first y))) (rest y))))

(def pt2-start (partial list ()))

(def pt2 (generate pt2-step pt2-start))

(def npt (nunfoldr pt2 ()))
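As a quick illustration of my own, pt2 generates all binary factorizations (two-part partitions) of a sequence:
user=> (pt2 '(0 1))
((() (0 1)) ((0) (1)) ((0 1) ()))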
I will summarize my story of solving this problem, using the old one to create example runs, and conclude with some observations and proposals for extension.
solution 0
At first, I refined/generalized the approach I took for solving the old problem. Here I write my own versions of concat and map mainly for a better presentation and, in the case of concat, for some added laziness. Of course we can use Clojure's versions or mapcat instead.
(defn fproduct [f]
  (fn [s]
    (lazy-seq
      (if (and (seq f) (seq s))
        (cons
          ((first f) (first s))
          ((fproduct (rest f)) (rest s)))))))

(defn concat' [s]
  (lazy-seq
    (if (seq s)
      (if-let [x (seq (first s))]
        (cons (first x) (concat' (cons (rest x) (rest s))))
        (concat' (rest s))))))

(defn map' [f]
  (fn rec [s]
    (lazy-seq
      (if (seq s)
        (cons (f (first s)) (rec (rest s)))))))

(defn nunfoldr [f e]
  (fn rec [n]
    (fn [x]
      (if (zero? n)
        (if (= e x) (list ()) ())
        ((comp
           concat'
           (map' (comp
                   (partial apply map)
                   (fproduct (list
                               (partial partial cons)
                               (rec (dec n))))))
           f)
         x)))))
In an attempt to get inner laziness we could replace (partial partial cons) with something like (comp (partial partial concat) list). Although this way we get inner LazySeqs, we do not gain any effective laziness because, before consing, most of the computation required for fully realizing the rest part takes place, something that seems unavoidable within this general approach. Based on the longer version of nfoldr, we can also define the following faster version.
(defn nunfoldr [f e]
  (fn [n]
    (fn [x]
      (if (zero? n)
        (if (= e x) (list ()) ())
        (((fn rec [n]
            (fn [x] (println \< x \>)
              (if (= 1 n)
                (list (list x))
                ((comp
                   concat'
                   (map' (comp
                           (partial apply map)
                           (fproduct (list
                                       (partial partial cons)
                                       (rec (dec n))))))
                   f)
                 x))))
          n)
         x)))))
Here I added a println call inside the main recursive function to get some visualization of eagerness. So let us demonstrate the outer laziness and inner eagerness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
(() () () () (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
()
solution 1
Then I thought of a more promising approach, using the function:
(defn transpose [s]
  (lazy-seq
    (if (every? seq s)
      (cons
        (map first s)
        (transpose (map rest s))))))
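For intuition, a small check of my own: transpose turns a sequence of sequences into the sequence of their columns, stopping at the shortest one.
user=> (transpose '((1 2 3) (4 5)))
((1 4) (2 5))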
To get the new solution we replace the previous argument in the map' call with:
(comp
  (partial map (partial apply cons))
  transpose
  (fproduct (list
              repeat
              (rec (dec n)))))
Trying to get inner laziness we could replace (partial apply cons) with #(cons (first %) (lazy-seq (second %))) but this is not enough. The problem lies in the (every? seq s) test inside transpose, where checking a lazy sequence of factorizations for emptiness (as a stopping condition) results in realizing it.
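A toy illustration of the issue (a hypothetical expensive thunk of my own): merely asking whether a LazySeq is empty forces the computation of its first element.
user=> (def fs (lazy-seq (println "computing next factorization...") (list '(1 2))))
#'user/fs
user=> (seq fs)   ; the emptiness check alone already does the work
computing next factorization...
((1 2))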
solution 2
A first way to tackle the previous problem that came to my mind was to use some additional knowledge about the number of n-ary factorizations of an element. This way we can repeat a certain number of times and use only this sequence for the stopping condition of transpose. So we will replace the test inside transpose with (seq (first s)), add an input count to nunfoldr and replace the argument in the map' call with:
(comp
  (partial map #(cons (first %) (lazy-seq (second %))))
  transpose
  (fproduct (list
              (partial apply repeat)
              (rec (dec n))))
  (fn [[x y]] (list (list ((count (dec n)) y) x) y)))
Let us turn to the problem of partitions and define:
(defn npt-count [n]
  (comp
    (partial apply *)
    #(map % (range 1 n))
    (partial comp inc)
    (partial partial /)
    count))

(def npt (nunfoldr pt2 () npt-count))
Now we can demonstrate outer and inner laziness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
(< (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
()
However, the dependence on additional knowledge and the extra computational cost make this solution unacceptable.
solution 3
Finally, I thought that in some crucial places I should use a kind of lazy sequence "with a non-lazy end", in order to be able to check for emptiness without realizing it. An empty such sequence is just a non-lazy empty list, and overall these sequences behave somewhat like the lazy-conses of the early days of Clojure. Using the definitions given below, we can reach an acceptable solution, which works under the assumption that at least one of the concat'ed sequences (when there is one) is always non-empty; this holds in particular when every element has at least one binary factorization and we are using the longer version of nunfoldr.
(def lazy? (partial instance? clojure.lang.IPending))

(defn empty-eager? [x] (and (not (lazy? x)) (empty? x)))

(defn transpose [s]
  (lazy-seq
    (if-not (some empty-eager? s)
      (cons
        (map first s)
        (transpose (map rest s))))))

(defn concat' [s]
  (if-not (empty-eager? s)
    (lazy-seq
      (if-let [x (seq (first s))]
        (cons (first x) (concat' (cons (rest x) (rest s))))
        (concat' (rest s))))
    ()))

(defn map' [f]
  (fn rec [s]
    (if-not (empty-eager? s)
      (lazy-seq (cons (f (first s)) (rec (rest s))))
      ())))
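A quick sanity check of my own for the new emptiness test: an eager empty list is recognized without any forcing, while a LazySeq, even one that will turn out empty, is never treated as eagerly empty.
user=> (empty-eager? ())
true
user=> (empty-eager? (lazy-seq ()))
false
user=> (empty-eager? '(1))
false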
Note that in this approach the input function f should produce lazy sequences of the new kind and the resulting n-ary factorizer will also produce such sequences. To take care of the new input requirement, for the problem of partitions we define:
(defn pt2 [s]
  (lazy-seq
    (let [start (list () s)]
      (cons
        start
        ((fn rec [[x y]]
           (if (seq y)
             (lazy-seq
               (let [step (list (concat x (list (first y))) (rest y))]
                 (cons step (rec step))))
             ()))
         start)))))
Once again, let us demonstrate outer and inner laziness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
(< (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() () (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
()
To make the input and output use standard lazy sequences (sacrificing a bit of laziness), we can add:
(defn lazy-end->eager-end [s]
  (if (seq s)
    (lazy-seq (cons (first s) (lazy-end->eager-end (rest s))))
    ()))

(defn eager-end->lazy-end [s]
  (lazy-seq
    (if-not (empty-eager? s)
      (cons (first s) (eager-end->lazy-end (rest s))))))

(def nunfoldr
  (comp
    (partial comp (partial comp eager-end->lazy-end))
    (partial apply nunfoldr)
    (fproduct (list
                (partial comp lazy-end->eager-end)
                identity))
    list))
observations and extensions
While creating solution 3, I observed that the old mechanism for lazy sequences in Clojure might not necessarily be inferior to the current one. With the transition, we gained the ability to create lazy sequences without any substantial computation taking place, but we lost the ability to check for emptiness without doing the computation needed to get one more element. Because both of these abilities can be important in some cases, it would be nice if a new mechanism were introduced that combined the advantages of the previous ones. Such a mechanism could again use an outer LazySeq thunk which, when forced, would return an empty list, a Cons, another LazySeq, or a new LazyCons thunk. This new thunk, when forced, would return a Cons or perhaps another LazyCons. Then empty? would force only LazySeq thunks, while first and rest would also force LazyCons thunks. In this setting map could look like this:
(defn map [f s]
  (lazy-seq
    (if (empty? s) ()
      (lazy-cons
        (cons (f (first s)) (map f (rest s)))))))
I have also noticed that the approach taken from solution 1 onwards lends itself to further generalization. If inside the argument in the map' call in the longer nunfoldr we replace cons with concat, transpose with some implementation of Cartesian product and repeat with another recursive call, we can then create versions that "split at different places". For example, using the following as the argument we can define a nunfoldm function that "splits in the middle" and corresponds to an easy-to-imagine nfoldm. Note that all "splitting strategies" are equivalent when f is associative.
(comp
  (partial map (partial apply concat))
  cproduct
  (fproduct (let [n-half (quot n 2)]
              (list (rec n-half) (rec (- n n-half))))))
Another natural modification would allow for infinite factorizations. To achieve this, if f generated infinite factorizations, nunfoldr(f , e)(n) should generate the factorizations in an order of type ω, so that each one of them could be produced in finite time.
Other possible extensions include dropping the n parameter, creating relational folds (in correspondence with the relational unfolds we consider here) and generically handling algebraic data structures other than sequences as input/output. This book, which I have just discovered, seems to contain valuable relevant information, given in a categorical/relational language.
Finally, to be able to do this kind of programming more conveniently, we could transfer it into a point-free, algebraic setting. This would require constructing considerable "extra machinery", in fact almost making a new language. This paper demonstrates such a language.
I defined a function is-prime? in Clojure that returns whether a number is prime or not, and I am trying to define a function prime-seq that returns all the prime numbers between two numbers n and m.
I created the function in Common Lisp first, since I am more comfortable with it, and I am trying to translate the code to Clojure. However, I cannot figure out how to replace Lisp's collect clause in Clojure.
This is my prime-seq function in Lisp:
(defun prime-seq (i j)
  (loop for x from i to j
        when (is-prime x)
        collect x))
And this is my attempt in Clojure, but it is not working:
(defn prime-seq? [n m]
  (def list ())
  (loop [k n]
    (cond
      (< k m) (if (prime? k) (def list (conj list k)))))
  (println list))
Any ideas?
loop in Clojure is not the same as CL loop. You probably want for:
(defn prime-seq [i j]
  (for [x (range i j)
        :when (is-prime x)]
    x))
Which is basically the same as saying:
(defn prime-seq [i j]
  (filter is-prime (range i j)))
which may be written using the ->> macro for readability:
(defn prime-seq [i j]
  (->> (range i j)
       (filter is-prime)))
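For example, assuming is-prime behaves as its name suggests (and noting that range excludes its upper bound, unlike CL's inclusive to), all three versions give something like:
user=> (prime-seq 10 30)
(11 13 17 19 23 29)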
However, you might actually want a lazy sequence of all prime numbers, which you could write with something like this:
(defonce prime-seq
  (let [c (fn [m numbers] (filter #(-> % (mod m) (not= 0)) numbers))
        f (fn g [s]
            (when (seq s)
              (cons (first s)
                    (lazy-seq (g (c (first s) (next s)))))))]
    (f (iterate inc 2))))
The lazy sequence will cache the results of the previous calculation, and you can use things like take-while and drop-while to filter the sequence.
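For instance, a quick check of my own:
user=> (take 10 prime-seq)
(2 3 5 7 11 13 17 19 23 29)
user=> (->> prime-seq (drop-while #(< % 10)) (take-while #(< % 30)))
(11 13 17 19 23 29)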
Also, you probably shouldn't be using def inside a function like that. def is for defining a var, which is essentially global. Calling def again does not give you local mutable state; it re-binds that global var to a new value each time. This is allowed mainly to enable iterative REPL-based development and shouldn't really be used that way in program logic. Vars are meant to be containers for global things like functions and singletons in your system (and dynamic vars can isolate re-bindings to a single thread). If the algorithm you're writing needs local mutable state, you could use a transient or an atom and hold a reference to it with let, but it would be more idiomatic to use the sequence-processing library or maybe a transducer.
Loop works more like a tail recursive function:
(defn prime-seq [i j]
  ;; transients must be used through the return value of conj!,
  ;; so the transient vector is threaded through the loop
  (loop [k i
         l (transient [])]
    (if (< k j)
      (recur (inc k)
             (if (is-prime k) (conj! l k) l))
      (persistent! l))))
But that should be considered strictly a performance optimisation. The decision to use transients shouldn't be taken lightly, and it's often best to start with a more functional algorithm, benchmark and optimise accordingly. Here is a way to write the same thing without the mutable state:
(defn prime-seq [i j]
  (loop [k i
         l []]
    (if (< k j)
      (recur (inc k)
             (if (is-prime k)
               (conj l k)
               l))
      l)))
I'd try to use for:
(for [x (range n m) :when (is-prime? x)] x)
Trying to define a factors function that will return a vector of all the factors of a number using loop/recur.
;; `prime?` borrowed from https://swizec.com/blog/comparing-clojure-and-node-js-for-speed/swizec/1593
(defn prime? [n]
  (if (even? n) false
    (let [root (num (int (Math/sqrt n)))]
      (loop [i 3]
        (if (> i root) true
          (if (zero? (mod n i)) false
            (recur (+ i 2))))))))
(defn factors [x] (
(loop [n x i 2 acc []]
(if (prime? n) (conj acc n)
(if (zero? (mod n i)) (recur (/ n i) 2 (conj acc i))
(recur n (inc i) acc))))))
But I keep running into the following error:
ArityException Wrong number of args (0) passed to: PersistentVector clojure.lang.AFn.throwArity
I must be missing something obvious here. Any suggestions are much appreciated!
Let me move the whitespace in your code so it's obvious to you what is wrong:
(defn factors [x]
  ((loop [n x i 2 acc []]
     (if (prime? n) (conj acc n)
       (if (zero? (mod n i)) (recur (/ n i) 2 (conj acc i))
         (recur n (inc i) acc))))))
You see that weird (( at the start of your function? What's that all about? Remember that in Clojure, as in lisps in general, parentheses are not a grouping construct! They are a function-call mechanism, and you can't just throw extras in for fun. Here, what you wrote has the following meaning:
1. Run this loop that will compute a vector.
2. Call the resulting value as a function, passing it no arguments.
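So the minimal fix is simply to drop that extra pair of parentheses and return the loop's value directly (same logic as your version, still relying on the prime? you pasted):
(defn factors [x]
  (loop [n x i 2 acc []]
    (if (prime? n) (conj acc n)
      (if (zero? (mod n i)) (recur (/ n i) 2 (conj acc i))
        (recur n (inc i) acc)))))

;; (factors 360) => [2 2 2 3 3 5]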
I'm working on 4clojure problems and a similar issue keeps coming up. I'll write a solution that works for all but one of the test cases. It's usually the one that is checking for lazy evaluation. The solution below works for all but the last test case. I've tried all kinds of solutions and can't seem to get it to stop evaluating until integer overflow. I read the chapter on lazy sequences in Joy of Clojure, but I'm having a hard time implementing them. Is there a rule of thumb I'm forgetting, like don't use loop or something like that?
; This version is non working at the moment, will try to edit a version that works
(defn i-between [p k coll]
  (loop [v [] coll coll]
    (let [i (first coll) coll (rest coll) n (first coll)]
      (cond (and i n)
            (let [ret (if (p i n) (cons k (cons i v)) (cons i v))]
              (recur ret coll))
            i
            (cons i v)
            :else v))))
Problem 132
Ultimate solution for those curious:
(fn i-between [p k coll]
  (letfn [(looper [coll]
            (if (empty? coll) coll
              (let [[h s & xs] coll
                    c (cond (and h s (p h s))
                            (list h k)
                            (and h s)
                            (list h)
                            :else (list h))]
                (lazy-cat c (looper (rest coll))))))]
    (looper coll)))
When I think about lazy sequences, what usually works is thinking about incremental cons'ing.
That is, each recursion step only adds a single element to the list, and of course you never use loop.
So what you have is something like this:
(cons (generate first) (recur rest))
When wrapped in lazy-seq, only the needed elements of the sequence are realized. For instance,
(take 5 (some-lazy-fn))
would only do 5 recursive calls to realize the needed elements.
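A minimal self-contained sketch of that pattern (names of my own choosing):
(defn nums-from [n]
  (lazy-seq (cons n (nums-from (inc n)))))   ; one element per recursive step, no loop

;; (take 5 (nums-from 0)) => (0 1 2 3 4), realizing only five elements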
A tentative, far from perfect solution to the 4clojure problem, that demonstrates the idea:
(fn intercalate
  [pred value col]
  (letfn [(looper [s head]
            (lazy-seq
              (if-let [sec (first s)]
                (if (pred head sec)
                  (cons head (cons value (looper (rest s) sec)))
                  (cons head (looper (rest s) sec)))
                (if head [head] []))))]
    (looper (rest col) (first col))))
Here, the local recursive function is looper; for each element it tests whether the predicate holds, and if so it realizes two elements (adding the interleaved one), otherwise it realizes just one.
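If you bind the function above to a name with def, say intercalate, a quick check of my own (using < as the predicate) gives:
user=> (intercalate < :less [1 2 3 1])
(1 :less 2 :less 3 1)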
Also, you can avoid explicit recursion by using higher-order functions:
(fn [p v xs]
  (mapcat
    #(if (p %1 %2) [%1 v] [%1])
    xs
    (lazy-cat (rest xs) (take 1 xs))))
But as @noisesmith said in the comment, you're just calling a function that calls lazy-seq.
I implemented a naive solution for printing a Pascal's triangle of depth N, which I'll include below. My question is: in what ways could this be improved to make it more idiomatic? I feel like there are a number of things that seem overly verbose or awkward; for example, this if block feels unnatural: (if (zero? (+ a b)) 1 (+ a b)). Any feedback is appreciated, thank you!
(defn add-row [cnt acc]
  (let [prev (last acc)]
    (loop [n 0 row []]
      (if (= n cnt)
        row
        (let [a (nth prev (- n 1) 0)
              b (nth prev n 0)]
          (recur (inc n) (conj row (if (zero? (+ a b)) 1 (+ a b)))))))))

(defn pascals-triangle [n]
  (loop [cnt 1 acc []]
    (if (> cnt n)
      acc
      (recur (inc cnt) (conj acc (add-row cnt acc))))))
(defn pascal []
  (iterate (fn [row]
             (map +' `(0 ~@row) `(~@row 0)))
           [1]))
Or if you're going for maximum concision:
(defn pascal []
  (->> [1] (iterate #(map +' `(0 ~@%) `(~@% 0)))))
To expand on this: the higher-order-function perspective is to look at your original definition and realize something like: "I'm actually just computing a function f on an initial value, and then calling f again, and then f again...". That's a common pattern, and so there's a function defined to cover the boring details for you, letting you just specify f and the initial value. And because it returns a lazy sequence, you don't have to specify n now: you can defer that, and work with the full infinite sequence, with whatever terminating condition you want.
For example, perhaps I don't want the first n rows, I just want to find the first row whose sum is a perfect square. Then I can just (first (filter (comp perfect-square? sum) (pascal))), without having to worry about how large an n I'll need to choose up front (assuming the obvious definitions of perfect-square? and sum).
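For instance (a quick check of my own), taking a prefix of the infinite sequence:
user=> (take 5 (pascal))
([1] (1 1) (1 2 1) (1 3 3 1) (1 4 6 4 1))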
Thanks to fogus for an improvement: I need to use +' rather than just + so that this doesn't overflow when it gets past Long/MAX_VALUE.
(defn next-row [row]
  (concat [1] (map +' row (drop 1 row)) [1]))

(defn pascals-triangle [n]
  (take n (iterate next-row '(1))))
Not as terse as the others, but here's mine:)
(defn A []
  (iterate
    (comp (partial map (partial reduce +))
          (partial partition-all 2 1)
          (partial cons 0))
    [1]))