Two related questions about sequence:
Given a transducer, e.g. (def xf (comp (filter odd?) (map inc))),
What's the relationship between (into [] xf (range 10)) or (into () xf (range 10)), and (sequence xf (range 10))? Is it just that there's no literal syntax for a lazy sequence that could be used as the first argument of into, so we need a separate function sequence for this purpose? (I know that sequence has another, non-transducer use: coercing a collection into a sequence of one kind or another.)
The Clojure transducers page says, about uses of sequence like the one above,
The resulting sequence elements are incrementally computed. These sequences will consume input incrementally as needed and fully realize intermediate operations. This behavior differs from the equivalent operations on lazy sequences.
To me that sounds as if sequence doesn't return a lazy sequence, yet the docstring for sequence says "When a transducer is supplied, returns a lazy sequence of applications of the transform to the items in coll(s), ....", and in fact (class (sequence xf (range 10))) returns clojure.lang.LazySeq. I think I don't understand the last sentence quoted above from the Clojure transducers page.
(sequence xform from) creates a lazy seq (via RT.chunkIteratorSeq) over a TransformerIterator, to which xform and from are passed. When the next value is requested, xform (the composition of transformations) is invoked on the next value from from.
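You can check at the REPL that the result really is the lazy sequence the docstring promises:
(class (sequence xf (range 10)))
; clojure.lang.LazySeq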
This behavior differs from the equivalent operations on lazy sequences.
What would be equivalent operations on lazy sequences? With your xf as an example,
applying filter odd? to (range 10), producing an intermediate lazy sequence, and then applying map inc to that intermediate lazy sequence, producing the final lazy sequence as the result.
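In code, with your xf split apart, the lazy-sequence equivalent would be:
(map inc (filter odd? (range 10)))
; (2 4 6 8 10)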
I would say that (into to xform from) is similar to (into to (sequence xform from)) when from is a collection that does not implement IReduceInit.
Internally, into uses (transduce xform conj to from), which does essentially the same thing as (reduce (xform conj) to from); at the end, clojure.core.protocols/coll-reduce is called:
(into [] (sequence xf (range 10)))
;[2 4 6 8 10]
(into [] xf (range 10))
;[2 4 6 8 10]
(transduce xf conj [] (range 10))
;[2 4 6 8 10]
(reduce (xf conj) [] (range 10))
;[2 4 6 8 10]
I modified your transducer a bit:
(defn hof-pr
  "Prints char c on each invocation of function f within a higher-order function"
  ([hof f c]
   (hof (fn [e] (print c) (f e))))
  ([hof f c coll]
   (hof (fn [e] (print c) (f e)) coll)))
(def map-inc-pr (partial hof-pr map inc \m))
(def filter-odd-pr (partial hof-pr filter odd? \f))
(def xf (comp (filter-odd-pr) (map-inc-pr)))
so that it prints a character on each transformation step.
Create s1 in the REPL as follows:
(def s1 (into [] xf (range 10)))
ffmffmffmffmffm
s1 is eagerly evaluated (it printed f for each filtering step and m for each mapping step). There is no evaluation when s1 is requested again:
s1
[2 4 6 8 10]
Let's create s2:
(def s2 (sequence xf (range 10)))
ffm
Only the first item in s2 is evaluated. The next items will be evaluated when requested:
s2
ffmffmffmffm(2 4 6 8 10)
Additionally, create s3 the old way:
(def s3 (map-inc-pr (filter-odd-pr (range 10))))
s3
ffffffffffmmmmm(2 4 6 8 10)
As you can see, there is no evaluation when s3 is defined. When s3 is requested, filtering is applied over all 10 elements, and after that mapping is applied over the remaining 5 elements, producing the final sequence.
I didn't find the current answer clear enough, so here goes...
sequence does return a LazySeq, but it is a chunked one. So when you play around with it in the REPL, you will often have the impression it is eager, because your collection will probably be too small and the chunking will make it look eager. The chunk size, I think, is a bit dynamic and won't always be exactly the same, but in general it seems to be 32. So your transducer will be applied to the input collection 32 elements at a time, lazily.
Here's a simple transducer that just prints the elements it reduces over and returns them untouched:
(defn printer
  [xf]
  (fn
    ([] (xf))
    ([result] (xf result))
    ([result input]
     (println input)
     (xf result input))))
If we create a sequence s of 100 elements with it:
(def s
  (sequence
   printer
   (range 100)))
;;> 0
We see that it prints 0, but nothing else. On the call to sequence, the first element is thus consumed from (range 100) and passed to the xf chain to be transformed, which in our case just prints it. No elements other than the first have been consumed yet.
Now if we take one element from s:
(take 1 s)
;;> 0
;;> 1
;;> 2
;;> 3
;;> 4
;;> 5
;;> 6
;;> 7
;;> 8
;;> 9
;;> 10
;;> 11
;;> 12
;;> 13
;;> 14
;;> 15
;;> 16
;;> 17
;;> 18
;;> 19
;;> 20
;;> 21
;;> 22
;;> 23
;;> 24
;;> 25
;;> 26
;;> 27
;;> 28
;;> 29
;;> 30
;;> 31
;;> 32
We see that it printed the first 32 elements. This is the normal behavior of a chunked lazy sequence in Clojure. You can think of it as semi-lazy, in that it consumes chunk-size elements at a time instead of one at a time.
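As an aside, you can check whether a seq is chunked with clojure.core's chunked-seq? predicate; on recent Clojure versions, ranges are chunked:
(chunked-seq? (seq (range 100)))
;; => true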
Now if we try to take any number of elements from 1 to 32, nothing else will be printed, because the first 32 elements have already been processed:
(take 1 s)
;; => (0)
(take 10 s)
;; => (0 1 2 3 4 5 6 7 8 9)
(take 24 s)
;; => (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23)
(take 32 s)
;; => (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31)
Nothing gets printed, and each take returns the expected results. I'm using ;; => for return values and ;;> for printed output.
Okay, now if we take the 33rd element, we expect to see the next chunk of 32 elements being printed:
(take 33 s)
;;> 33
;;> 34
;;> 35
;;> 36
;;> 37
;;> 38
;;> 39
;;> 40
;;> 41
;;> 42
;;> 43
;;> 44
;;> 45
;;> 46
;;> 47
;;> 48
;;> 49
;;> 50
;;> 51
;;> 52
;;> 53
;;> 54
;;> 55
;;> 56
;;> 57
;;> 58
;;> 59
;;> 60
;;> 61
;;> 62
;;> 63
;;> 64
Awesome! So once more, we see that only the next 32 were taken, which brings us to a total of 64 elements now processed.
Well, this demonstrates that sequence called with a transducer does in fact create a lazy chunked sequence whose elements are only processed when needed (chunk-size at a time).
So what's this about?:
The resulting sequence elements are incrementally computed. These sequences will consume input incrementally as needed and fully realize intermediate operations. This behavior differs from the equivalent operations on lazy sequences.
This is about the order in which the operations happen. With sequence and a transducer:
(sequence (comp A B C) coll)
will, for each element in the chunk, have it go through A -> B -> C, so you get:
A(e1) -> B(e1) -> C(e1)
A(e2) -> B(e2) -> C(e2)
...
A(e32) -> B(e32) -> C(e32)
While a normal lazy seq like:
(->> coll A B C)
will first have all the chunked elements go through A, and then have them all go through B, and then through C:
A(e1)
A(e2)
...
A(e32)
|
B(e1)
B(e2)
...
B(e32)
|
C(e1)
C(e2)
...
C(e32)
This requires an intermediate collection between each step, as the results of A have to be collected into a collection to then loop over and apply B, and so on.
We can see this with our previous example:
(def s
  (sequence
   (comp (filter odd?)
         printer
         (map vector)
         printer)
   (range 10)))
(take 1 s)
;;> 1
;;> [1]
;;> 3
;;> [3]
;;> 5
;;> [5]
;;> 7
;;> [7]
;;> 9
;;> [9]
(def l
  (->> (range 10)
       (filter odd?)
       (map #(do (println %) %))
       (map vector)
       (map #(do (println %) %))))
(take 1 l)
;;> 1
;;> 3
;;> 5
;;> 7
;;> 9
;;> [1]
;;> [3]
;;> [5]
;;> [7]
;;> [9]
See how the first interleaves filter -> vector -> filter -> vector, etc., while the second does filter all -> vector all. Well, this is what the quote from the doc means.
Now, one more thing: there is also a difference between the two in how the chunking is applied. With sequence and a transducer, it processes input elements until the transducer result contains chunk-size elements. In the lazy-seq case, it processes in chunks at each level, until every step has enough elements for what it needs to do.
Here's what I mean:
(def s
  (sequence
   (comp printer
         (filter odd?))
   (range 100)))
(take 1 s)
;;> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65
(def l
  (->> (range 100)
       (map #(do (print % "") %))
       (filter odd?)))
(take 1 l)
;;> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
Here I modified the printing logic to print on a single line, so it doesn't take as much space. If you look closely, s processed 66 elements of the input range, while l consumed only 32 elements.
The reason for this is what I said above. With sequence, we keep consuming input chunks until we have chunk-size results. In this case the chunk size is 32, and since we filter on odd?, it takes two input chunks to reach 32 results.
With a lazy seq, it doesn't try to fill a first chunk of results; it only consumes enough input chunks to satisfy the logic. In this case, a single input chunk of 32 elements is enough to find the one odd number we take.
I would also like the changed value to be random. For example, given
'(1 2 3 4 5)
one possible output is
'(1 3 3 4 5)
and another is
'(1 5 5 4 5)
There are more idiomatic ways to do this in Clojure. For example, you can generate an infinite lazy sequence of random changes to the initial collection, and then just take a random item from it:
(defn random-changes [items limit]
  (rest (reductions #(assoc %1 (rand-int (count items)) %2)
                    (vec items)
                    (repeatedly #(rand-int limit)))))
In the REPL:
user> (take 5 (random-changes '(1 2 3 4 5 6 7 8) 100))
([1 2 3 4 5 64 7 8] [1 2 3 4 5 64 58 8] [1 2 3 4 5 64 58 80]
[1 2 3 4 5 28 58 80] [1 2 3 71 5 28 58 80])
user> (nth (random-changes '(1 2 3 4 5 6 7 8) 100) 0)
[1 2 3 64 5 6 7 8]
and you can just take the item at the index you want (the item at index n is the collection after n + 1 changes):
user> (nth (random-changes '(1 2 3 4 5 6 7 8) 100) (rand-int 3))
[1 46 3 44 86 6 7 8]
or just use reduce to produce the n-times-changed coll in one go:
(defn random-changes [items limit changes-count]
  (reduce #(assoc %1 (rand-int (count items)) %2)
          (vec items)
          (repeatedly changes-count #(rand-int limit))))
In the REPL:
user> (random-changes [1 2 3 4 5 6] 100 3)
[27 2 33 4 76 6]
Also, assoc can apply several index/value changes to a vector at once, as in (assoc items 0 100 1 200 2 300), so you can do it like this:
(defn random-changes [items limit changes-count]
  (let [items (vec items)
        rands #(repeatedly changes-count (partial rand-int %))]
    (apply assoc items
           (interleave (rands (count items))
                       (rands limit)))))
In the REPL:
user> (random-changes [1 2 3 4 5 6] 100 3)
[1 65 61 44 5 6]
Figured it out. Decided to go a longer route and make a function.
(defn change-sequence
  [sequ limit x]
  ;; limit is the exclusive upper bound for the random replacement values
  (let [transsequ (into [] sequ)]
    (if (> x 0)
      (recur (assoc transsequ (rand-int (count transsequ)) (rand-int limit))
             limit
             (dec x))
      (seq sequ))))
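In the REPL (your output will differ, since the replacements are random; the 100 here is just an example upper bound):
user> (change-sequence '(1 2 3 4 5) 100 2)
(1 2 87 4 23)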
For example
[1 2 3 40 7 30 31 32 41]
after filtering should be
[1 2 3 30 31 32 41]
The problem doesn't seem very simple because I'd like to maximize the size of the resulting vector, so that if the starting vector is
[1 2 3 40 30 31 32 41 29]
I prefer this result
[1 2 3 30 31 32 41]
rather than just
[1 2 3 29]
Your problem is known as the longest increasing subsequence problem.
Via Rosetta Code:
(defn place [piles card]
  (let [[les gts] (->> piles (split-with #(<= (ffirst %) card)))
        newelem (cons card (->> les last first))
        modpile (cons newelem (first gts))]
    (concat les (cons modpile (rest gts)))))
(defn a-longest [cards]
  (let [piles (reduce place '() cards)]
    (->> piles last first reverse)))
(a-longest [1 2 3 40 30 31 32 41 29])
;; => (1 2 3 30 31 32 41)
Could probably be optimized to use transients if you care about performance.
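For reference, the transient pattern looks like this. This is just a generic sketch of the technique, not an actual rewrite of place, and take-evens is a made-up example function:
(defn take-evens [coll]
  ;; Build the result in a mutable transient vector with conj!,
  ;; then freeze it back into a persistent vector with persistent!.
  (persistent!
   (reduce (fn [acc x] (if (even? x) (conj! acc x) acc))
           (transient [])
           coll)))

(take-evens (range 10))
;; => [0 2 4 6 8]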
I have a vector of 4 numbers: [11 23 37 55]. I want to produce a sequence of 3 numbers where each is the difference between the (n+1)-th and the n-th element: ((23-11) (37-23) (55-37)) = (12 14 18).
How can I do that in Clojure?
Thanks!
This can be done easily with map.
user=> (def v [11 23 37 55])
#'user/v
user=> (map - (rest v) v)
(12 14 18)
When map gets more than one collection, it takes one element from each collection and passes them as the positional arguments to the function.
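For instance, with two collections:
user=> (map + [1 2 3] [10 20 30])
(11 22 33)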
In clojure, I would like to calculate several subvectors out of a big lazy sequence (maybe an infinite one).
The naive way would be to transform the lazy sequence into a vector and then to calculate the subvectors. But when doing that, I am losing the laziness.
I have a big sequence big-sequence and positions, a list of start and end positions. I would like to do the following calculation, but lazily:
(let [positions '((5 7) (8 12) (18 27) (28 37) (44 47))
      big-sequence-in-vec (vec big-sequence)]
  (map #(subvec big-sequence-in-vec (first %) (second %)) positions))
; ([5 6] [8 9 10 11] [18 19 20 21 22 23 24 25 26] [28 29 30 31 32 33 34 35 36] [44 45 46])
Is it feasible?
Remark: If big-sequence is infinite, vec will never return!
You are asking for a lazy sequence of sub-vectors of a lazy sequence. We can develop it layer by layer as follows.
(defn sub-vectors [spans c]
  (let [starts   (map first spans)                                 ; the start sequence of the spans
        finishes (map second spans)                                ; the finish sequence of the spans
        drops    (map - starts (cons 0 starts))                    ; the incremental numbers to drop
        takes    (map - finishes starts)                           ; the numbers to take
        tails    (next (reductions (fn [s n] (drop n s)) c drops)) ; the sub-sequences from whose fronts the sub-vectors are taken
        slices   (map (comp vec take) takes tails)]                ; the sub-vectors
    slices))
For example, given
(def positions '((5 7) (8 12) (18 27) (28 37) (44 47)))
we have
(sub-vectors positions (range))
; ([5 6] [8 9 10 11] [18 19 20 21 22 23 24 25 26] [28 29 30 31 32 33 34 35 36] [44 45 46])
Both the spans and the basic sequence are treated lazily. Both can be infinite.
For example,
(take 10 (sub-vectors (partition 2 (range)) (range)))
; ([0] [2] [4] [6] [8] [10] [12] [14] [16] [18])
This works out @schauho's suggestion in a form that is faster than @alfredx's solution, even as improved by the OP. Unlike my previous solution, it does not assume that the required sub-vectors are sorted.
The basic tool is an eager analogue of split-at:
(defn splitv-at [n v tail]
  (if (and (pos? n) (seq tail))
    (recur (dec n) (conj v (first tail)) (rest tail))
    [v tail]))
This removes the first n items from tail, appending them to the vector v, and returns the new v and tail as a pair. We use this to capture just as much more of the big sequence in the vector as is necessary to supply each sub-vector as it comes along.
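A quick check of splitv-at on its own:
(splitv-at 3 [] (range 10))
; [[0 1 2] (3 4 5 6 7 8 9)]
With splitv-at in hand, sub-spans walks the spans lazily: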
(defn sub-spans [spans coll]
  (letfn [(sss [spans [v tail]]
            (lazy-seq
             (when-let [[[from to] & spans-] (seq spans)]
               (let [[v- tail- :as pair] (splitv-at (- to (count v)) v tail)]
                 (cons (subvec v- from to) (sss spans- pair))))))]
    (sss spans [[] coll])))
For example
(def positions '((8 12) (5 7) (18 27) (28 37) (44 47)))
(sub-spans positions (range))
; ([8 9 10 11] [5 6] [18 19 20 21 22 23 24 25 26] [28 29 30 31 32 33 34 35 36] [44 45 46])
Since subvec works in short constant time, it takes linear time in the amount of the big sequence consumed.
Unlike my previous solution, it does not forget its head: it keeps all of the observed big sequence in memory.
(defn pos-pair-to-vec [[start end] big-sequence]
  (vec (for [idx (range start end)]
         (nth big-sequence idx))))

(let [positions '((5 7) (8 12) (18 27) (28 37) (44 47))
      big-seq (range)]
  (map #(pos-pair-to-vec % big-seq) positions))
You could use take on the big sequence with the maximum of the positions. You need to compute the values up to this point anyway to compute the subvectors, so you don't really "lose" anything.
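A minimal sketch of that first idea (eager up to the furthest end position; it assumes positions is finite):
(let [positions '((5 7) (8 12) (18 27) (28 37) (44 47))
      big-sequence (range)
      n (apply max (map second positions))
      v (vec (take n big-sequence))]
  (map (fn [[start end]] (subvec v start end)) positions))
; ([5 6] [8 9 10 11] [18 19 20 21 22 23 24 25 26] [28 29 30 31 32 33 34 35 36] [44 45 46])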
The trick is to write a lazy version of subvec using take and drop:
(defn subsequence [coll start end]
  (->> (drop start coll)
       (take (- end start))))
(let [positions '((5 7) (8 12) (18 27) (28 37) (44 47))
      big-sequence (range)]
  (map (fn [[start end]] (subsequence big-sequence start end)) positions))
;((5 6) (8 9 10 11) (18 19 20 21 22 23 24 25 26) (28 29 30 31 32 33 34 35 36) (44 45 46))