I know there is the reduced function to terminate such an infinite thing, but I am curious why, in the second version (with range without an argument), the reduction doesn't terminate once the accumulator reaches 100?
user=> (reduce (fn [a v] (if (< a 100) (+ a v) a)) (range 2000))
105
user=> (reduce (fn [a v] (if (< a 100) (+ a v) a)) (range))
;; ...never returns: the reduction keeps consuming the infinite range
As you mention, and for those who come along later googling for reduced: the reducing function does have the ability to declare the final answer of the reduction explicitly, with the guarantee that no further input will be consumed, by returning the result of calling (reduced the-final-answer)
user> (reduce (fn [a v]
(if (< a 100)
(+ a v)
(reduced a)))
(range))
105
In this case, when the collected result passes 100, the next iteration stops the reduction rather than contributing its value to the answer. This does consume one extra value from the input stream that is not included in the result.
user> (reduce (fn [a v]
(let [res (+ a v)]
(if (< res 100)
res
(reduced res))))
(range))
105
This finishes the reduction as soon as the threshold is met and does not consume any extra values from the lazy (and infinite) collection.
Because reduce applies the function to every element of the sequence, (range) must be fully realized.
(range)
produces an infinite sequence, and
(fn [a v] (if (< a 100) (+ a v) a))
doesn't stop the loop; it is applied to every element.
Executed at the REPL
(reduce (fn [a v] (if (< a 100) (+ a v) a)) (range))
means we eagerly want to compute and print the result, therefore the REPL hangs.
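If you just want to watch the accumulator without risking a hang, one option (a sketch, not part of the original answers) is reductions, which exposes the intermediate results lazily so you can stop consuming the infinite range yourself:
(take 1 (drop-while #(< % 100) (reductions + (range))))
;; => (105)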
problem formulation
Informally speaking, I want to write a function which, taking as input a function that generates binary factorizations and an element (usually neutral), creates an arbitrary length factorization generator. To be more specific, let us first define the function nfoldr in Clojure.
(defn nfoldr [f e]
(fn rec [n]
(fn [s]
(if (zero? n)
(if (empty? s) e)
(if (seq s)
(if-some [x ((rec (dec n)) (rest s))]
(f (list (first s) x))))))))
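For a concrete feel, here is a quick run with binary concatenation, where concat2 is an assumed helper taking a two-element list (matching the special case below):
(def concat2 (partial apply concat))
(((nfoldr concat2 ()) 3) '((0) (1 2) (3)))  ; => (0 1 2 3)
(((nfoldr concat2 ()) 2) '((0) (1 2) (3)))  ; => nil, input not in the domain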
Here nil is used with the meaning "undefined output, input not in function's domain". Additionally, let us view the inverse relation of a function f as a set-valued function defining inv(f)(y) = {x | f(x) = y}.
I want to define a function nunfoldr such that inv(nfoldr(f , e)(n)) = nunfoldr(inv(f) , e)(n) when for every element y inv(f)(y) is finite, for each binary function f, element e and natural number n.
Moreover, I want the factorizations to be generated as lazily as possible, in a 2-dimensional sense of laziness. My goal is that, when some part of a factorization is retrieved for the first time, no (or little) computation needed for later parts or later factorizations takes place. Similarly, when one factorization is retrieved for the first time, no computation needed for subsequent ones takes place, whereas all the previous ones get, in effect, fully realized.
In an alternative formulation we can use the following longer version of nfoldr, which is equivalent to the shorter one when e is a neutral element.
(defn nfoldr [f e]
(fn [n]
(fn [s]
(if (zero? n)
(if (empty? s) e)
((fn rec [n]
(fn [s]
(if (= 1 n)
(if (and (seq s) (empty? (rest s))) (first s))
(if (seq s)
(if-some [x ((rec (dec n)) (rest s))]
(f (list (first s) x)))))))
n)))))
a special case
This problem is a generalization of the problem of generating partitions described in that question. Let us see how the old problem can be reduced to the current one. We have for every natural number n:
npt(n) = inv(nconcat(n)) = inv(nfoldr(concat2 , ())(n)) = nunfoldr(inv(concat2) , ())(n) = nunfoldr(pt2 , ())(n)
where:
npt(n) generates n-ary partitions
nconcat(n) computes n-ary concatenation
concat2 computes binary concatenation
pt2 generates binary partitions
So the following definitions give a solution to that problem.
(defn generate [step start]
(fn [x] (take-while some? (iterate step (start x)))))
(defn pt2-step [[x y]]
(if (seq y) (list (concat x (list (first y))) (rest y))))
(def pt2-start (partial list ()))
(def pt2 (generate pt2-step pt2-start))
(def npt (nunfoldr pt2 ()))
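As a sanity check, pt2 enumerates the binary splits of a sequence:
(pt2 '(0 1 2))
;; => ((() (0 1 2)) ((0) (1 2)) ((0 1) (2)) ((0 1 2) ()))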
I will summarize my story of solving this problem, using the old one to create example runs, and conclude with some observations and proposals for extension.
solution 0
At first, I refined/generalized the approach I took for solving the old problem. Here I write my own versions of concat and map mainly for a better presentation and, in the case of concat, for some added laziness. Of course we can use Clojure's versions or mapcat instead.
(defn fproduct [f]
(fn [s]
(lazy-seq
(if (and (seq f) (seq s))
(cons
((first f) (first s))
((fproduct (rest f)) (rest s)))))))
(defn concat' [s]
(lazy-seq
(if (seq s)
(if-let [x (seq (first s))]
(cons (first x) (concat' (cons (rest x) (rest s))))
(concat' (rest s))))))
(defn map' [f]
(fn rec [s]
(lazy-seq
(if (seq s)
(cons (f (first s)) (rec (rest s)))))))
(defn nunfoldr [f e]
(fn rec [n]
(fn [x]
(if (zero? n)
(if (= e x) (list ()) ())
((comp
concat'
(map' (comp
(partial apply map)
(fproduct (list
(partial partial cons)
(rec (dec n))))))
f)
x)))))
In an attempt to get inner laziness we could replace (partial partial cons) with something like (comp (partial partial concat) list). Although this way we get inner LazySeqs, we do not gain any effective laziness because, before consing, most of the computation required for fully realizing the rest part takes place, something that seems unavoidable within this general approach. Based on the longer version of nfoldr, we can also define the following faster version.
(defn nunfoldr [f e]
(fn [n]
(fn [x]
(if (zero? n)
(if (= e x) (list ()) ())
(((fn rec [n]
(fn [x] (println \< x \>)
(if (= 1 n)
(list (list x))
((comp
concat'
(map' (comp
(partial apply map)
(fproduct (list
(partial partial cons)
(rec (dec n))))))
f)
x))))
n)
x)))))
Here I added a println call inside the main recursive function to get some visualization of eagerness. So let us demonstrate the outer laziness and inner eagerness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
(() () () () (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
< (0 1 2) >
()
solution 1
Then I thought of a more promising approach, using the function:
(defn transpose [s]
(lazy-seq
(if (every? seq s)
(cons
(map first s)
(transpose (map rest s))))))
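For example, transpose stops at the shortest row, which is what makes it applicable to the sequences produced here:
(transpose '((1 2 3) (:a :b :c :d)))
;; => ((1 :a) (2 :b) (3 :c))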
To get the new solution we replace the previous argument in the map' call with:
(comp
(partial map (partial apply cons))
transpose
(fproduct (list
repeat
(rec (dec n)))))
Trying to get inner laziness, we could replace (partial apply cons) with #(cons (first %) (lazy-seq (second %))), but this is not enough. The problem lies in the (every? seq s) test inside transpose, where checking a lazy sequence of factorizations for emptiness (as a stopping condition) results in realizing it.
solution 2
A first way to tackle the previous problem that came to my mind was to use some additional knowledge about the number of n-ary factorizations of an element. This way we can repeat a certain number of times and use only this sequence for the stopping condition of transpose. So we will replace the test inside transpose with (seq (first s)), add an input count to nunfoldr and replace the argument in the map' call with:
(comp
(partial map #(cons (first %) (lazy-seq (second %))))
transpose
(fproduct (list
(partial apply repeat)
(rec (dec n))))
(fn [[x y]] (list (list ((count (dec n)) y) x) y)))
Let us turn to the problem of partitions and define:
(defn npt-count [n]
(comp
(partial apply *)
#(map % (range 1 n))
(partial comp inc)
(partial partial /)
count))
(def npt (nunfoldr pt2 () npt-count))
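As a quick check of the counting function: the product telescopes to a binomial coefficient, C(c+n-1, n-1) for an input of length c, so a 3-element sequence has 35 5-ary partitions:
((npt-count 5) (range 3))  ; => 35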
Now we can demonstrate outer and inner laziness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
(< (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
()
However, the dependence on additional knowledge and the extra computational cost make this solution unacceptable.
solution 3
Finally, I thought that in some crucial places I should use a kind of lazy sequence "with a non-lazy end", in order to be able to check for emptiness without realizing anything. An empty such sequence is just a non-lazy empty list, and overall these sequences behave somewhat like the lazy-conss of the early days of Clojure. Using the definitions given below, we can reach an acceptable solution, which works under the assumption that at least one of the concat'ed sequences (when there is one) is always non-empty, something that holds in particular when every element has at least one binary factorization and we are using the longer version of nunfoldr.
(def lazy? (partial instance? clojure.lang.IPending))
(defn empty-eager? [x] (and (not (lazy? x)) (empty? x)))
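The point of empty-eager? is that it answers false for an unrealized LazySeq without forcing it:
(empty-eager? (lazy-seq ()))  ; => false, nothing gets realized
(empty-eager? ())             ; => true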
(defn transpose [s]
(lazy-seq
(if-not (some empty-eager? s)
(cons
(map first s)
(transpose (map rest s))))))
(defn concat' [s]
(if-not (empty-eager? s)
(lazy-seq
(if-let [x (seq (first s))]
(cons (first x) (concat' (cons (rest x) (rest s))))
(concat' (rest s))))
()))
(defn map' [f]
(fn rec [s]
(if-not (empty-eager? s)
(lazy-seq (cons (f (first s)) (rec (rest s))))
())))
Note that in this approach the input function f should produce lazy sequences of the new kind and the resulting n-ary factorizer will also produce such sequences. To take care of the new input requirement, for the problem of partitions we define:
(defn pt2 [s]
(lazy-seq
(let [start (list () s)]
(cons
start
((fn rec [[x y]]
(if (seq y)
(lazy-seq
(let [step (list (concat x (list (first y))) (rest y))]
(cons step (rec step))))
()))
start)))))
Once again, let us demonstrate outer and inner laziness.
user=> (first ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
(< (0 1 2) >
() < (0 1 2) >
() < (0 1 2) >
() () (0 1 2))
user=> (ffirst ((npt 5) (range 3)))
< (0 1 2) >
< (0 1 2) >
()
To make the input and output use standard lazy sequences (sacrificing a bit of laziness), we can add:
(defn lazy-end->eager-end [s]
(if (seq s)
(lazy-seq (cons (first s) (lazy-end->eager-end (rest s))))
()))
(defn eager-end->lazy-end [s]
(lazy-seq
(if-not (empty-eager? s)
(cons (first s) (eager-end->lazy-end (rest s))))))
(def nunfoldr
(comp
(partial comp (partial comp eager-end->lazy-end))
(partial apply nunfoldr)
(fproduct (list
(partial comp lazy-end->eager-end)
identity))
list))
observations and extensions
While creating solution 3, I observed that the old mechanism for lazy sequences in Clojure might not necessarily be inferior to the current one. With the transition, we gained the ability to create lazy sequences without any substantial computation taking place, but lost the ability to check for emptiness without doing the computation needed to get one more element. Because both of these abilities can be important in some cases, it would be nice if a new mechanism were introduced that combined the advantages of the previous ones. Such a mechanism could again use an outer LazySeq thunk, which when forced would return an empty list or a Cons or another LazySeq or a new LazyCons thunk. This new thunk, when forced, would return a Cons or perhaps another LazyCons. Now empty? would force only LazySeq thunks, while first and rest would also force LazyCons. In this setting map could look like this:
(defn map [f s]
(lazy-seq
(if (empty? s) ()
(lazy-cons
(cons (f (first s)) (map f (rest s)))))))
I have also noticed that the approach taken from solution 1 onwards lends itself to further generalization. If inside the argument in the map' call in the longer nunfoldr we replace cons with concat, transpose with some implementation of Cartesian product and repeat with another recursive call, we can then create versions that "split at different places". For example, using the following as the argument we can define a nunfoldm function that "splits in the middle" and corresponds to an easy-to-imagine nfoldm. Note that all "splitting strategies" are equivalent when f is associative.
(comp
(partial map (partial apply concat))
cproduct
(fproduct (let [n-half (quot n 2)]
(list (rec n-half) (rec (- n n-half))))))
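The text leaves cproduct unspecified; a minimal eager sketch with the shape assumed above (a sequence of two sequences in, a sequence of two-element lists out) could be:
(defn cproduct [[xs ys]]
  (for [x xs, y ys] (list x y)))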
Another natural modification would allow for infinite factorizations. To achieve this, if f generated infinite factorizations, nunfoldr(f , e)(n) should generate the factorizations in an order of type ω, so that each one of them could be produced in finite time.
Other possible extensions include dropping the n parameter, creating relational folds (in correspondence with the relational unfolds we consider here) and generically handling algebraic data structures other than sequences as input/output. This book, which I have just discovered, seems to contain valuable relevant information, given in a categorical/relational language.
Finally, to be able to do this kind of programming more conveniently, we could transfer it into a point-free, algebraic setting. This would require constructing considerable "extra machinery", in fact almost making a new language. This paper demonstrates such a language.
Given vector size n and window size k, how can I efficiently calculate the sliding window minimum in n log k time? I.e., for vector [1 4 3 2 5 4 2] and window size 2, the output would be [1 3 2 2 4 2].
Obviously I can do it using partition and map, but that's n * k time.
I think I need to keep track of the minimum in a sorted map and update the map when it's outside the window. But although I can get the min of a sorted map in log time, searching through the map to find any indices that have expired is not log time.
Thanks.
You can solve this with a priority queue based on Clojure's priority map data structure. We index the values in the window with their position in the vector.
The value of its first entry is the window minimum.
We add the new entry and get rid of the oldest one by key/vector-position.
A possible implementation is
(require '[clojure.data.priority-map :refer [priority-map]])
(defn windowed-min [k coll]
(let [numbered (map-indexed list coll)
[head tail] (split-at k numbered)
init-win (into (priority-map) head)
win-seq (reductions
(fn [w [i n]]
(-> w (dissoc (- i k)) (assoc i n)))
init-win
tail)]
(map (comp val first) win-seq)))
For example,
(windowed-min 2 [1 4 3 2 5 4 2])
=> (1 3 2 2 4 2)
The solution is developed lazily, so it can be applied to an endless sequence.
After the initialisation, which is O(k), the function computes each element in the sequence in O(log k) time, as noted here.
You can solve this in linear time, O(n) rather than O(n log k), as described in 1. http://articles.leetcode.com/sliding-window-maximum/ (easily changed from finding the max to finding the min) and 2. https://people.cs.uct.ac.za/~ksmith/articles/sliding_window_minimum.html
The approach needs a double-ended queue to manage previous values, which takes O(1) time for most queue operations (push/pop/peek, etc.) rather than the O(log k) of a priority queue (i.e. priority map). I used a double-ended queue from https://github.com/pjstadig/deque-clojure
Main code implementing the algorithm from the first reference above (for min rather than max):
(defn windowed-min-queue [w a]
  (let [deque-init  (fn [] (reduce (fn [dq i]
                                     (dq-push-back i (prune-back a i dq)))
                                   empty-deque (range w)))
        process-min (fn [dq] (reductions (fn [q i]
                                           (->> q
                                                (prune-back a i)
                                                (prune-front i w)
                                                (dq-push-back i)))
                                         dq (range w (count a))))
        init   (deque-init)
        result (process-min init)]
    (map #(nth a (dq-front %)) result)))
Comparing the speed of this method to the other solution that uses a priority map, we have (note: I like the other solution as well, since it's simpler):
; Test using Random arrays of data
(def N 1000000)
(def a (into [] (take N (repeatedly #(rand-int 50)))))
(def b (into [] (take N (repeatedly #(rand-int 50)))))
(def w 1024)
; Solution based upon the double-ended queue
(time (doall (windowed-min-queue w a)))
;=> "Elapsed time: 1820.526521 msecs"
; Solution based upon the Priority Map (see the other answer, which is also great since it's simpler)
(time (doall (windowed-min w b)))
;=> "Elapsed time: 8290.671121 msecs"
This is over 4x faster, which is great considering the PriorityMap is written in Java while the double-ended-queue code is pure Clojure (see https://github.com/pjstadig/deque-clojure)
Including the other wrappers/utilities used on the double-ended queue, for reference (proto and empty-deque come from the deque-clojure library):
(defn dq-push-front [e dq]
(conj dq e))
(defn dq-push-back [e dq]
(proto/inject dq e))
(defn dq-front [dq]
(first dq))
(defn dq-pop-front [dq]
(pop dq))
(defn dq-pop-back [dq]
(proto/eject dq))
(defn deque-empty? [dq]
(identical? empty-deque dq))
(defn dq-back [dq]
(proto/last dq))
(defn prune-back [a i dq]
(cond
(deque-empty? dq) dq
(< (nth a i) (nth a (dq-back dq))) (recur a i (dq-pop-back dq))
:else dq))
(defn prune-front [i w dq]
(cond
(deque-empty? dq) dq
(<= (dq-front dq) (- i w)) (recur i w (dq-pop-front dq))
:else dq))
My solution uses two auxiliary maps to achieve fast performance. I map positions (keys) to their values, and also store the values with their occurrence counts in a sorted map. Upon each move of the window, I update the maps and get the minimum of the sorted map, all in log time.
The downside is that the code is a lot uglier, not lazy, and not idiomatic. The upside is that it outperforms the priority-map solution by about 2x. I think a lot of that, though, can be blamed on the laziness of the solution above.
(defn- init-aux-maps [w v]
(let [sv (subvec v 0 w)
km (->> sv (map-indexed vector) (into (sorted-map)))
vm (->> sv frequencies (into (sorted-map)))]
[km vm]))
(defn- update-aux-maps [[km vm] j x]
(let [[ai av] (first km)
km (-> km (dissoc ai) (assoc j x))
vm (if (= (vm av) 1) (dissoc vm av) (update vm av dec))
vm (if (nil? (get vm x)) (assoc vm x 1) (update vm x inc))]
[km vm]))
(defn- get-minimum [[_ vm]] (ffirst vm))
(defn sliding-minimum [w v]
(loop [i 0, j w, am (init-aux-maps w v), acc []]
(let [acc (conj acc (get-minimum am))]
(if (< j (count v))
(recur (inc i) (inc j) (update-aux-maps am j (v j)) acc)
acc))))
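Checking against the example from the question:
(sliding-minimum 2 [1 4 3 2 5 4 2])  ; => [1 3 2 2 4 2]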
I am coming from a Java background trying to learn Clojure. As the best way of learning is by actually writing some code, I took a very simple example of finding even numbers in a vector. Below is the piece of code I wrote:
(defn even-vector-2 [input]
(def output [])
(loop [x input]
(if (not= (count x) 0)
(do
(if (= (mod (first x) 2) 0)
(do
(def output (conj output (first x)))))
(recur (rest x)))))
output)
This code works, but it is lame that I had to use a global symbol to make it work. The reason I had to use the global symbol is that I wanted to change the state of the symbol every time I found an even number in the vector, and let doesn't allow me to change the value of a binding. Is there a way this can be achieved without using global symbols / atoms?
The idiomatic solution is straightforward:
(filter even? [1 2 3])
; -> (2)
For your educational purposes, here is an implementation with loop/recur:
(defn filter-even [v]
(loop [r []
[x & xs :as v] v]
(if (seq v) ;; if current v is not empty
(if (even? x)
(recur (conj r x) xs) ;; bind r to r with x, bind v to rest
(recur r xs)) ;; leave r as is
r))) ;; terminate by not calling recur, return r
The main problem with your code is you're polluting the namespace by using def. You should never really use def inside a function. If you absolutely need mutability, use an atom or similar object.
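For illustration only, an atom-based sketch (still not idiomatic for this problem) at least keeps the mutation local to the function:
(defn even-vector-atom [input]
  (let [output (atom [])]
    (doseq [x input]
      (when (even? x)
        (swap! output conj x)))
    @output))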
Now, for your question. If you want to do this the "hard way", just make output a part of the loop:
(defn even-vector-3 [input]
(loop [[n & rest-input] input ; Deconstruct the head from the tail
output []] ; Output is just looped with the input
(if n ; n will be nil if the list is empty
(recur rest-input
(if (= (mod n 2) 0)
(conj output n)
output)) ; Adding nothing since the number is odd
output)))
Rarely is explicit looping necessary though. This is a typical case for a fold: you want to accumulate a list that's a variable-length version of another list. This is a quick version:
(defn even-vector-4 [input]
(reduce ; Reducing the input into another list
(fn [acc n]
(if (= (rem n 2) 0)
(conj acc n)
acc))
[] ; This is the initial accumulator.
input))
Really though, you're just filtering a list. Just use the core's filter:
(filter #(= (rem % 2) 0) [1 2 3 4])
Note, filter is lazy.
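Laziness means you can even filter an infinite sequence, as long as you only consume part of it:
(take 3 (filter even? (range)))  ; => (0 2 4)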
Try
#(filterv even? %)
if you want to return a vector or
#(filter even? %)
if you want a lazy sequence.
If you want to combine this with more transformations, you might want to go for a transducer:
(filter even?)
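For example, such a transducer composes with other steps and can be run eagerly with into (a small illustrative sketch):
(into [] (comp (filter even?) (map inc)) [1 2 3 4])
;; => [3 5]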
If you wanted to write it using loop/recur, I'd do it like this:
(defn keep-even
"Accepts a vector of numbers, returning a vector of the even ones."
[input]
(loop [result []
unused input]
(if (empty? unused)
result
(let [curr-value (first unused)
next-result (if (even? curr-value)
(conj result curr-value)
result)
next-unused (rest unused) ]
(recur next-result next-unused)))))
This gets the same result as the built-in filter function.
Take a look at filter, even? and vec
check out http://cljs.info/cheatsheet/
(defn even-vector-2 [input] (vec (filter even? input)))
If you want a lazy solution, filter is your friend.
Here is a simple non-lazy solution (loop/recur can be avoided when you just apply the same function to each element):
(defn keep-even-numbers
[coll]
(reduce
(fn [agg nb]
(if (zero? (rem nb 2)) (conj agg nb) agg))
[] coll))
If you like mutability for "fun", here is a solution with a temporary mutable collection:
(defn mkeep-even-numbers
[coll]
(persistent!
(reduce
(fn [agg nb]
(if (zero? (rem nb 2)) (conj! agg nb) agg))
(transient []) coll)))
...which is slightly faster!
mod would be better than rem if you extend the odd/even definition to negative integers.
You can also replace [] with the collection you want; here, a vector!
In Clojure, you generally don't need to write a low-level loop with loop/recur. Here is a quick demo.
(ns tst.clj.core
(:require
[tupelo.core :as t] ))
(t/refer-tupelo)
(defn is-even?
"Returns true if x is even, otherwise false."
[x]
(zero? (mod x 2)))
; quick sanity checks
(spyx (is-even? 2))
(spyx (is-even? 3))
(defn keep-even
"Accepts a vector of numbers, returning a vector of the even ones."
[input]
(into [] ; forces result into vector, eagerly
(filter is-even? input)))
; demonstrate on [0 1 2...9]
(spyx (keep-even (range 10)))
with result:
(is-even? 2) => true
(is-even? 3) => false
(keep-even (range 10)) => [0 2 4 6 8]
Your project.clj needs the following for spyx to work:
:dependencies [
  [tupelo "0.9.11"] ]
In the answer to this question, the responder uses the function reduced:
(defn state [t]
  (reduce (fn [[s1 t1] [s2 t2]]
            (if (>= t1 t) (reduced s1) [s2 (+ t1 t2)]))
          (thomsons-lamp)))
I looked at the doc and source and can't fully grok it.
(defn reduced
"Wraps x in a way such that a reduce will terminate with the value x"
{:added "1.5"}
[x]
(clojure.lang.Reduced. x))
In the example above I think (reduced s1) is supposed to end the reduction and return s1.
Is using reduce + reduced equivalent to hypothetical reduce-while or reduce-until functions if either existed?
Reduced provides a way to break out of a reduce with the value provided.
For example to add the numbers in a sequence
(reduce (fn [acc x] (+ acc x)) (range 10))
returns 45
(reduce (fn [acc x] (if (> acc 20) (reduced "I have seen enough") (+ acc x))) (range 10))
returns "I have seen enough"
What if map and doseq had a baby? I'm trying to write a function or macro like Common Lisp's mapc, but in Clojure. This does essentially what map does, but only for side-effects, so it doesn't need to generate a sequence of results, and wouldn't be lazy. I know that one can iterate over a single sequence using doseq, but map can iterate over multiple sequences, applying a function to each element in turn of all of the sequences. I also know that one can wrap map in dorun. (Note: This question has been extensively edited after many comments and a very thorough answer. The original question focused on macros, but those macro issues turned out to be peripheral.)
This is fast (according to criterium):
(defn domap2
[f coll]
(dotimes [i (count coll)]
(f (nth coll i))))
but it only accepts one collection. This accepts arbitrary collections:
(defn domap3
[f & colls]
(dotimes [i (apply min (map count colls))]
(apply f (map #(nth % i) colls))))
but it's very slow by comparison. I could also write a version like the first with different parameter cases [f c1 c2], [f c1 c2 c3], etc., but in the end I'll need a case that handles arbitrary numbers of collections, like the last example, which is simpler anyway. I've tried many other solutions as well.
Since the second example is very much like the first except for the use of apply and the map inside the loop, I suspect that getting rid of them would speed things up a lot. I have tried to do this by writing domap2 as a macro, but the way that the catch-all variable after & is handled keeps tripping me up, as illustrated above.
Other examples (out of 15 or 20 different versions), benchmark code, and times on a Macbook Pro that's a few years old (full source here):
(defn domap1
[f coll]
(doseq [e coll]
(f e)))
(defn domap7
[f coll]
(dorun (map f coll)))
(defn domap18
[f & colls]
(dorun (apply map f colls)))
(defn domap15
[f coll]
(when (seq coll)
(f (first coll))
(recur f (rest coll))))
(defn domap17
[f & colls]
(let [argvecs (apply (partial map vector) colls)] ; seq of ntuples of interleaved vals
(doseq [args argvecs]
(apply f args))))
I'm working on an application that uses core.matrix matrices and vectors, but feel free to substitute your own side-effecting functions below.
(ns tst
(:use criterium.core
[clojure.core.matrix :as mx]))
(def howmany 1000)
(def a-coll (vec (range howmany)))
(def maskvec (zero-vector :vectorz howmany))
(defn unmaskit!
[idx]
(mx/mset! maskvec idx 1.0)) ; sets element idx of maskvec to 1.0
(defn runbench
[domapfn label]
(print (str "\n" label ":\n"))
(bench (def _ (domapfn unmaskit! a-coll))))
Mean execution times according to Criterium, in microseconds:
domap1: 12.317551 [doseq]
domap2: 19.065317 [dotimes]
domap3: 265.983779 [dotimes with apply, map]
domap7: 53.263230 [map with dorun]
domap18: 54.456801 [map with dorun, multiple collections]
domap15: 32.034993 [recur]
domap17: 95.259984 [doseq, multiple collections interleaved using map]
EDIT: It may be that dorun+map is the best way to implement domap for multiple large lazy-sequence arguments, but doseq is still king when it comes to single lazy sequences. Performing the same operation as unmaskit! above, but running the index through (mod idx 1000) and iterating over (range 100000000), doseq is about twice as fast as dorun+map in my tests (i.e. (def domap25 (comp dorun map))).
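For reference, the dorun+map composition mentioned above is just:
(def domap25 (comp dorun map))
(domap25 println [1 2] [3 4])  ; prints "1 3" then "2 4", returns nil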
You don't need a macro, and I don't see why a macro would be helpful here.
user> (defn do-map [f & lists] (apply mapv f lists) nil)
#'user/do-map
user> (do-map (comp println +) (range 2 6) (range 8 11) (range 22 40))
32
35
38
nil
note do-map here is eager (thanks to mapv) and only executes for side effects
Macros can use varargs lists, as the (useless!) macro version of do-map demonstrates:
user> (defmacro do-map-macro [f & lists] `(do (mapv ~f ~@lists) nil))
#'user/do-map-macro
user> (do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40))
32
35
38
nil
user> (macroexpand-1 '(do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40)))
(do (clojure.core/mapv (comp println +) (range 2 6) (range 8 11) (range 22 40)) nil)
Addendum:
addressing the efficiency / garbage-creation concerns:
note that below I truncate the output of the criterium bench function, for conciseness reasons:
(defn do-map-loop
[f & lists]
(loop [heads lists]
(when (every? seq heads)
(apply f (map first heads))
(recur (map rest heads)))))
user> (crit/bench (with-out-str (do-map-loop (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 11.367804 µs
...
This looks promising because it doesn't create a data structure that we aren't using anyway (unlike mapv above). But it turns out it is slower than the previous (maybe because of the two map calls?).
user> (crit/bench (with-out-str (do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 7.427182 µs
...
user> (crit/bench (with-out-str (do-map (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 8.355587 µs
...
Since the loop still wasn't faster, let's try a version which specializes on arity, so that we don't need to call map twice on every iteration:
(defn do-map-loop-3
[f a b c]
(loop [[a & as] a
[b & bs] b
[c & cs] c]
(when (and a b c)
(f a b c)
(recur as bs cs))))
Remarkably, though this is faster, it is still slower than the version that just used mapv:
user> (crit/bench (with-out-str (do-map-loop-3 (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 9.450108 µs
...
Next I wondered if the size of the input was a factor. With larger inputs...
user> (def test-input (repeatedly 3 #(range (rand-int 100) (rand-int 1000))))
#'user/test-input
user> (map count test-input)
(475 531 511)
user> (crit/bench (with-out-str (apply do-map-loop-3 (comp println +) test-input)))
...
Execution time mean : 1.005073 ms
...
user> (crit/bench (with-out-str (apply do-map (comp println +) test-input)))
...
Execution time mean : 756.955238 µs
...
Finally, for completeness, the timing of do-map-loop (which as expected is slightly slower than do-map-loop-3)
user> (crit/bench (with-out-str (apply do-map-loop (comp println +) test-input)))
...
Execution time mean : 1.553932 ms
As we see, even with larger input sizes, mapv is faster.
(I should note for completeness here that map is slightly faster than mapv, but not by a large degree).