I looked at map's source code, which basically keeps creating lazy sequences. I would have thought that iterating over a collection and adding to a transient vector would be faster, but clearly it isn't. What don't I understand about Clojure's performance behavior?
;=> (time (do-with / (range 1 1000) (range 1 1000)))
;"Elapsed time: 23.1808 msecs"
;
; vs
;=> (time (doall (map #(/ %1 %2) (range 1 1000) (range 1 1000))))
;"Elapsed time: 2.604174 msecs"
(defn do-with
  [fn coll1 coll2]
  (let [end (count coll1)]
    (loop [i 0
           res (transient [])]
      (if (= i end)
        (persistent! res)
        (let [x (nth coll1 i)
              y (nth coll2 i)
              r (fn x y)]
          (recur (inc i) (conj! res r)))))))
In order of conjectured impact on relative results:
Your do-with function uses nth to access the individual items in the input collections. nth operates in linear time on ranges, making do-with quadratic. Needless to say, this will kill performance on large collections.
range produces chunked seqs and map handles those extremely efficiently. (Essentially it produces chunks of up to 32 elements -- here it will in fact be exactly 32 -- by running a tight loop over the internal array of each input chunk in turn, placing results in internal arrays of output chunks.)
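You can see the chunking at the REPL (a quick sketch; in recent Clojure versions range yields a chunked seq):
(chunked-seq? (seq (range 1 1000)))
;=> true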
Benchmarking with time doesn't give you steady state performance. (Which is why one should really use a proper benchmarking library; in the case of Clojure, Criterium is the standard solution.)
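A minimal Criterium session might look like this (a sketch, assuming the criterium dependency is on the classpath):
(require '[criterium.core :refer [quick-bench]])
(quick-bench (doall (map / (range 1 1000) (range 1 1000))))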
Incidentally, (map #(/ %1 %2) xs ys) can simply be written as (map / xs ys).
Update:
I've benchmarked the map version, the original do-with and a new do-with version with Criterium, using (range 1 1000) as both inputs in each case (as in the question text), obtaining the following mean execution times:
;;; (range 1 1000)
new do-with 170.383334 µs
(doall (map ...)) 230.756753 µs
original do-with 15.624444 ms
Additionally, I've repeated the benchmark using a vector stored in a Var as input rather than ranges (that is, with (def r (vec (range 1 1000))) at the start and using r as both collection arguments in each benchmark). Unsurprisingly, the original do-with came in first -- nth is very fast on vectors (plus using nth with a vector avoids all the intermediate allocations involved in seq traversal).
;;; (vec (range 1 1000))
original do-with 73.975419 µs
new do-with 87.399952 µs
(doall (map ...)) 153.493128 µs
Here's the new do-with with linear time complexity:
(defn do-with [f xs ys]
  (loop [xs (seq xs)
         ys (seq ys)
         ret (transient [])]
    (if (and xs ys)
      (recur (next xs)
             (next ys)
             (conj! ret (f (first xs) (first ys))))
      (persistent! ret))))
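A quick sanity check (REPL sketch):
(do-with / (range 1 5) (range 1 5))
;=> [1 1 1 1]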
Related
If I have a vector (def v [1 2 3]), I can replace the first element with (assoc v 0 666), obtaining [666 2 3].
But if I try to do the same after mapping over the vector:
(def v (map inc [1 2 3]))
(assoc v 0 666)
the following exception is thrown:
ClassCastException clojure.lang.LazySeq cannot be cast to clojure.lang.Associative
What's the most idiomatic way of editing or updating a single element of a lazy sequence?
Should I use map-indexed and alter only the index 0 or realize the lazy sequence into a vector and then edit it via assoc/update?
The first has the advantage of maintaining the laziness, while the second is less efficient but maybe more obvious.
I guess for the first element I can also use drop and cons.
Are there any other ways? I was not able to find any examples anywhere.
What's the most idiomatic way of editing or updating a single element of a lazy sequence?
There's no built-in function for modifying a single element of a sequence/list, but map-indexed is probably the closest thing. It's not an efficient operation for lists. Assuming you don't need laziness, I'd pour the sequence into a vector, which is what mapv does, i.e. (into [] (map f coll)). Depending on how you use your modified sequence, it may be just as performant to vectorize it and modify it.
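For example, vectorizing first and then updating in place (a sketch):
(let [v (mapv inc [1 2 3])]
  (assoc v 0 666))
;=> [666 3 4]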
You could write a function using map-indexed to do something similar and lazy:
(defn assoc-seq [s i v]
  (map-indexed (fn [j x] (if (= i j) v x)) s))
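Usage (a sketch):
(assoc-seq (map inc [1 2 3]) 0 666)
;=> (666 3 4)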
Or if you want to do this work in one pass lazily without vector-izing, you can also use a transducer:
(sequence
 (comp
  (map inc)
  (map-indexed (fn [j x] (if (= 0 j) 666 x))))
 [1 2 3])
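Evaluating this should likewise yield (666 3 4).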
If your use case is only to modify the first item of a lazy sequence, you can do something simpler while preserving laziness:
(concat [666] (rest s))
Update re: comment on optimization: leetwinski's assoc-at function is ~8ms faster when updating the 500,000th element in a 1,000,000 element lazy sequence, so you should use his answer if you're looking to squeeze every bit of performance out of an inherently inefficient operation:
(def big-lazy (range 1e6))
(crit/bench
 (last (assoc-at big-lazy 500000 666)))
Evaluation count : 1080 in 60 samples of 18 calls.
Execution time mean : 51.567317 ms
Execution time std-deviation : 4.947684 ms
Execution time lower quantile : 47.038877 ms ( 2.5%)
Execution time upper quantile : 65.604790 ms (97.5%)
Overhead used : 1.662189 ns
Found 6 outliers in 60 samples (10.0000 %)
low-severe 4 (6.6667 %)
low-mild 2 (3.3333 %)
Variance from outliers : 68.6139 % Variance is severely inflated by outliers
=> nil
(crit/bench
 (last (assoc-seq big-lazy 500000 666)))
Evaluation count : 1140 in 60 samples of 19 calls.
Execution time mean : 59.553335 ms
Execution time std-deviation : 4.507430 ms
Execution time lower quantile : 54.450115 ms ( 2.5%)
Execution time upper quantile : 69.288104 ms (97.5%)
Overhead used : 1.662189 ns
Found 4 outliers in 60 samples (6.6667 %)
low-severe 4 (6.6667 %)
Variance from outliers : 56.7865 % Variance is severely inflated by outliers
=> nil
The assoc-at version is 2-3x faster when updating the first item in a large lazy sequence, but it's no faster than (last (concat [666] (rest big-lazy))).
I would probably go with something generic like this, if this functionality is really needed (which I strongly doubt):
(defn assoc-at [data i item]
  (if (associative? data)
    (assoc data i item)
    (if-not (neg? i)
      (letfn [(assoc-lazy [i data]
                (cond (zero? i) (cons item (rest data))
                      (empty? data) data
                      :else (lazy-seq (cons (first data)
                                            (assoc-lazy (dec i) (rest data))))))]
        (assoc-lazy i data))
      data)))
user> (assoc-at {:a 10} :b 20)
;; {:a 10, :b 20}
user> (assoc-at [1 2 3 4] 3 101)
;; [1 2 3 101]
user> (assoc-at (map inc [1 2 3 4]) 2 123)
;; (2 3 123 5)
Another way is to use split-at:
(defn assoc-at [data i item]
  (if (neg? i)
    data
    (let [[l r] (split-at i data)]
      (if (seq r)
        (concat l [item] (rest r))
        data))))
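For example (a sketch; note that for non-associative inputs the result is a sequence):
(assoc-at '(1 2 3 4) 2 99)
;=> (1 2 99 4)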
Notice that both of these functions short-circuit the collection traversal, which the mapping approach doesn't. Here's a quick and dirty benchmark:
(defn massoc-at [data i item]
  (if (neg? i)
    data
    (map-indexed (fn [j x] (if (== i j) item x)) data)))
(time (last (assoc-at (range 10000000) 0 1000)))
;;=> "Elapsed time: 747.921032 msecs"
9999999
(time (last (massoc-at (range 10000000) 0 1000)))
;;=> "Elapsed time: 1525.446511 msecs"
9999999
Given vector size n and window size k, how can I efficiently calculate the sliding window minimum in O(n log k) time? I.e., for vector [1 4 3 2 5 4 2] and window size 2, the output would be [1 3 2 2 4 2].
Obviously I can do it using partition and map, but that's O(n*k) time.
I think I need to keep track of the minimum in a sorted map, and update the map when it's outside the window. But although I can get the min of a sorted map in log time, searching through the map to find any indexes that have expired is not log time.
Thanks.
You can solve this with a priority queue based on Clojure's priority map data structure. We index the values in the window with their position in the vector.
The value of its first entry is the window minimum.
We add the new entry and get rid of the oldest one by key/vector-position.
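To illustrate the data structure itself (a small REPL sketch, assuming the org.clojure/data.priority-map dependency):
(require '[clojure.data.priority-map :refer [priority-map]])

(def w (priority-map 0 4, 1 2, 2 7)) ; key = vector position, value = element
(first w)                            ;=> [1 2], the entry with the smallest value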
A possible implementation is
(use '[clojure.data.priority-map :only [priority-map]])
(defn windowed-min [k coll]
  (let [numbered (map-indexed list coll)
        [head tail] (split-at k numbered)
        init-win (into (priority-map) head)
        win-seq (reductions
                 (fn [w [i n]]
                   (-> w (dissoc (- i k)) (assoc i n)))
                 init-win
                 tail)]
    (map (comp val first) win-seq)))
For example,
(windowed-min 2 [1 4 3 2 5 4 2])
=> (1 3 2 2 4 2)
The solution is developed lazily, so it can be applied to an endless sequence.
After the initialisation, which is O(k), the function computes each element in the sequence in O(log k) time, as noted here.
You can solve this in linear time, O(n) rather than O(n log k), as described in 1. http://articles.leetcode.com/sliding-window-maximum/ (easily changed from finding the max to finding the min) and 2. https://people.cs.uct.ac.za/~ksmith/articles/sliding_window_minimum.html
The approach needs a double-ended queue to manage previous values, which takes O(1) time for most queue operations (push/pop/peek, etc.) rather than the O(log k) of a priority queue (i.e. priority map). I used the double-ended queue from https://github.com/pjstadig/deque-clojure
Main code implementing the approach of the 1st reference above (for min rather than max):
(defn windowed-min-queue [w a]
  (let [deque-init (fn []
                     (reduce (fn [dq i]
                               (dq-push-back i (prune-back a i dq)))
                             empty-deque
                             (range w)))
        process-min (fn [dq]
                      (reductions (fn [q i]
                                    (->> q
                                         (prune-back a i)
                                         (prune-front i w)
                                         (dq-push-back i)))
                                  dq
                                  (range w (count a))))
        init (deque-init)
        result (process-min init)]
    (map #(nth a (dq-front %)) result)))
Comparing the speed of this method to the other solution that uses a priority map, we have (note: I liked the other solution as well, since it's simpler):
; Test using Random arrays of data
(def N 1000000)
(def a (into [] (take N (repeatedly #(rand-int 50)))))
(def b (into [] (take N (repeatedly #(rand-int 50)))))
(def w 1024)
; Solution based upon the double-ended queue
(time (doall (windowed-min-queue w a)))
;=> "Elapsed time: 1820.526521 msecs"
; Solution based upon the priority map (see the other solution, which is also great since it's simpler)
(time (doall (windowed-min w b)))
;=> "Elapsed time: 8290.671121 msecs"
That is over 4x faster, which is great considering the priority map is written in Java while the double-ended queue code is pure Clojure (see https://github.com/pjstadig/deque-clojure).
Including the other wrappers/utilities used on the double-ended queue for reference.
;; These assume pjstadig/deque-clojure is on the classpath, with its protocol
;; namespace aliased as `proto` and `empty-deque` referred in.
(defn dq-push-front [e dq]
  (conj dq e))

(defn dq-push-back [e dq]
  (proto/inject dq e))

(defn dq-front [dq]
  (first dq))

(defn dq-back [dq]
  (proto/last dq))

(defn dq-pop-front [dq]
  (pop dq))

(defn dq-pop-back [dq]
  (proto/eject dq))

(defn deque-empty? [dq]
  (identical? empty-deque dq))

(defn prune-back [a i dq]
  (cond
    (deque-empty? dq) dq
    (< (nth a i) (nth a (dq-back dq))) (recur a i (dq-pop-back dq))
    :else dq))

(defn prune-front [i w dq]
  (cond
    (deque-empty? dq) dq
    (<= (dq-front dq) (- i w)) (recur i w (dq-pop-front dq))
    :else dq))
My solution uses two auxiliary maps to achieve fast performance. I map the keys to their values and also store the values with their occurrence counts in a sorted map. Upon each move of the window, I update the maps and get the minimum of the sorted map, all in log time.
The downside is the code is a lot uglier, not lazy, and not idiomatic. The upside is that it outperforms the priority-map solution by about 2x. I think a lot of that, though, can be blamed on the laziness of the solution above.
(defn- init-aux-maps [w v]
  (let [sv (subvec v 0 w)
        km (->> sv (map-indexed vector) (into (sorted-map)))
        vm (->> sv frequencies (into (sorted-map)))]
    [km vm]))

(defn- update-aux-maps [[km vm] j x]
  (let [[ai av] (first km)
        km (-> km (dissoc ai) (assoc j x))
        vm (if (= (vm av) 1) (dissoc vm av) (update vm av dec))
        vm (if (nil? (get vm x)) (assoc vm x 1) (update vm x inc))]
    [km vm]))

(defn- get-minimum [[_ vm]] (ffirst vm))

(defn sliding-minimum [w v]
  (loop [i 0, j w, am (init-aux-maps w v), acc []]
    (let [acc (conj acc (get-minimum am))]
      (if (< j (count v))
        (recur (inc i) (inc j) (update-aux-maps am j (v j)) acc)
        acc))))
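A quick check against the example from the question (a sketch):
(sliding-minimum 2 [1 4 3 2 5 4 2])
;=> [1 3 2 2 4 2]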
I am attempting to copy about 12 million documents in an AWS S3 bucket to give them new names. The names previously had a prefix and will now all be document name only. So a/b/123 once renamed will be 123. The last segment is a uuid so there will not be any naming collisions.
This process has been partially completed so some have been copied and some still need to be. I have a text file that contains all of the document names. I would like an efficient way to determine which documents have not yet been moved.
I have some naive code that shows what I would like to accomplish.
(def doc-names ["o/123" "o/234" "t/543" "t/678" "123" "234" "678"])
(defn still-need-copied [doc-names]
  (let [last-segment (fn [doc-name]
                       (last (clojure.string/split doc-name #"/")))
        by-position (group-by #(.contains % "/") doc-names)
        top (set (get by-position false))
        nested (set (map #(last-segment %) (get by-position true)))
        needs-copied (clojure.set/difference nested top)]
    (filter #(contains? needs-copied (last-segment %)) doc-names)))
I would propose this solution:
(defn still-need-copied [doc-names]
  (->> doc-names
       (group-by #(last (clojure.string/split % #"/")))
       (keep #(when (== 1 (count (val %))) (first (val %))))))
First you group all the items by the last segment of the split string, getting this for your input:
{"123" ["o/123" "123"],
"234" ["o/234" "234"],
"543" ["t/543"],
"678" ["t/678" "678"]}
and then you just need to select all the values of the map having length 1, and take their first elements.
I would say it is way more readable than your variant, and it also seems to be more performant.
Here's why: as far as I can understand, your code probably has a complexity of
N (grouping into a map with just 2 keys) +
N log(N) (creation and filling of the top set) +
N log(N) (creation and filling of the nested set) +
N log(N) (set difference) +
N log(N) (filtering + searching each element in the needs-copied set)
= 4N log(N) + N
whereas my variant would probably have a complexity of
N log(N) (grouping values into a map with a large number of keys) +
N (keeping the needed values)
= N + N log(N)
And though asymptotically they are both O(N log N), in practice mine will probably complete faster.
P.S. I'm not an expert in complexity theory; this is just a very rough estimate.
Here is a little test:
(defn generate-data [len]
  (doall (mapcat
          #(let [n (rand-int 2)]
             (if (zero? n)
               [(str "aaa/" %) (str %)]
               [(str %)]))
          (range len))))
(defn still-need-copied [doc-names]
  (let [last-segment (fn [doc-name]
                       (last (clojure.string/split doc-name #"/")))
        by-position (group-by #(.contains % "/") doc-names)
        top (set (get by-position false))
        nested (set (map #(last-segment %) (get by-position true)))
        needs-copied (clojure.set/difference nested top)]
    (filter #(contains? needs-copied (last-segment %)) doc-names)))

(defn still-need-copied-2 [doc-names]
  (->> doc-names
       (group-by #(last (clojure.string/split % #"/")))
       (keep #(when (== 1 (count (val %))) (first (val %))))))
(def data-100k (generate-data 100000))
(def data-1m (generate-data 1000000))
user> (let [_ (time (dorun (still-need-copied data-100k)))
            _ (time (dorun (still-need-copied-2 data-100k)))
            _ (time (dorun (still-need-copied data-1m)))
            _ (time (dorun (still-need-copied-2 data-1m)))])
"Elapsed time: 714.929641 msecs"
"Elapsed time: 243.918466 msecs"
"Elapsed time: 7094.333425 msecs"
"Elapsed time: 2329.75247 msecs"
So it is ~3 times faster, just as I predicted.
Update:
I found one solution which is not so elegant, but seems to work.
You said you're using iota, so I've generated a huge file with ~15 million lines (with the aforementioned generate-data fn).
Then I decided to sort it by the part after the slash (so that "123" and "aaa/123" end up next to each other):
(defn last-part [s] (last (clojure.string/split s #"/")))

(def sorted (sort-by last-part (iota/seq "my/file/path")))
It completed surprisingly fast. So the last thing I had to do was make a simple loop checking, for every item, whether there is an item with the same last part nearby:
(def res (loop [res []
                [item1 & [item2 & rest :as tail] :as coll] sorted]
           (cond (empty? coll) res
                 (empty? tail) (conj res item1)
                 (= (last-part item1) (last-part item2)) (recur res rest)
                 :else (recur (conj res item1) tail))))
It also completed without any visible difficulties, so I got the needed result without any map/reduce framework.
I also think that if you don't keep the sorted collection in a Var, you will probably save memory by avoiding retention of the huge collection's head:
(def res (loop [res []
                [item1 & [item2 & rest :as tail] :as coll]
                (sort-by last-part (iota/seq "my/file/path"))]
           (cond (empty? coll) res
                 (empty? tail) (conj res item1)
                 (= (last-part item1) (last-part item2)) (recur res rest)
                 :else (recur (conj res item1) tail))))
What if map and doseq had a baby? I'm trying to write a function or macro like Common Lisp's mapc, but in Clojure. This does essentially what map does, but only for side-effects, so it doesn't need to generate a sequence of results, and wouldn't be lazy. I know that one can iterate over a single sequence using doseq, but map can iterate over multiple sequences, applying a function to each element in turn of all of the sequences. I also know that one can wrap map in dorun. (Note: This question has been extensively edited after many comments and a very thorough answer. The original question focused on macros, but those macro issues turned out to be peripheral.)
This is fast (according to criterium):
(defn domap2
  [f coll]
  (dotimes [i (count coll)]
    (f (nth coll i))))
but it only accepts one collection. This accepts arbitrary collections:
(defn domap3
  [f & colls]
  (dotimes [i (apply min (map count colls))]
    (apply f (map #(nth % i) colls))))
but it's very slow by comparison. I could also write a version like the first, but with different parameter cases [f c1 c2], [f c1 c2 c3], etc., but in the end, I'll need a case that handles arbitrary numbers of collections, like the last example, which is simpler anyway. I've tried many other solutions as well.
Since the second example is very much like the first except for the use of apply and the map inside the loop, I suspect that getting rid of them would speed things up a lot. I have tried to do this by writing domap2 as a macro, but the way that the catch-all variable after & is handled keeps tripping me up, as illustrated above.
Other examples (out of 15 or 20 different versions), benchmark code, and times on a Macbook Pro that's a few years old (full source here):
(defn domap1
  [f coll]
  (doseq [e coll]
    (f e)))

(defn domap7
  [f coll]
  (dorun (map f coll)))

(defn domap18
  [f & colls]
  (dorun (apply map f colls)))

(defn domap15
  [f coll]
  (when (seq coll)
    (f (first coll))
    (recur f (rest coll))))

(defn domap17
  [f & colls]
  (let [argvecs (apply (partial map vector) colls)] ; seq of ntuples of interleaved vals
    (doseq [args argvecs]
      (apply f args))))
I'm working on an application that uses core.matrix matrices and vectors, but feel free to substitute your own side-effecting functions below.
(ns tst
  (:use criterium.core
        [clojure.core.matrix :as mx]))

(def howmany 1000)
(def a-coll (vec (range howmany)))
(def maskvec (zero-vector :vectorz howmany))

(defn unmaskit!
  [idx]
  (mx/mset! maskvec idx 1.0)) ; sets element idx of maskvec to 1.0

(defn runbench
  [domapfn label]
  (print (str "\n" label ":\n"))
  (bench (def _ (domapfn unmaskit! a-coll))))
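Each variant is then benchmarked along these lines (a sketch):
(runbench domap2 "domap2")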
Mean execution times according to Criterium, in microseconds:
domap1: 12.317551 [doseq]
domap2: 19.065317 [dotimes]
domap3: 265.983779 [dotimes with apply, map]
domap7: 53.263230 [map with dorun]
domap18: 54.456801 [map with dorun, multiple collections]
domap15: 32.034993 [recur]
domap17: 95.259984 [doseq, multiple collections interleaved using map]
EDIT: It may be that dorun+map is the best way to implement domap for multiple large lazy sequence arguments, but doseq is still king when it comes to single lazy sequences. Performing the same operation as unmask! above, but running the index through (mod idx 1000), and iterating over (range 100000000), doseq is about twice as fast as dorun+map in my tests (i.e. (def domap25 (comp dorun map))).
You don't need a macro, and I don't see why a macro would be helpful here.
user> (defn do-map [f & lists] (apply mapv f lists) nil)
#'user/do-map
user> (do-map (comp println +) (range 2 6) (range 8 11) (range 22 40))
32
35
38
nil
note do-map here is eager (thanks to mapv) and only executes for side effects
Macros can use varargs lists, as the (useless!) macro version of do-map demonstrates:
user> (defmacro do-map-macro [f & lists] `(do (mapv ~f ~@lists) nil))
#'user/do-map-macro
user> (do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40))
32
35
38
nil
user> (macroexpand-1 '(do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40)))
(do (clojure.core/mapv (comp println +) (range 2 6) (range 8 11) (range 22 40)) nil)
Addendum:
addressing the efficiency / garbage-creation concerns:
note that below I truncate the output of the criterium bench function, for conciseness reasons:
(defn do-map-loop
  [f & lists]
  (loop [heads lists]
    (when (every? seq heads)
      (apply f (map first heads))
      (recur (map rest heads)))))
user> (crit/bench (with-out-str (do-map-loop (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 11.367804 µs
...
This looks promising because it doesn't create a data structure that we aren't using anyway (unlike mapv above). But it turns out it is slower than the previous versions (maybe because of the two map calls per iteration?):
user> (crit/bench (with-out-str (do-map-macro (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 7.427182 µs
...
user> (crit/bench (with-out-str (do-map (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 8.355587 µs
...
Since the loop still wasn't faster, let's try a version which specializes on arity, so that we don't need to call map twice on every iteration:
(defn do-map-loop-3
  [f a b c]
  (loop [[a & as] a
         [b & bs] b
         [c & cs] c]
    (when (and a b c)
      (f a b c)
      (recur as bs cs))))
Remarkably, though this is faster, it is still slower than the version that just used mapv:
user> (crit/bench (with-out-str (do-map-loop-3 (comp println +) (range 2 6) (range 8 11) (range 22 40))))
...
Execution time mean : 9.450108 µs
...
Next I wondered if the size of the input was a factor. With larger inputs...
user> (def test-input (repeatedly 3 #(range (rand-int 100) (rand-int 1000))))
#'user/test-input
user> (map count test-input)
(475 531 511)
user> (crit/bench (with-out-str (apply do-map-loop-3 (comp println +) test-input)))
...
Execution time mean : 1.005073 ms
...
user> (crit/bench (with-out-str (apply do-map (comp println +) test-input)))
...
Execution time mean : 756.955238 µs
...
Finally, for completeness, the timing of do-map-loop (which as expected is slightly slower than do-map-loop-3)
user> (crit/bench (with-out-str (apply do-map-loop (comp println +) test-input)))
...
Execution time mean : 1.553932 ms
As we see, even with larger input sizes, mapv is faster.
(I should note for completeness here that map is slightly faster than mapv, but not by a large degree).
I've been working on solving Project Euler problems in Clojure to get better, and I've already run into prime number generation a couple of times. My problem is that it is just taking way too long. I was hoping someone could help me find an efficient way to do this in a Clojure-y way.
When I first did this, I brute-forced it. That was easy to do. But calculating 10001 prime numbers took 2 minutes this way on a Xeon 2.33GHz, too long for the rules, and too long in general. Here was the algorithm:
(defn next-prime-slow
  "Find the next prime number, checking against our already existing list"
  ([sofar guess]
   (if (not-any? #(zero? (mod guess %)) sofar)
     guess                        ; Then we have a prime
     (recur sofar (+ guess 2))))) ; Try again

(defn find-primes-slow
  "Finds prime numbers, slowly"
  ([]
   (find-primes-slow 10001 [2 3])) ; How many we need, initial prime seeds
  ([needed sofar]
   (if (<= needed (count sofar))
     sofar ; Found enough, we're done
     (recur needed (concat sofar [(next-prime-slow sofar (last sofar))])))))
By replacing next-prime-slow with a newer routine that took some additional rules into account (like the 6n +/- 1 property) I was able to speed things up to about 70 seconds.
Next I tried making a sieve of Eratosthenes in pure Clojure. I don't think I got all the bugs out, but I gave up because it was simply way too slow (even worse than the above, I think).
(defn clean-sieve
  "Clean the sieve of what we know isn't prime, based on the seeds"
  [seeds-left sieve]
  (if (zero? (count seeds-left))
    sieve ; Nothing left to filter the list against
    (recur
     (rest seeds-left) ; The numbers we haven't checked against
     (filter #(> (mod % (first seeds-left)) 0) sieve)))) ; Filter out multiples

(defn self-clean-sieve ; This seems to be REALLY slow
  "Remove the stuff in the sieve that isn't prime based on itself"
  ([sieve]
   (self-clean-sieve (rest sieve) (take 1 sieve)))
  ([sieve clean]
   (if (zero? (count sieve))
     clean
     (let [cleaned (filter #(> (mod % (last clean)) 0) sieve)]
       (recur (rest cleaned) (into clean [(first cleaned)]))))))

(defn find-primes
  "Finds prime numbers, hopefully faster"
  ([]
   (find-primes 10001 [2]))
  ([needed seeds]
   (if (>= (count seeds) needed)
     seeds ; We have enough
     (recur ; Recalculate
      needed
      (into
       seeds ; Stuff we've already found
       (let [start (last seeds)
             end-range (+ start 150000)] ; NOTE HERE
         (reverse
          (self-clean-sieve
           (clean-sieve seeds (range (inc start) end-range))))))))))
This is bad. It also causes stack overflows if the number 150000 is made smaller, despite the fact that I'm using recur. That may be my fault.
Next I tried a sieve, using Java methods on a Java ArrayList. That took quite a bit of time, and memory.
My latest attempt is a sieve using a Clojure hash-map, inserting all the numbers in the sieve then dissoc'ing numbers that aren't prime. At the end, it takes the key list, which are the prime numbers it found. It takes about 10-12 seconds to find 10000 prime numbers. I'm not sure it's fully debugged yet. It's recursive too (using recur and loop), since I'm trying to be Lispy.
So with these kind of problems, problem 10 (sum up all primes under 2000000) is killing me. My fastest code came up with the right answer, but it took 105 seconds to do it, and needed quite a bit of memory (I gave it 512 MB just so I wouldn't have to fuss with it). My other algorithms take so long I always ended up stopping them first.
I could use a sieve to calculate that many primes in Java or C quite fast and without using so much memory. I know I must be missing something in my Clojure/Lisp style that's causing the problem.
Is there something I'm doing really wrong? Is Clojure just kind of slow with large sequences? Reading some of the Project Euler discussions, people have calculated the first 10000 primes in other Lisps in under 100 milliseconds. I realize the JVM may slow things down and Clojure is relatively young, but I wouldn't expect a 100x difference.
Can someone enlighten me on a fast way to calculate prime numbers in Clojure?
Here's another approach that celebrates Clojure's Java interop. This takes 374ms on a 2.4 GHz Core 2 Duo (running single-threaded). I let the efficient Miller-Rabin implementation in Java's BigInteger#isProbablePrime deal with the primality check.
(def certainty 5)

(defn prime? [n]
  (.isProbablePrime (BigInteger/valueOf n) certainty))

(concat [2]
        (take 10001
              (filter prime?
                      (take-nth 2
                                (range 1 Integer/MAX_VALUE)))))
The Miller-Rabin certainty of 5 is probably not very good for numbers much larger than this. A certainty of 5 means we're 96.875% certain a reported number is prime (1 - 0.5^certainty).
I realize this is a very old question, but I recently ended up looking for the same and the links here weren't what I was looking for (restricted to a functional style as much as possible, lazily generating every prime I want).
I stumbled upon a nice F# implementation, so all credit is its author's. I merely ported it to Clojure:
(defn gen-primes
  "Generates an infinite, lazy sequence of prime numbers"
  []
  (letfn [(reinsert [table x prime]
            (update-in table [(+ prime x)] conj prime))
          (primes-step [table d]
            (if-let [factors (get table d)]
              (recur (reduce #(reinsert %1 d %2) (dissoc table d) factors)
                     (inc d))
              (lazy-seq (cons d (primes-step (assoc table (* d d) (list d))
                                             (inc d))))))]
    (primes-step {} 2)))
Usage is simply
(take 5 (gen-primes))
Very late to the party, but I'll throw in an example, using Java BitSets:
(defn sieve
  "Returns a BitSet with bits set for each prime up to n"
  [n]
  (let [bs (new java.util.BitSet n)]
    (.flip bs 2 n)
    (doseq [i (range 4 n 2)] (.clear bs i))
    (doseq [p (range 3 (Math/sqrt n))]
      (if (.get bs p)
        (doseq [q (range (* p p) n (* 2 p))] (.clear bs q))))
    bs))
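To pull the primes back out of the BitSet (a sketch):
(let [bs (sieve 100)]
  (filter #(.get bs %) (range 100)))
;=> (2 3 5 7 11 ... 97)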
Running this on a 2014 Macbook Pro (2.3GHz Core i7), I get:
user=> (time (do (sieve 1e6) nil))
"Elapsed time: 64.936 msecs"
See the last example here:
http://clojuredocs.org/clojure_core/clojure.core/lazy-seq
;; An example combining lazy sequences with higher order functions
;; Generate prime numbers using Eratosthenes Sieve
;; See http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
;; Note that the starting set of sieved numbers should be
;; the set of integers starting with 2 i.e., (iterate inc 2)
(defn sieve [s]
  (cons (first s)
        (lazy-seq (sieve (filter #(not= 0 (mod % (first s)))
                                 (rest s))))))
user=> (take 20 (sieve (iterate inc 2)))
(2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71)
Here's a nice and simple implementation:
http://clj-me.blogspot.com/2008/06/primes.html
... but it is written for some pre-1.0 version of Clojure. See lazy_seqs in Clojure Contrib for one that works with the current version of the language.
(defn sieve
  [[p & rst]]
  ;; make sure the stack size is sufficiently large!
  (lazy-seq (cons p (sieve (remove #(= 0 (mod % p)) rst)))))

(def primes (sieve (iterate inc 2)))
With a 10M stack size, I get the 1001st prime in ~33 seconds on a 2.1 GHz MacBook.
So I've just started with Clojure, and yeah, this comes up a lot on Project Euler doesn't it? I wrote a pretty fast trial division prime algorithm, but it doesn't really scale too far before each run of divisions becomes prohibitively slow.
So I started again, this time using the sieve method:
(defn clense
  "Walks through the sieve and nils out multiples of step"
  [primes step i]
  (if (<= i (count primes))
    (recur
     (assoc! primes i nil)
     step
     (+ i step))
    primes))

(defn sieve-step
  "Only works if i is >= 3"
  [primes i]
  (if (< i (count primes))
    (recur
     (if (nil? (primes i)) primes (clense primes (* 2 i) (* i i)))
     (+ 2 i))
    primes))

(defn prime-sieve
  "Returns a lazy list of all primes smaller than x"
  [x]
  (drop 2
        (filter (complement nil?)
                (persistent! (sieve-step
                              (clense (transient (vec (range x))) 2 4) 3)))))
Usage and speed:
user=> (time (do (prime-sieve 1E6) nil))
"Elapsed time: 930.881 msecs"
I'm pretty happy with the speed: it's running out of a REPL running on a 2009 MBP. It's mostly fast because I completely eschew idiomatic Clojure and instead loop around like a monkey. It's also 4X faster because I'm using a transient vector to work on the sieve instead of staying completely immutable.
Edit: After a couple of suggestions / bug fixes from Will Ness it now runs a whole lot faster.
Here's a simple sieve in Scheme:
http://telegraphics.com.au/svn/puzzles/trunk/programming-in-scheme/primes-up-to.scm
Here's a run for primes up to 10,000:
#;1> (include "primes-up-to.scm")
; including primes-up-to.scm ...
#;2> ,t (primes-up-to 10000)
0.238s CPU time, 0.062s GC time (major), 180013 mutations, 130/4758 GCs (major/minor)
(2 3 5 7 11 13...
Here is a Clojure solution. i is the current number being considered and p is a list of all prime numbers found so far. If division by some prime numbers has a remainder of zero, the number i is not a prime number and recursion occurs with the next number. Otherwise the prime number is added to p in the next recursion (as well as continuing with the next number).
(defn primes [i p]
  (if (some #(zero? (mod i %)) p)
    (recur (inc i) p)
    (cons i (lazy-seq (primes (inc i) (conj p i))))))
(time (do (doall (take 5001 (primes 2 []))) nil))
; Elapsed time: 2004.75587 msecs
(time (do (doall (take 10001 (primes 2 []))) nil))
; Elapsed time: 7700.675118 msecs
Update:
Here is a much slicker solution based on this answer above.
Basically the list of integers starting with two is filtered lazily. Filtering is performed by only accepting a number i if there is no prime number dividing the number with remainder of zero. All prime numbers are tried where the square of the prime number is less or equal to i.
Note that primes is used recursively but Clojure manages to prevent endless recursion. Also note that the lazy sequence primes caches results (that's why the performance results are a bit counter intuitive at first sight).
(def primes
  (lazy-seq
   (filter (fn [i] (not-any? #(zero? (rem i %))
                             (take-while #(<= (* % %) i) primes)))
           (drop 2 (range)))))
(time (first (drop 10000 primes)))
; Elapsed time: 542.204211 msecs
(time (first (drop 20000 primes)))
; Elapsed time: 786.667644 msecs
(time (first (drop 40000 primes)))
; Elapsed time: 1780.15807 msecs
(time (first (drop 40000 primes)))
; Elapsed time: 8.415643 msecs
Based on Will's comment, here is my take on postponed-primes:
(defn postponed-primes-recursive
  ([]
   (concat (list 2 3 5 7)
           (lazy-seq (postponed-primes-recursive
                      {}
                      3
                      9
                      (rest (rest (postponed-primes-recursive)))
                      9))))
  ([D p q ps c]
   (letfn [(add-composites
             [D x s]
             (loop [a x]
               (if (contains? D a)
                 (recur (+ a s))
                 (persistent! (assoc! (transient D) a s)))))]
     (loop [D D
            p p
            q q
            ps ps
            c c]
       (if (not (contains? D c))
         (if (< c q)
           (cons c (lazy-seq (postponed-primes-recursive D p q ps (+ 2 c))))
           (recur (add-composites D
                                  (+ c (* 2 p))
                                  (* 2 p))
                  (first ps)
                  (* (first ps) (first ps))
                  (rest ps)
                  (+ c 2)))
         (let [s (get D c)]
           (recur (add-composites
                   (persistent! (dissoc! (transient D) c))
                   (+ c s)
                   s)
                  p
                  q
                  ps
                  (+ c 2))))))))
Initial submission for comparison:
Here is my attempt to port this prime number generator from Python to Clojure. The below returns an infinite lazy sequence.
(defn primes
  []
  (letfn [(prime-help
            [foo bar]
            (loop [D foo
                   q bar]
              (if (nil? (get D q))
                (cons q (lazy-seq
                         (prime-help
                          (persistent! (assoc! (transient D) (* q q) (list q)))
                          (inc q))))
                (let [factors-of-q (get D q)
                      key-val (interleave
                               (map #(+ % q) factors-of-q)
                               (map #(cons % (get D (+ % q) (list)))
                                    factors-of-q))]
                  (recur (persistent!
                          (dissoc!
                           (apply assoc! (transient D) key-val)
                           q))
                         (inc q))))))]
    (prime-help {} 2)))
Usage:
user=> (first (primes))
2
user=> (second (primes))
3
user=> (nth (primes) 100)
547
user=> (take 5 (primes))
(2 3 5 7 11)
user=> (time (nth (primes) 10000))
"Elapsed time: 409.052221 msecs"
104743
edit:
Performance comparison, where postponed-primes uses a queue of primes seen so far rather than a recursive call to postponed-primes:
user=> (def counts (list 200000 400000 600000 800000))
#'user/counts
user=> (map #(time (nth (postponed-primes) %)) counts)
("Elapsed time: 1822.882 msecs"
"Elapsed time: 3985.299 msecs"
"Elapsed time: 6916.98 msecs"
"Elapsed time: 8710.791 msecs"
2750161 5800139 8960467 12195263)
user=> (map #(time (nth (postponed-primes-recursive) %)) counts)
("Elapsed time: 1776.843 msecs"
"Elapsed time: 3874.125 msecs"
"Elapsed time: 6092.79 msecs"
"Elapsed time: 8453.017 msecs"
2750161 5800139 8960467 12195263)
Idiomatic, and not too bad
(def primes
  (cons 1 (lazy-seq
           (filter (fn [i]
                     (not-any? (fn [p] (zero? (rem i p)))
                               (take-while #(<= % (Math/sqrt i))
                                           (rest primes))))
                   (drop 2 (range))))))
=> #'user/primes
(first (time (drop 10000 primes)))
"Elapsed time: 0.023135 msecs"
=> 104729
From: http://steloflute.tistory.com/entry/Clojure-%ED%94%84%EB%A1%9C%EA%B7%B8%EB%9E%A8-%EC%B5%9C%EC%A0%81%ED%99%94
Using Java array
(defmacro loopwhile [init-symbol init whilep step & body]
  `(loop [~init-symbol ~init]
     (when ~whilep ~@body (recur (+ ~init-symbol ~step)))))

(defn primesUnderb [limit]
  (let [p (boolean-array limit true)]
    (loopwhile i 2 (< i (Math/sqrt limit)) 1
               (when (aget p i)
                 (loopwhile j (* i 2) (< j limit) i (aset p j false))))
    (filter #(aget p %) (range 2 limit))))
Usage and speed:
user=> (time (def p (primesUnderb 1e6)))
"Elapsed time: 104.065891 msecs"
After coming to this thread and searching for a faster alternative to those already here, I am surprised nobody linked to the following article by Christophe Grand:
(defn primes3 [max]
  (let [enqueue (fn [sieve n factor]
                  (let [m (+ n (+ factor factor))]
                    (if (sieve m)
                      (recur sieve m factor)
                      (assoc sieve m factor))))
        next-sieve (fn [sieve candidate]
                     (if-let [factor (sieve candidate)]
                       (-> sieve
                           (dissoc candidate)
                           (enqueue candidate factor))
                       (enqueue sieve candidate candidate)))]
    (cons 2 (vals (reduce next-sieve {} (range 3 max 2))))))
As well as a lazy version:
(defn lazy-primes3 []
  (letfn [(enqueue [sieve n step]
            (let [m (+ n step)]
              (if (sieve m)
                (recur sieve m step)
                (assoc sieve m step))))
          (next-sieve [sieve candidate]
            (if-let [step (sieve candidate)]
              (-> sieve
                  (dissoc candidate)
                  (enqueue candidate step))
              (enqueue sieve candidate (+ candidate candidate))))
          (next-primes [sieve candidate]
            (if (sieve candidate)
              (recur (next-sieve sieve candidate) (+ candidate 2))
              (cons candidate
                    (lazy-seq (next-primes (next-sieve sieve candidate)
                                           (+ candidate 2))))))]
    (cons 2 (lazy-seq (next-primes {} 3)))))
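Usage (a quick sketch):
(take 10 (lazy-primes3))
;=> (2 3 5 7 11 13 17 19 23 29)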
Plenty of answers already, but I have an alternative solution which generates an infinite sequence of primes. I was also interested in benchmarking a few solutions.
First, some Java interop for reference:
(defn prime-fn-1 [accuracy]
  (cons 2
        (for [i (range)
              :let [prime-candidate (-> i (* 2) (+ 3))]
              :when (.isProbablePrime (BigInteger/valueOf prime-candidate) accuracy)]
          prime-candidate)))
Benjamin's answer (https://stackoverflow.com/a/7625207/3731823) is primes-fn-2; nha's answer (https://stackoverflow.com/a/36432061/3731823) is primes-fn-3.
My implementation is primes-fn-4:
(defn primes-fn-4 []
  (let [primes-with-duplicates
        (->> (for [i (range)] (-> i (* 2) (+ 5))) ; 5, 7, 9, 11, ...
             (reductions
              (fn [known-primes candidate]
                (if (->> known-primes
                         (take-while #(<= (* % %) candidate))
                         (not-any? #(-> candidate (mod %) zero?)))
                  (conj known-primes candidate)
                  known-primes))
              [3])      ; Our initial list of known odd primes
             (cons [2]) ; Put in the non-odd one
             (map (comp first rseq)))] ; O(1) lookup of the last element of the vec "known-primes"
    ;; Ugh, ugly de-duplication :(
    (->> (map #(when (not= % %2) %) primes-with-duplicates (rest primes-with-duplicates))
         (remove nil?))))
Reported numbers (time in milliseconds to count first N primes) are the fastest from the run of 5, no JVM restarts between experiments so your mileage may vary:
                  1e6   3e6
(primes-fn-1 5)    808  2664
(primes-fn-1 10)   952  3198
(primes-fn-1 20)  1440  4742
(primes-fn-1 30)  1881  6030
(primes-fn-2)     1868  5922
(primes-fn-3)      489  1755  <-- WOW!
(primes-fn-4)     2024  8185
If you don't need a lazy solution and you just want a sequence of primes below a certain limit, the straightforward implementation of the Sieve of Eratosthenes is pretty fast. Here's my version using transients:
(defn classic-sieve
  "Returns sequence of primes less than N"
  [n]
  (loop [nums (transient (vec (range n))) i 2]
    (cond
      (> (* i i) n) (remove nil? (nnext (persistent! nums)))
      (nums i) (recur (loop [nums nums j (* i i)]
                        (if (< j n)
                          (recur (assoc! nums j nil) (+ j i))
                          nums))
                      (inc i))
      :else (recur nums (inc i)))))
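Usage (a quick sketch):
(classic-sieve 30)
;=> (2 3 5 7 11 13 17 19 23 29)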
I just started using Clojure so I don't know if it's good but here is my solution:
(defn divides? [x i]
  (zero? (mod x i)))

(defn factors [x]
  (flatten (map #(list % (/ x %))
                (filter #(divides? x %)
                        (range 1 (inc (Math/floor (Math/sqrt x))))))))

(defn prime? [x]
  (empty? (filter #(and (divides? x %) (not= x %) (not= 1 %))
                  (factors x))))

(def primes
  (filter prime? (range 2 java.lang.Integer/MAX_VALUE)))

(defn sum-of-primes-below [n]
  (reduce + (take-while #(< % n) primes)))
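For example (a quick check, assuming the definitions above):
(sum-of-primes-below 10)
;=> 17 ; 2 + 3 + 5 + 7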