Why does wrapping this in a function take 10x longer to run? - clojure

Can anyone explain why the time jumps by an order of magnitude simply by wrapping this in a function?
user> (time (loop [n 0 t 0]
              (if (= n 10000000)
                t
                (recur (inc n) (+ t n)))))
"Elapsed time: 29.312145 msecs"
49999995000000
user> (defn tl [r] (loop [n 0 t 0]
                     (if (= n r)
                       t
                       (recur (inc n) (+ t n)))))
#<Var@54dd6004: #object[user$eval3462$tl__3463 0x7d8ba46 "user$eval3462$tl__3463@7d8ba46"]>
user> (time (tl 10000000))
"Elapsed time: 507.333844 msecs"
49999995000000
I'm curious how a simple iteration like this can be made much faster. For example, a similar iterative loop in C++ takes less than 1 ms in Release mode, or about 20 ms in Debug mode, on the same system as this Clojure code.

This happens because in the second case the passed argument is boxed. Add a type hint to fix this:
user> (defn tl [^long r]
        (loop [n 0 t 0]
          (if (= n r)
            t
            (recur (inc n) (+ t n)))))
user> (time (tl 10000000))
"Elapsed time: 20.268396 msecs"
49999995000000
UPD:
1, 2) In the first case you are working with Java primitives, which is why it's so fast. ^Integer won't work here, because it is a type hint for the boxed type java.lang.Integer (that's why it's capitalized); ^long is the type hint for the Java long primitive. For function parameters only the ^long and ^double primitive type hints are supported; besides these, you can also type-hint all kinds of primitive arrays, like ^floats, ^bytes, etc. (a short sketch follows below).
3) Since the passed argument is boxed, generic arithmetic is forced for all operations. In other words, every + and inc operation creates a new object on the heap (with primitives, the values remain on the stack).
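To illustrate the primitive array hints mentioned above, here is a minimal sketch (my own, not from the original answer; the name sum-longs is made up):
(defn sum-longs [^longs xs]   ; ^longs hints a primitive long[] parameter
  (loop [i 0 t 0]
    (if (= i (alength xs))
      t
      (recur (inc i) (+ t (aget xs i))))))
;; e.g. (sum-longs (long-array (range 10000000)))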
UPD 2:
As an alternative to type hinting, you can explicitly convert the passed argument to a primitive before the loop:
user> (defn tl [r]
        (let [r (long r)]
          (loop [n 0 t 0]
            (if (= n r)
              t
              (recur (inc n) (+ t n))))))
user> (time (tl 10000000))
"Elapsed time: 18.907161 msecs"
49999995000000

Related

Why does reducing this lazy sequence slow down this Clojure program 20x?

I have a Clojure program that returns the sum of a lazy sequence of even Fibonacci numbers below n:
(defn sum-of-even-fibonaccis-below-1 [n]
  (defn fib [a b] (lazy-seq (cons a (fib b (+ b a)))))
  (reduce + (take-while (partial >= n) (take-nth 3 (fib 0 1)))))

(time (dotimes [n 1000] (sum-of-even-fibonaccis-below-1 4000000))) ;; => "Elapsed time: 98.764 msecs"
It's not very efficient. But if I don't reduce the sequence and simply return a list of the values (0 2 8 34 144...) it can do its job 20x faster:
(defn sum-of-even-fibonaccis-below-2 [n]
  (defn fib [a b] (lazy-seq (cons a (fib b (+ b a)))))
  (take-while (partial >= n) (take-nth 3 (fib 0 1))))

(time (dotimes [n 1000] (sum-of-even-fibonaccis-below-2 4000000))) ;; => "Elapsed time: 5.145 msecs"
Why is reduce so costly to this lazy Fibonacci sequence, and how can I speed it up without abandoning idiomatic Clojure?
The difference in execution time is a result of laziness. In sum-of-even-fibonaccis-below-2 you only produce a lazy seq of Fibonacci numbers which is not realised (dotimes only calls sum-of-even-fibonaccis-below-2 to create the lazy sequence, but does not evaluate its contents). So in fact your second time expression doesn't return a list of values but a lazy seq that will produce its elements only when you ask for them.
To force realisation of the lazy sequence you can use dorun if you don't need to keep the result as a value, or doall if you want to get the realised seq back (be careful with infinite seqs).
If you measure the second case with sum-of-even-fibonaccis-below-2 wrapped in dorun you will get time comparable to sum-of-even-fibonaccis-below-1.
Results from my machine:
(time (dotimes [n 1000] (sum-of-even-fibonaccis-below-1 4000000))) ;; => "Elapsed time: 8.544193 msecs"
(time (dotimes [n 1000] (dorun (sum-of-even-fibonaccis-below-2 4000000)))) ;; => "Elapsed time: 8.012638 msecs"
I also noticed that you defined your fib function with defn inside another defn. You shouldn't do that, as defn always defines the function at the top level of your namespace. Your code should look like this:
(defn fib [a b] (lazy-seq (cons a (fib b (+ b a)))))
(defn sum-of-even-fibonaccis-below-1 [n]
(reduce + (take-while (partial >= n) (take-nth 3 (fib 0 1)))))
(defn sum-of-even-fibonaccis-below-2 [n]
(take-while (partial >= n) (take-nth 3 (fib 0 1))))
If you do want to define a locally scoped function, take a look at letfn.
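For example, a minimal sketch of a letfn variant (my own illustration; the name sum-of-even-fibonaccis-below-3 is made up):
(defn sum-of-even-fibonaccis-below-3 [n]
  (letfn [(fib [a b] (lazy-seq (cons a (fib b (+ b a)))))]   ; fib is local to this function
    (reduce + (take-while (partial >= n) (take-nth 3 (fib 0 1))))))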
Comment
You can refactor your functions - and give them better names - thus:
(defn fib [a b] (lazy-seq (cons a (fib b (+ b a)))))
(defn even-fibonaccis-below [n]
  (take-while (partial >= n) (take-nth 3 (fib 0 1))))

(defn sum-of-even-fibonaccis-below [n]
  (reduce + (even-fibonaccis-below n)))
This is easier to understand and therefore to answer.

why is this looping function so slow compared to map?

I looked at map's source code, which basically keeps creating lazy sequences. I would have thought that iterating over a collection and adding to a transient vector would be faster, but clearly it isn't. What don't I understand about Clojure's performance behavior?
;=> (time (do-with / (range 1 1000) (range 1 1000)))
;"Elapsed time: 23.1808 msecs"
;
; vs
;=> (time (doall (map #(/ %1 %2) (range 1 1000) (range 1 1000))))
;"Elapsed time: 2.604174 msecs"
(defn do-with
  [fn coll1 coll2]
  (let [end (count coll1)]
    (loop [i 0
           res (transient [])]
      (if (= i end)
        (persistent! res)
        (let [x (nth coll1 i)
              y (nth coll2 i)
              r (fn x y)]
          (recur (inc i) (conj! res r)))))))
In order of conjectured impact on relative results:
Your do-with function uses nth to access the individual items in the input collections. nth operates in linear time on ranges, making do-with quadratic. Needless to say, this will kill performance on large collections.
range produces chunked seqs and map handles those extremely efficiently. (Essentially it produces chunks of up to 32 elements -- here it will in fact be exactly 32 -- by running a tight loop over the internal array of each input chunk in turn, placing results in internal arrays of output chunks.)
Benchmarking with time doesn't give you steady-state performance. (Which is why one should really use a proper benchmarking library; in the case of Clojure, Criterium is the standard solution. A minimal example follows below.)
Incidentally, (map #(/ %1 %2) xs ys) can simply be written as (map / xs ys).
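For reference, a minimal Criterium sketch (my addition, assuming the criterium library is on the classpath):
(require '[criterium.core :refer [quick-bench]])
;; quick-bench warms up the JIT and reports statistics rather than a single run
(quick-bench (doall (map / (range 1 1000) (range 1 1000))))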
Update:
I've benchmarked the map version, the original do-with and a new do-with version with Criterium, using (range 1 1000) as both inputs in each case (as in the question text), obtaining the following mean execution times:
;;; (range 1 1000)
new do-with 170.383334 µs
(doall (map ...)) 230.756753 µs
original do-with 15.624444 ms
Additionally, I've repeated the benchmark using a vector stored in a Var as input rather than ranges (that is, with (def r (vec (range 1 1000))) at the start and using r as both collection arguments in each benchmark). Unsurprisingly, the original do-with came in first -- nth is very fast on vectors (plus using nth with a vector avoids all the intermediate allocations involved in seq traversal).
;;; (vec (range 1 1000))
original do-with 73.975419 µs
new do-with 87.399952 µs
(doall (map ...)) 153.493128 µs
Here's the new do-with with linear time complexity:
(defn do-with [f xs ys]
(loop [xs (seq xs)
ys (seq ys)
ret (transient [])]
(if (and xs ys)
(recur (next xs)
(next ys)
(conj! ret (f (first xs) (first ys))))
(persistent! ret))))
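A quick usage check (my own, not from the answer):
(do-with / (range 1 5) (range 1 5))
;; => [1 1 1 1]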

Why is Clojure much faster than mit-scheme for equivalent functions?

I found this code in Clojure to sieve out first n prime numbers:
(defn sieve [n]
  (let [n (int n)]
    "Returns a list of all primes from 2 to n"
    (let [root (int (Math/round (Math/floor (Math/sqrt n))))]
      (loop [i (int 3)
             a (int-array n)
             result (list 2)]
        (if (>= i n)
          (reverse result)
          (recur (+ i (int 2))
                 (if (< i root)
                   (loop [arr a
                          inc (+ i i)
                          j (* i i)]
                     (if (>= j n)
                       arr
                       (recur (do (aset arr j (int 1)) arr)
                              inc
                              (+ j inc))))
                   a)
                 (if (zero? (aget a i))
                   (conj result i)
                   result)))))))
Then I wrote the equivalent (I think) code in Scheme (I use mit-scheme)
(define (sieve n)
  (let ((root (round (sqrt n)))
        (a (make-vector n)))
    (define (cross-out t to dt)
      (cond ((> t to) 0)
            (else
             (vector-set! a t #t)
             (cross-out (+ t dt) to dt))))
    (define (iter i result)
      (cond ((>= i n) (reverse result))
            (else
             (if (< i root)
                 (cross-out (* i i) (- n 1) (+ i i)))
             (iter (+ i 2) (if (vector-ref a i)
                               result
                               (cons i result))))))
    (iter 3 (list 2))))
The timing results are:
For Clojure:
(time (reduce + 0 (sieve 5000000)))
"Elapsed time: 168.01169 msecs"
For mit-scheme:
(time (fold + 0 (sieve 5000000)))
"Elapsed time: 3990 msecs"
Can anyone tell me why mit-scheme is more than 20 times slower?
update: "the difference was in iterpreted/compiled mode. After I compiled the mit-scheme code, it was running comparably fast. – abo-abo Apr 30 '12 at 15:43"
Modern incarnations of the Java Virtual Machine have extremely good performance when compared to interpreted languages. A significant amount of engineering resource has gone into the JVM, in particular the hotspot JIT compiler, highly tuned garbage collection and so on.
I suspect the difference you are seeing is primarily down to that. For example, if you look at "Are the Java programs faster?" you can see a comparison of Java vs Ruby which shows that Java outperforms by a factor of 220 on one of the benchmarks.
You don't say what JVM options you are running your Clojure benchmark with. Try running java with the -Xint flag, which runs in pure interpreted mode, and see what the difference is.
Also, it's possible that your example is too small to really warm up the JIT compiler. Using a larger example may yield an even larger performance difference.
To give you an idea of how much HotSpot is helping you, I ran your code on my 2011 MBP (quad-core 2.2GHz), using Java 1.6.0_31 with default opts (-server HotSpot) and in interpreted mode (-Xint), and saw a large difference:
; with -server hotspot (best of 10 runs)
>(time (reduce + 0 (sieve 5000000)))
"Elapsed time: 282.322 msecs"
838596693108
; in interpreted mode using -Xint cmdline arg
> (time (reduce + 0 (sieve 5000000)))
"Elapsed time: 3268.823 msecs"
838596693108
As to comparing Scheme and Clojure code, there were a few things to simplify at the Clojure end:
don't rebind the mutable array in loops;
removing many of those explicit primitive coercions makes no change in performance -- as of Clojure 1.3, literals in function calls compile to primitives if such a function signature is available, and generally the difference in performance is so small that it gets quickly drowned out by any other operations happening in a loop;
add a primitive long annotation to the fn signature, thus removing the rebinding of n;
the call to Math/floor is not needed -- the int coercion has the same semantics.
Code:
(defn sieve [^long n]
  (let [root (int (Math/sqrt n))
        a (int-array n)]
    (loop [i 3, result (list 2)]
      (if (>= i n)
        (reverse result)
        (do
          (when (< i root)
            (loop [inc (+ i i), j (* i i)]
              (when (< j n)
                (aset a j 1)
                (recur inc (+ j inc)))))
          (recur (+ i 2) (if (zero? (aget a i))
                           (conj result i)
                           result)))))))
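A quick sanity check on a small input (my addition, not from the answer):
(reduce + 0 (sieve 100))
;; => 1060, the sum of the primes below 100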

Why is there such a difference in speed between Clojure's loop and iterate methods

I have a question about why there is such a difference in speed between the loop method and the iterate method in Clojure. I followed the tutorial at http://www.learningclojure.com/2010/02/clojure-dojo-2-what-about-numbers-that.html and defined two square-root functions using Heron's method:
(defn avg [& nums] (/ (apply + nums) (count nums)))
(defn abs [x] (if (< x 0) (- x) x))
(defn close [a b] (-> a (- b) abs (< 1e-10)))

(defn sqrt [num]
  (loop [guess 1]
    (if (close num (* guess guess))
      guess
      (recur (avg guess (/ num guess))))))

(time (dotimes [n 10000] (sqrt 10))) ;;"Elapsed time: 1169.502 msecs"

;; Calculation using the iterate method
(defn sqrt2 [number]
  (first (filter #(close number (* % %))
                 (iterate #(avg % (/ number %)) 1.0))))

(time (dotimes [n 10000] (sqrt2 10))) ;;"Elapsed time: 184.119 msecs"
There is about a 10x difference in speed between the two methods, and I'm wondering what is happening below the surface to make the difference so pronounced.
Your results are surprising: normally loop/recur is the fastest construct in Clojure for looping.
I suspect that the JVM JIT has worked out a clever optimisation for the iterate method, but not for the loop/recur version. It's surprising how often this happens when you use clean functional code in Clojure: it seems to be very amenable to optimisation.
Note that you can get a substantial speedup in both versions by explicitly using doubles:
(set! *unchecked-math* true)

(defn sqrt [num]
  (loop [guess (double 1)]
    (if (close num (* guess guess))
      guess
      (recur (double (avg guess (/ num guess)))))))

(time (dotimes [n 10000] (sqrt 10)))
=> "Elapsed time: 25.347291 msecs"

(defn sqrt2 [number]
  (let [number (double number)]
    (first (filter #(close number (* % %))
                   (iterate #(avg % (/ number %)) 1.0)))))

(time (dotimes [n 10000] (sqrt2 10)))
=> "Elapsed time: 32.939526 msecs"
As expected, the loop/recur version now has a slight edge. Results are for Clojure 1.3
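As a further illustration (my own sketch, not from the answer; sqrt3 is a made-up name), inlining avg and close keeps the whole loop in primitive doubles and avoids the boxed varargs call to avg entirely:
(defn sqrt3 [num]
  (let [num (double num)]
    (loop [guess 1.0]
      (if (< (Math/abs (- num (* guess guess))) 1e-10)   ; inlined close
        guess
        (recur (/ (+ guess (/ num guess)) 2.0))))))      ; inlined avg
;; (time (dotimes [n 10000] (sqrt3 10)))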

Fast Prime Number Generation in Clojure

I've been working on solving Project Euler problems in Clojure to get better, and I've already run into prime number generation a couple of times. My problem is that it is just taking way too long. I was hoping someone could help me find an efficient way to do this in a Clojure-y way.
When I first did this, I brute-forced it. That was easy to do. But calculating 10001 prime numbers took 2 minutes this way on a Xeon 2.33GHz, too long for the rules, and too long in general. Here was the algorithm:
(defn next-prime-slow
  "Find the next prime number, checking against our already existing list"
  ([sofar guess]
   (if (not-any? #(zero? (mod guess %)) sofar)
     guess                         ; Then we have a prime
     (recur sofar (+ guess 2)))))  ; Try again

(defn find-primes-slow
  "Finds prime numbers, slowly"
  ([]
   (find-primes-slow 10001 [2 3]))  ; How many we need, initial prime seeds
  ([needed sofar]
   (if (<= needed (count sofar))
     sofar  ; Found enough, we're done
     (recur needed (concat sofar [(next-prime-slow sofar (last sofar))])))))
By replacing next-prime-slow with a newer routine that took some additional rules into account (like the 6n +/- 1 property) I was able to speed things up to about 70 seconds.
Next I tried making a sieve of Eratosthenes in pure Clojure. I don't think I got all the bugs out, but I gave up because it was simply way too slow (even worse than the above, I think).
(defn clean-sieve
  "Clean the sieve of what we know isn't prime based"
  [seeds-left sieve]
  (if (zero? (count seeds-left))
    sieve  ; Nothing left to filter the list against
    (recur
     (rest seeds-left)  ; The numbers we haven't checked against
     (filter #(> (mod % (first seeds-left)) 0) sieve))))  ; Filter out multiples

(defn self-clean-sieve  ; This seems to be REALLY slow
  "Remove the stuff in the sieve that isn't prime based on itself"
  ([sieve]
   (self-clean-sieve (rest sieve) (take 1 sieve)))
  ([sieve clean]
   (if (zero? (count sieve))
     clean
     (let [cleaned (filter #(> (mod % (last clean)) 0) sieve)]
       (recur (rest cleaned) (into clean [(first cleaned)]))))))

(defn find-primes
  "Finds prime numbers, hopefully faster"
  ([]
   (find-primes 10001 [2]))
  ([needed seeds]
   (if (>= (count seeds) needed)
     seeds  ; We have enough
     (recur  ; Recalculate
      needed
      (into
       seeds  ; Stuff we've already found
       (let [start (last seeds)
             end-range (+ start 150000)]  ; NOTE HERE
         (reverse
          (self-clean-sieve
           (clean-sieve seeds (range (inc start) end-range))))))))))
This is bad. It also causes stack overflows if the number 150000 is made smaller, despite the fact that I'm using recur. That may be my fault.
Next I tried a sieve, using Java methods on a Java ArrayList. That took quite a bit of time, and memory.
My latest attempt is a sieve using a Clojure hash-map, inserting all the numbers in the sieve then dissoc'ing numbers that aren't prime. At the end, it takes the key list, which are the prime numbers it found. It takes about 10-12 seconds to find 10000 prime numbers. I'm not sure it's fully debugged yet. It's recursive too (using recur and loop), since I'm trying to be Lispy.
So with these kind of problems, problem 10 (sum up all primes under 2000000) is killing me. My fastest code came up with the right answer, but it took 105 seconds to do it, and needed quite a bit of memory (I gave it 512 MB just so I wouldn't have to fuss with it). My other algorithms take so long I always ended up stopping them first.
I could use a sieve to calculate that many primes in Java or C quite fast and without using so much memory. I know I must be missing something in my Clojure/Lisp style that's causing the problem.
Is there something I'm doing really wrong? Is Clojure just kinda slow with large sequences? Reading some of the Project Euler discussions, people have calculated the first 10000 primes in other Lisps in under 100 milliseconds. I realize the JVM may slow things down and Clojure is relatively young, but I wouldn't expect a 100x difference.
Can someone enlighten me on a fast way to calculate prime numbers in Clojure?
Here's another approach that celebrates Clojure's Java interop. This takes 374 ms on a 2.4 GHz Core 2 Duo (running single-threaded). I let the efficient Miller-Rabin implementation in Java's BigInteger#isProbablePrime deal with the primality check.
(def certainty 5)

(defn prime? [n]
  (.isProbablePrime (BigInteger/valueOf n) certainty))

(concat [2]
        (take 10001
              (filter prime?
                      (take-nth 2
                                (range 1 Integer/MAX_VALUE)))))
The Miller-Rabin certainty of 5 is probably not good enough for numbers much larger than this. A certainty of 5 means the number is prime with probability 1 - 0.5^certainty, i.e. 96.875%.
I realize this is a very old question, but I recently ended up looking for the same thing and the links here weren't what I was looking for (restricted to functional style as much as possible, lazily generating every prime I want).
I stumbled upon a nice F# implementation, so all credit goes to its author. I merely ported it to Clojure:
(defn gen-primes "Generates an infinite, lazy sequence of prime numbers"
[]
(letfn [(reinsert [table x prime]
(update-in table [(+ prime x)] conj prime))
(primes-step [table d]
(if-let [factors (get table d)]
(recur (reduce #(reinsert %1 d %2) (dissoc table d) factors)
(inc d))
(lazy-seq (cons d (primes-step (assoc table (* d d) (list d))
(inc d))))))]
(primes-step {} 2)))
Usage is simply
(take 5 (gen-primes))
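If the port is correct, this should evaluate to (my note, not from the original answer):
(2 3 5 7 11)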
Very late to the party, but I'll throw in an example, using Java BitSets:
(defn sieve
  "Returns a BitSet with bits set for each prime up to n"
  [n]
  (let [bs (new java.util.BitSet n)]
    (.flip bs 2 n)
    (doseq [i (range 4 n 2)] (.clear bs i))
    (doseq [p (range 3 (Math/sqrt n))]
      (if (.get bs p)
        (doseq [q (range (* p p) n (* 2 p))] (.clear bs q))))
    bs))
Running this on a 2014 Macbook Pro (2.3GHz Core i7), I get:
user=> (time (do (sieve 1e6) nil))
"Elapsed time: 64.936 msecs"
See the last example here:
http://clojuredocs.org/clojure_core/clojure.core/lazy-seq
;; An example combining lazy sequences with higher order functions
;; Generate prime numbers using Eratosthenes Sieve
;; See http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
;; Note that the starting set of sieved numbers should be
;; the set of integers starting with 2 i.e., (iterate inc 2)
(defn sieve [s]
  (cons (first s)
        (lazy-seq (sieve (filter #(not= 0 (mod % (first s)))
                                 (rest s))))))
user=> (take 20 (sieve (iterate inc 2)))
(2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71)
Here's a nice and simple implementation:
http://clj-me.blogspot.com/2008/06/primes.html
... but it is written for some pre-1.0 version of Clojure. See lazy_seqs in Clojure Contrib for one that works with the current version of the language.
(defn sieve
  [[p & rst]]
  ;; make sure the stack size is sufficiently large!
  (lazy-seq (cons p (sieve (remove #(= 0 (mod % p)) rst)))))

(def primes (sieve (iterate inc 2)))
With a 10M stack size, I get the 1001st prime in ~33 seconds on a 2.1 GHz MacBook.
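For reference (my note, not from the original answer), the JVM thread stack size can be raised with the -Xss option, for example:
java -Xss10m -cp clojure.jar clojure.main
or, in a Leiningen project.clj:
:jvm-opts ["-Xss10m"]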
So I've just started with Clojure, and yeah, this comes up a lot on Project Euler doesn't it? I wrote a pretty fast trial division prime algorithm, but it doesn't really scale too far before each run of divisions becomes prohibitively slow.
So I started again, this time using the sieve method:
(defn clense
  "Walks through the sieve and nils out multiples of step"
  [primes step i]
  (if (<= i (count primes))
    (recur
     (assoc! primes i nil)
     step
     (+ i step))
    primes))

(defn sieve-step
  "Only works if i is >= 3"
  [primes i]
  (if (< i (count primes))
    (recur
     (if (nil? (primes i)) primes (clense primes (* 2 i) (* i i)))
     (+ 2 i))
    primes))

(defn prime-sieve
  "Returns a lazy list of all primes smaller than x"
  [x]
  (drop 2
        (filter (complement nil?)
                (persistent! (sieve-step
                              (clense (transient (vec (range x))) 2 4) 3)))))
Usage and speed:
user=> (time (do (prime-sieve 1E6) nil))
"Elapsed time: 930.881 msecs"
I'm pretty happy with the speed: it's running out of a REPL running on a 2009 MBP. It's mostly fast because I completely eschew idiomatic Clojure and instead loop around like a monkey. It's also 4X faster because I'm using a transient vector to work on the sieve instead of staying completely immutable.
Edit: After a couple of suggestions / bug fixes from Will Ness it now runs a whole lot faster.
Here's a simple sieve in Scheme:
http://telegraphics.com.au/svn/puzzles/trunk/programming-in-scheme/primes-up-to.scm
Here's a run for primes up to 10,000:
#;1> (include "primes-up-to.scm")
; including primes-up-to.scm ...
#;2> ,t (primes-up-to 10000)
0.238s CPU time, 0.062s GC time (major), 180013 mutations, 130/4758 GCs (major/minor)
(2 3 5 7 11 13...
Here is a Clojure solution. i is the current number being considered and p is a list of all prime numbers found so far. If division by some already-found prime leaves a remainder of zero, the number i is not a prime number and recursion occurs with the next number. Otherwise the prime number is added to p in the next recursion (as well as continuing with the next number).
(defn primes [i p]
  (if (some #(zero? (mod i %)) p)
    (recur (inc i) p)
    (cons i (lazy-seq (primes (inc i) (conj p i))))))

(time (do (doall (take 5001 (primes 2 []))) nil))
; Elapsed time: 2004.75587 msecs
(time (do (doall (take 10001 (primes 2 []))) nil))
; Elapsed time: 7700.675118 msecs
Update:
Here is a much slicker solution based on this answer above.
Basically the list of integers starting with two is filtered lazily. Filtering accepts a number i only if no prime number divides it with a remainder of zero. All prime numbers whose square is less than or equal to i are tried.
Note that primes is used recursively, but Clojure manages to prevent endless recursion. Also note that the lazy sequence primes caches results (that's why the performance results are a bit counterintuitive at first sight).
(def primes
  (lazy-seq
   (filter (fn [i] (not-any? #(zero? (rem i %))
                             (take-while #(<= (* % %) i) primes)))
           (drop 2 (range)))))
(time (first (drop 10000 primes)))
; Elapsed time: 542.204211 msecs
(time (first (drop 20000 primes)))
; Elapsed time: 786.667644 msecs
(time (first (drop 40000 primes)))
; Elapsed time: 1780.15807 msecs
(time (first (drop 40000 primes)))
; Elapsed time: 8.415643 msecs
Based on Will's comment, here is my take on postponed-primes:
(defn postponed-primes-recursive
  ([]
   (concat (list 2 3 5 7)
           (lazy-seq (postponed-primes-recursive
                      {}
                      3
                      9
                      (rest (rest (postponed-primes-recursive)))
                      9))))
  ([D p q ps c]
   (letfn [(add-composites
             [D x s]
             (loop [a x]
               (if (contains? D a)
                 (recur (+ a s))
                 (persistent! (assoc! (transient D) a s)))))]
     (loop [D D
            p p
            q q
            ps ps
            c c]
       (if (not (contains? D c))
         (if (< c q)
           (cons c (lazy-seq (postponed-primes-recursive D p q ps (+ 2 c))))
           (recur (add-composites D
                                  (+ c (* 2 p))
                                  (* 2 p))
                  (first ps)
                  (* (first ps) (first ps))
                  (rest ps)
                  (+ c 2)))
         (let [s (get D c)]
           (recur (add-composites
                   (persistent! (dissoc! (transient D) c))
                   (+ c s)
                   s)
                  p
                  q
                  ps
                  (+ c 2))))))))
Initial submission for comparison:
Here is my attempt to port this prime number generator from Python to Clojure. The below returns an infinite lazy sequence.
(defn primes
  []
  (letfn [(prime-help
            [foo bar]
            (loop [D foo
                   q bar]
              (if (nil? (get D q))
                (cons q (lazy-seq
                         (prime-help
                          (persistent! (assoc! (transient D) (* q q) (list q)))
                          (inc q))))
                (let [factors-of-q (get D q)
                      key-val (interleave
                               (map #(+ % q) factors-of-q)
                               (map #(cons % (get D (+ % q) (list)))
                                    factors-of-q))]
                  (recur (persistent!
                          (dissoc!
                           (apply assoc! (transient D) key-val)
                           q))
                         (inc q))))))]
    (prime-help {} 2)))
Usage:
user=> (first (primes))
2
user=> (second (primes))
3
user=> (nth (primes) 100)
547
user=> (take 5 (primes))
(2 3 5 7 11)
user=> (time (nth (primes) 10000))
"Elapsed time: 409.052221 msecs"
104743
edit:
Performance comparison, where postponed-primes uses a queue of primes seen so far rather than a recursive call to postponed-primes:
user=> (def counts (list 200000 400000 600000 800000))
#'user/counts
user=> (map #(time (nth (postponed-primes) %)) counts)
("Elapsed time: 1822.882 msecs"
"Elapsed time: 3985.299 msecs"
"Elapsed time: 6916.98 msecs"
"Elapsed time: 8710.791 msecs"
2750161 5800139 8960467 12195263)
user=> (map #(time (nth (postponed-primes-recursive) %)) counts)
("Elapsed time: 1776.843 msecs"
"Elapsed time: 3874.125 msecs"
"Elapsed time: 6092.79 msecs"
"Elapsed time: 8453.017 msecs"
2750161 5800139 8960467 12195263)
Idiomatic, and not too bad
(def primes
  (cons 1 (lazy-seq
           (filter (fn [i]
                     (not-any? (fn [p] (zero? (rem i p)))
                               (take-while #(<= % (Math/sqrt i))
                                           (rest primes))))
                   (drop 2 (range))))))
=> #'user/primes
(first (time (drop 10000 primes)))
"Elapsed time: 0.023135 msecs"
=> 104729
From: http://steloflute.tistory.com/entry/Clojure-%ED%94%84%EB%A1%9C%EA%B7%B8%EB%9E%A8-%EC%B5%9C%EC%A0%81%ED%99%94
Using Java array
(defmacro loopwhile [init-symbol init whilep step & body]
  `(loop [~init-symbol ~init]
     (when ~whilep ~@body (recur (+ ~init-symbol ~step)))))

(defn primesUnderb [limit]
  (let [p (boolean-array limit true)]
    (loopwhile i 2 (< i (Math/sqrt limit)) 1
               (when (aget p i)
                 (loopwhile j (* i 2) (< j limit) i (aset p j false))))
    (filter #(aget p %) (range 2 limit))))
Usage and speed:
user=> (time (def p (primesUnderb 1e6)))
"Elapsed time: 104.065891 msecs"
After coming to this thread and searching for a faster alternative to those already here, I am surprised nobody linked to the following article by Christophe Grand:
(defn primes3 [max]
  (let [enqueue (fn [sieve n factor]
                  (let [m (+ n (+ factor factor))]
                    (if (sieve m)
                      (recur sieve m factor)
                      (assoc sieve m factor))))
        next-sieve (fn [sieve candidate]
                     (if-let [factor (sieve candidate)]
                       (-> sieve
                           (dissoc candidate)
                           (enqueue candidate factor))
                       (enqueue sieve candidate candidate)))]
    (cons 2 (vals (reduce next-sieve {} (range 3 max 2))))))
As well as a lazy version:
(defn lazy-primes3 []
  (letfn [(enqueue [sieve n step]
            (let [m (+ n step)]
              (if (sieve m)
                (recur sieve m step)
                (assoc sieve m step))))
          (next-sieve [sieve candidate]
            (if-let [step (sieve candidate)]
              (-> sieve
                  (dissoc candidate)
                  (enqueue candidate step))
              (enqueue sieve candidate (+ candidate candidate))))
          (next-primes [sieve candidate]
            (if (sieve candidate)
              (recur (next-sieve sieve candidate) (+ candidate 2))
              (cons candidate
                    (lazy-seq (next-primes (next-sieve sieve candidate)
                                           (+ candidate 2))))))]
    (cons 2 (lazy-seq (next-primes {} 3)))))
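A quick usage check (my own, not from the article):
(take 10 (lazy-primes3))
;; => (2 3 5 7 11 13 17 19 23 29)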
Plenty of answers already, but I have an alternative solution which generates an infinite sequence of primes. I was also interested in benchmarking a few solutions.
First, some Java interop for reference:
(defn prime-fn-1 [accuracy]
  (cons 2
        (for [i (range)
              :let [prime-candidate (-> i (* 2) (+ 3))]
              :when (.isProbablePrime (BigInteger/valueOf prime-candidate) accuracy)]
          prime-candidate)))
Benjamin @ https://stackoverflow.com/a/7625207/3731823 is primes-fn-2
nha @ https://stackoverflow.com/a/36432061/3731823 is primes-fn-3
My implementation is primes-fn-4:
(defn primes-fn-4 []
  (let [primes-with-duplicates
        (->> (for [i (range)] (-> i (* 2) (+ 5)))  ; 5, 7, 9, 11, ...
             (reductions
              (fn [known-primes candidate]
                (if (->> known-primes
                         (take-while #(<= (* % %) candidate))
                         (not-any? #(-> candidate (mod %) zero?)))
                  (conj known-primes candidate)
                  known-primes))
              [3])       ; Our initial list of known odd primes
             (cons [2])  ; Put in the non-odd one
             (map (comp first rseq)))]  ; O(1) lookup of the last element of the vec "known-primes"
    ;; Ugh, ugly de-duplication :(
    (->> (map #(when (not= % %2) %) primes-with-duplicates (rest primes-with-duplicates))
         (remove nil?))))
Reported numbers (time in milliseconds to count the first N primes) are the fastest of 5 runs; there were no JVM restarts between experiments, so your mileage may vary:
1e6 3e6
(primes-fn-1 5) 808 2664
(primes-fn-1 10) 952 3198
(primes-fn-1 20) 1440 4742
(primes-fn-1 30) 1881 6030
(primes-fn-2) 1868 5922
(primes-fn-3) 489 1755 <-- WOW!
(primes-fn-4) 2024 8185
If you don't need a lazy solution and you just want a sequence of primes below a certain limit, the straightforward implementation of the Sieve of Eratosthenes is pretty fast. Here's my version using transients:
(defn classic-sieve
  "Returns sequence of primes less than N"
  [n]
  (loop [nums (transient (vec (range n))) i 2]
    (cond
      (> (* i i) n) (remove nil? (nnext (persistent! nums)))
      (nums i) (recur (loop [nums nums j (* i i)]
                        (if (< j n)
                          (recur (assoc! nums j nil) (+ j i))
                          nums))
                      (inc i))
      :else (recur nums (inc i)))))
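For example (my own check, not from the answer):
(classic-sieve 30)
;; => (2 3 5 7 11 13 17 19 23 29)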
I just started using Clojure so I don't know if it's good but here is my solution:
(defn divides? [x i]
  (zero? (mod x i)))

(defn factors [x]
  (flatten (map #(list % (/ x %))
                (filter #(divides? x %)
                        (range 1 (inc (Math/floor (Math/sqrt x))))))))

(defn prime? [x]
  (empty? (filter #(and divides? (not= x %) (not= 1 %))
                  (factors x))))

(def primes
  (filter prime? (range 2 java.lang.Integer/MAX_VALUE)))

(defn sum-of-primes-below [n]
  (reduce + (take-while #(< % n) primes)))
(reduce + (take-while #(< % n) primes)))