Although I've done some small amount of programming in functional languages before, I've just started playing with Clojure. Since doing the same kind of "Hello World" programs gets old when learning a new language, I decided to go through the Cinder "Hello, Cinder" tutorial, translating it to Clojure and Quil along the way. In Chapter 5 of the tutorial, you come across this C++ snippet to calculate acceleration for a list of particles:
void ParticleController::repulseParticles() {
    for( list<Particle>::iterator p1 = mParticles.begin(); p1 != mParticles.end(); ++p1 ) {
        list<Particle>::iterator p2 = p1;
        for( ++p2; p2 != mParticles.end(); ++p2 ) {
            Vec2f dir = p1->mLoc - p2->mLoc;
            float distSqrd = dir.lengthSquared();
            if( distSqrd > 0.0f ){
                dir.normalize();
                float F = 1.0f/distSqrd;
                p1->mAcc += dir * ( F / p1->mMass );
                p2->mAcc -= dir * ( F / p2->mMass );
            }
        }
    }
}
In my eyes, this code has one very important characteristic: it compares each pair of particles exactly once, updates both particles, and then skips that combination for the rest of the pass. This matters for performance, since this piece of code is executed once every frame and there are potentially thousands of particles on screen at any given time (someone who understands big O better than I do can probably quantify the difference between this method and iterating over every combination multiple times).
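As a small illustration (my own sketch, not from the tutorial or the original code), the same "visit each unordered pair once" idea can be expressed in Clojure with index ranges; for n particles this is n*(n-1)/2 pair visits rather than n*(n-1):

(defn each-unordered-pair [xs]
  ;; pair every element with each later element, exactly once per pair
  (let [v (vec xs)
        n (count v)]
    (for [i (range n)
          j (range (inc i) n)]
      [(v i) (v j)])))

;; (each-unordered-pair [:a :b :c]) => ([:a :b] [:a :c] [:b :c])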
For reference, I'll show what I came up with. You should notice that the code below only updates one particle at a time, so I'm doing a lot of "extra" work comparing the same particles twice. (Note: some methods left out for brevity, such as "normalize"):
(defn calculate-acceleration [particle1 particle2]
  (let [x-distance-between (- (:x particle1) (:x particle2))
        y-distance-between (- (:y particle1) (:y particle2))
        distance-squared (+ (* x-distance-between x-distance-between)
                            (* y-distance-between y-distance-between))
        normalized-direction (normalize x-distance-between y-distance-between)
        force (if (> distance-squared 0)
                (/ (/ 1.0 distance-squared) (:mass particle1))
                0)]
    {:x (+ (:x (:accel particle1)) (* (first normalized-direction) force))
     :y (+ (:y (:accel particle1)) (* (second normalized-direction) force))}))

(defn update-acceleration [particle particles]
  (assoc particle :accel
         (reduce #(do {:x (+ (:x %) (:x %2)) :y (+ (:y %) (:y %2))})
                 {:x 0 :y 0}
                 (for [p particles :when (not= particle p)]
                   (calculate-acceleration particle p)))))

(def particles (map #(update-acceleration % particles) particles))
Update: So here's what I ultimately came up with, in case anyone is interested:
(defn get-new-accelerations [particles]
  (let [particle-combinations (combinations particles 2)
        new-accelerations (map #(calculate-acceleration (first %) (second %))
                               particle-combinations)
        new-accelerations-grouped (for [p particles]
                                    (filter #(not (nil? %))
                                            (map
                                             #(cond (= (first %) p) %2
                                                    (= (second %) p) (vec-scale %2 -1))
                                             particle-combinations new-accelerations)))]
    (map #(reduce (fn [accum accel] (if (not (nil? accel)) (vec-add accel accum))) {:x 0 :y 0} %)
         new-accelerations-grouped)))
Essentially, the process goes something like this:
particle-combinations: Calculate all combinations of particles using the combinatorics "combinations" function
new-accelerations: Calculate a list of accelerations based on the list of combinations
new-accelerations-grouped: Group the accelerations for each particle (in order) by looping over every particle and checking the list of combinations, building a list of lists where each sub-list contains that particle's individual accelerations. There's a subtlety: if the particle is the first entry in a combination, it gets the original acceleration; if it's the second, it gets the opposite acceleration. Nils are then filtered out
Reduce each sub-list of accelerations to the sum of those accelerations
The question now is, is this any faster than what I was doing before? (I haven't tested it yet, but my initial guess is no way).
Update 2:
Here's another version I came up with. I think this version is much better in all respects than the one I posted above: it uses a transient data structure for performance and easy mutability of the new list, and it uses loop/recur. It should be much faster than the example above, but I haven't tested it yet to verify.
(defn transient-particle-accelerations [particles]
  (let [num-of-particles (count particles)]
    (loop [i 0
           new-particles (transient particles)]
      (if (< i (- num-of-particles 1))
        (recur (inc i)
               ;; thread the transient through the inner loop: assoc! is used for its
               ;; return value rather than as an in-place mutation
               (loop [j (inc i)
                      new-particles new-particles]
                 (if (< j num-of-particles)
                   (let [p1 (nth particles i)
                         p2 (nth particles j)
                         new-p1 (nth new-particles i)
                         new-p2 (nth new-particles j)
                         new-acceleration (calculate-acceleration p1 p2)]
                     (recur (inc j)
                            (-> new-particles
                                (assoc! i (assoc new-p1 :accel (vec-add (:accel new-p1) new-acceleration)))
                                (assoc! j (assoc new-p2 :accel (vec-add (:accel new-p2) (vec-scale new-acceleration -1)))))))
                   new-particles)))
        (persistent! new-particles)))))
Re-def-ing particles when you want to update them doesn't seem quite right -- I'm guessing that using a ref to store the state of the world, and then updating that ref between cycles, would make more sense.
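A minimal sketch of that idea (the names here are mine, not from the post): keep the world state in an atom and swap in the freshly computed particles once per frame; a ref with dosync would look much the same.

(def world (atom {:particles []}))

(defn step! []
  ;; recompute every particle's acceleration and install the new seq atomically
  (swap! world update-in [:particles]
         (fn [ps] (doall (map #(update-acceleration % ps) ps)))))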
Down to the algorithmic issue: this looks to me like a use case for clojure.math.combinatorics. Something like the following:
(require '[clojure.math.combinatorics :as combinatorics])
(defn update-particles [particles]
  (apply concat
         (for [[p1 p2] (combinatorics/combinations particles 2)
               :let [x-distance-between (- (:x p1) (:x p2))
                     y-distance-between (- (:y p1) (:y p2))
                     distance-squared (+ (* x-distance-between x-distance-between)
                                         (* y-distance-between y-distance-between))
                     normalized-direction (normalize x-distance-between y-distance-between)
                     p1-force (if (> distance-squared 0)
                                (/ (/ 1.0 distance-squared) (:mass p1))
                                0)
                     p2-force (if (> distance-squared 0)
                                (/ (/ 1.0 distance-squared) (:mass p2))
                                0)]]
           ;; p2 gets the opposite-direction contribution, matching the -= in the C++ original
           [{:x (+ (:x (:accel p1)) (* (first normalized-direction) p1-force))
             :y (+ (:y (:accel p1)) (* (second normalized-direction) p1-force))}
            {:x (- (:x (:accel p2)) (* (first normalized-direction) p2-force))
             :y (- (:y (:accel p2)) (* (second normalized-direction) p2-force))}])))
...you'll still need the reduce, but this way we're pulling updated values for both particles out of the loop.
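The remaining reduce could be as simple as this sketch (my own addition, names are illustrative), folding one particle's per-pair contributions into a single acceleration map:

(defn sum-accelerations [accels]
  ;; sum a seq of {:x dx :y dy} contributions for one particle
  (reduce (fn [a b] {:x (+ (:x a) (:x b))
                     :y (+ (:y a) (:y b))})
          {:x 0 :y 0}
          accels))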
So, essentially, you want to select all subsets of size two, then operate on each such pair?
Here's a combinatorics library, http://richhickey.github.com/clojure-contrib/combinatorics-api.html, which has a combinations function:
combinations
Usage: (combinations items n)
All the unique ways of taking n different elements from items
Use that to generate your list, then iterate over that.
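For example, using the clojure.math.combinatorics require from the earlier snippet (the output shape is what I'd expect from the docstring; double-check against your version):

(combinatorics/combinations [:a :b :c] 2)
;; => ((:a :b) (:a :c) (:b :c))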
I am trying to implement a solution for the minimum swaps required to sort an array in Clojure.
The code works, but takes about a second to solve for a 7-element vector, which is very poor compared to a similar solution in Java.
I already tried providing explicit types, but it doesn't seem to make a difference.
I tried using transients, but there is an open bug for subvec, which I am using in my solution: https://dev.clojure.org/jira/browse/CLJ-787
Any pointers on how I can optimize the solution?
(defn swap-arr [vec x min-elem]
  (let [y (long (.indexOf vec min-elem))]
    (assoc vec x (vec y) y (vec x))))

;; Find minimumSwaps required to sort the array. The algorithm starts by iterating from 0 to n-1. In each iteration, it places the least element in the ith position.
(defn minimumSwaps [input]
  (loop [mv input, i (long 0), swap-count (long 0)]
    (if (< i (count input))
      (let [min-elem (apply min (drop i mv))]
        (if (not= min-elem (mv i))
          (recur (swap-arr mv i min-elem)
                 (unchecked-inc i)
                 (unchecked-inc swap-count))
          (recur mv
                 (unchecked-inc i)
                 swap-count)))
      swap-count)))

(time (println (minimumSwaps [7 6 5 4 3 2 1])))
There are a few things that can be improved in your solution, both algorithmically and efficiency-wise. The main improvement is to remember both the minimal element in the vector and its position when you search for it. This allows you to not search for the minimal element again with .indexOf.
Here's my revised solution that is ~4 times faster:
(defn swap-arr [v x y]
(assoc v x (v y) y (v x)))
(defn find-min-and-position-in-vector [v, ^long start-from]
(let [size (count v)]
(loop [i start-from, min-so-far (long (nth v start-from)), min-pos start-from]
(if (< i size)
(let [x (long (nth v i))]
(if (< x min-so-far)
(recur (inc i) x i)
(recur (inc i) min-so-far min-pos)))
[min-so-far min-pos]))))
(defn minimumSwaps [input]
(loop [mv input, i (long 0), swap-count (long 0)]
(if (< i (count input))
(let [[min-elem min-pos] (find-min-and-position-in-vector mv i)]
(if (not= min-elem (mv i))
(recur (swap-arr mv i min-pos),
(inc i),
(inc swap-count))
(recur mv,
(inc i),
swap-count)))
swap-count)))
To understand where the performance bottlenecks in your program are, it is better to use https://github.com/clojure-goes-fast/clj-async-profiler than to guess.
Notice how I dropped unchecked-* stuff from your code. It is not as important here, and it is easy to get it wrong. If you want to use them for performance, make sure to check the resulting bytecode with a decompiler: https://github.com/clojure-goes-fast/clj-java-decompiler
A similar implementation in Java runs in almost half the time.
That's actually fairly good for Clojure, given that you use immutable vectors where in Java you probably use arrays. After rewriting the Clojure solution to arrays, the performance would be almost the same.
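For illustration, here is a rough sketch (mine, not benchmarked here) of the same selection-style swap counting on a mutable long-array, which avoids the persistent-vector overhead:

(defn minimum-swaps-array [input]
  (let [^longs a (long-array input)
        n (alength a)]
    (loop [i 0, swaps 0]
      (if (< i n)
        ;; find the position of the minimum element in a[i..n-1]
        (let [min-pos (loop [j (inc i), mp i]
                        (if (< j n)
                          (recur (inc j) (if (< (aget a j) (aget a mp)) j mp))
                          mp))]
          (if (== (aget a min-pos) (aget a i))
            (recur (inc i) swaps)
            (let [tmp (aget a i)]
              (aset a i (aget a min-pos))
              (aset a min-pos tmp)
              (recur (inc i) (inc swaps)))))
        swaps))))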
Given vector size n and window size k, how can I efficiently calculate the sliding window minimum in O(n log k) time? I.e., for vector [1 4 3 2 5 4 2] and window size 2, the output would be [1 3 2 2 4 2].
Obviously I can do it using partition and map, but that's O(n * k) time.
I think I need to keep track of the minimum in a sorted map, and update the map when it's outside the window. But although I can get the min of a sorted map in log time, searching through the map to find any indexes that are expired is not log time.
Thanks.
You can solve this with a priority queue based on Clojure's priority map data structure. We index the values in the window by their position in the vector.
The value of the map's first entry is the window minimum.
At each step we add the new entry and get rid of the oldest one by its key (vector position).
A possible implementation is
(use '[clojure.data.priority-map :only [priority-map]])
(defn windowed-min [k coll]
(let [numbered (map-indexed list coll)
[head tail] (split-at k numbered)
init-win (into (priority-map) head)
win-seq (reductions
(fn [w [i n]]
(-> w (dissoc (- i k)) (assoc i n)))
init-win
tail)]
(map (comp val first) win-seq)))
For example,
(windowed-min 2 [1 4 3 2 5 4 2])
=> (1 3 2 2 4 2)
The solution is developed lazily, so it can be applied to an endless sequence.
After the initialisation, which is O(k), the function computes each element of the sequence in O(log k) time, as noted here.
You can solve this in linear time, O(n) rather than O(n log k), as described by 1. http://articles.leetcode.com/sliding-window-maximum/ (easily changed from finding the max to finding the min) and 2. https://people.cs.uct.ac.za/~ksmith/articles/sliding_window_minimum.html
The approach needs a double-ended queue to manage previous values; most queue operations (push/pop/peek, etc.) take O(1) time, rather than the O(log k) of a priority queue (i.e. a priority map). I used a double-ended queue from https://github.com/pjstadig/deque-clojure
Main code implementing the first reference above (for min rather than max):
(defn windowed-min-queue [w a]
  (let [deque-init (fn deque-init []
                     (reduce (fn [dq i] (dq-push-back i (prune-back a i dq)))
                             empty-deque
                             (range w)))
        process-min (fn process-min [dq]
                      (reductions (fn [q i]
                                    (->> q
                                         (prune-back a i)
                                         (prune-front i w)
                                         (dq-push-back i)))
                                  dq
                                  (range w (count a))))
        init (deque-init)
        result (process-min init)]
    (map #(nth a (dq-front %)) result)))
Comparing the speed of this method to the other solution, which uses a priority map, we get the following (note: I like the other solution as well, since it's simpler).
; Test using Random arrays of data
(def N 1000000)
(def a (into [] (take N (repeatedly #(rand-int 50)))))
(def b (into [] (take N (repeatedly #(rand-int 50)))))
(def w 1024)
; Solution based upon the double-ended queue (this answer)
(time (doall (windowed-min-queue w a)))
;=> "Elapsed time: 1820.526521 msecs"
; Solution based upon the Priority Map (see the other answer, which is also great since it's simpler)
(time (doall (windowed-min w b)))
;=> "Elapsed time: 8290.671121 msecs"
That's over 4x faster, which is great considering that PriorityMap is written in Java while the double-ended queue code is pure Clojure (see https://github.com/pjstadig/deque-clojure).
Including the other wrappers/utilities used on the double-ended queue for reference.
(defn dq-push-front [e dq]
(conj dq e))
(defn dq-push-back [e dq]
(proto/inject dq e))
(defn dq-front [dq]
(first dq))
(defn dq-pop-front [dq]
(pop dq))
(defn dq-pop-back [dq]
(proto/eject dq))
(defn deque-empty? [dq]
(identical? empty-deque dq))
(defn dq-back [dq]
(proto/last dq))
(defn prune-back [a i dq]
(cond
(deque-empty? dq) dq
(< (nth a i) (nth a (dq-back dq))) (recur a i (dq-pop-back dq))
:else dq))
(defn prune-front [i w dq]
(cond
(deque-empty? dq) dq
(<= (dq-front dq) (- i w)) (recur i w (dq-pop-front dq))
:else dq))
My solution uses two auxiliary maps to achieve fast performance. I map the keys (vector positions) to their values, and also store the values with their occurrence counts in a sorted map. Upon each move of the window, I update both maps and read the minimum off the sorted map, all in log time.
The downside is that the code is a lot uglier, not lazy, and not idiomatic. The upside is that it outperforms the priority-map solution by about 2x, although I think a lot of that can be blamed on the laziness of the solution above.
(defn- init-aux-maps [w v]
(let [sv (subvec v 0 w)
km (->> sv (map-indexed vector) (into (sorted-map)))
vm (->> sv frequencies (into (sorted-map)))]
[km vm]))
(defn- update-aux-maps [[km vm] j x]
(let [[ai av] (first km)
km (-> km (dissoc ai) (assoc j x))
vm (if (= (vm av) 1) (dissoc vm av) (update vm av dec))
vm (if (nil? (get vm x)) (assoc vm x 1) (update vm x inc))]
[km vm]))
(defn- get-minimum [[_ vm]] (ffirst vm))
(defn sliding-minimum [w v]
(loop [i 0, j w, am (init-aux-maps w v), acc []]
(let [acc (conj acc (get-minimum am))]
(if (< j (count v))
(recur (inc i) (inc j) (update-aux-maps am j (v j)) acc)
acc))))
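For example, with the input from the question:

(sliding-minimum 2 [1 4 3 2 5 4 2])
;; => [1 3 2 2 4 2]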
I have completed this problem on hackerrank and my solution passes most test cases but it is not fast enough for 4 out of the 11 test cases.
My solution looks like this:
(ns scratch.core
(require [clojure.string :as str :only (split-lines join split)]))
(defn ascii [char]
(int (.charAt (str char) 0)))
(defn process [text]
(let [parts (split-at (int (Math/floor (/ (count text) 2))) text)
left (first parts)
right (if (> (count (last parts)) (count (first parts)))
(rest (last parts))
(last parts))]
(reduce (fn [acc i]
(let [a (ascii (nth left i))
b (ascii (nth (reverse right) i))]
(if (> a b)
(+ acc (- a b))
(+ acc (- b a))))
) 0 (range (count left)))))
(defn print-result [[x & xs]]
(prn x)
(if (seq xs)
(recur xs)))
(let [input (slurp "/Users/paulcowan/Downloads/input10.txt")
inputs (str/split-lines input)
length (read-string (first inputs))
texts (rest inputs)]
(time (print-result (map process texts))))
Can anyone give me any advice about what I should look at to make this faster?
Would using recursion instead of reduce be faster, or maybe this line is expensive:
right (if (> (count (last parts)) (count (first parts)))
(rest (last parts))
(last parts))
Because I am getting a count twice.
You are redundantly calling reverse on every iteration of the reduce:
user=> (let [c [1 2 3]
noisey-reverse #(doto (reverse %) println)]
(reduce (fn [acc e] (conj acc (noisey-reverse c) e))
[]
[:a :b :c]))
(3 2 1)
(3 2 1)
(3 2 1)
[(3 2 1) :a (3 2 1) :b (3 2 1) :c]
The reversed value could be calculated inside the containing let, and would then only need to be calculated once.
Also, due to the way parts is defined, you are doing linear-time lookups with each call to nth. It would be better to put parts into a vector and do indexed lookups. In fact, you wouldn't need a reversed parts at all; you could do arithmetic based on the count of the vector to find the item to look up.
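Putting both suggestions together, a possible rewrite might look like this (a sketch of mine, not the poster's code; the name process-fast is made up): split the string into a vector once and mirror the index from the other end, so there is no reverse and no repeated counting.

(defn process-fast [text]
  (let [v    (vec text)
        n    (count v)
        half (quot n 2)]
    (reduce (fn [acc i]
              (let [a (int (v i))
                    b (int (v (- n 1 i)))]   ; mirror character from the other end
                (+ acc (Math/abs (- a b)))))
            0
            (range half))))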
I wrote a binary search function as part of a larger program, but it seems to be slower than it should be and profiling shows a lot of calls to methods in clojure.lang.Numbers.
My understanding is that Clojure can use primitives when it can determine that it can do so. The calls to the methods in clojure.lang.Numbers seems to indicate that it's not using primitives here.
If I coerce the loop variables to ints, it properly complains that the recur arguments are not primitive. If I coerce those too, the code works again, but again it's slow. My only guess is that (quot (+ low-idx high-idx) 2) is not producing a primitive, but I'm not sure where to go from here.
This is my first program in Clojure so feel free to let me know if there are more cleaner/functional/Clojure ways to do something.
(defn binary-search
[coll coll-size target]
(let [cnt (dec coll-size)]
(loop [low-idx 0 high-idx cnt]
(if (> low-idx high-idx)
nil
(let [mid-idx (quot (+ low-idx high-idx) 2) mid-val (coll mid-idx)]
(cond
(= mid-val target) mid-idx
(< mid-val target) (recur (inc mid-idx) high-idx)
(> mid-val target) (recur low-idx (dec mid-idx))
))))))
(defn binary-search-perf-test
[test-size]
(do
(let [test-set (vec (range 1 (inc test-size))) test-set-size (count test-set)]
(time (count (map #(binary-search test-set test-set-size %) test-set)))
)))
First of all, you can use the binary search implementation provided by java.util.Collections:
(java.util.Collections/binarySearch [0 1 2 3] 2 compare)
; => 2
If you skip the compare, the search will be faster still, unless the collection includes bigints, in which case it'll break.
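For instance (my example, not from the original answer), a vector of longs works directly with the two-argument overload:
(java.util.Collections/binarySearch [0 1 2 3] 2)
; => 2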
As for your pure Clojure implementation, you can hint coll-size with ^long in the parameter vector -- or maybe just ask for the vector's size at the beginning of the function's body (that's a very fast, constant time operation), replace the (quot ... 2) call with (bit-shift-right ... 1) and use unchecked math for the index calculations. With some additional tweaks a binary search could be written as follows:
(defn binary-search
"Finds earliest occurrence of x in xs (a vector) using binary search."
([xs x]
(loop [l 0 h (unchecked-dec (count xs))]
(if (<= h (inc l))
(cond
(== x (xs l)) l
(== x (xs h)) h
:else nil)
(let [m (unchecked-add l (bit-shift-right (unchecked-subtract h l) 1))]
(if (< (xs m) x)
(recur (unchecked-inc m) h)
(recur l m)))))))
This is still noticeably slower than the Java variant:
(defn java-binsearch [xs x]
(java.util.Collections/binarySearch xs x compare))
binary-search as defined above seems to take about 25% more time than this java-binsearch.
In Clojure 1.2.x you can only coerce local variables, and they can't cross function calls.
Starting in Clojure 1.3.0, Clojure can pass primitive numbers across function calls, but not through higher-order functions such as map.
If you are using Clojure 1.3.0+, you should be able to accomplish this using type hints.
As with any Clojure optimization problem, the first step is to turn on (set! *warn-on-reflection* true), then add type hints until it no longer complains.
user=> (set! *warn-on-reflection* true)
true
user=> (defn binary-search
[coll coll-size target]
(let [cnt (dec coll-size)]
(loop [low-idx 0 high-idx cnt]
(if (> low-idx high-idx)
nil
(let [mid-idx (quot (+ low-idx high-idx) 2) mid-val (coll mid-idx)]
(cond
(= mid-val target) mid-idx
(< mid-val target) (recur (inc mid-idx) high-idx)
(> mid-val target) (recur low-idx (dec mid-idx))
))))))
NO_SOURCE_FILE:23 recur arg for primitive local: low_idx is not matching primitive,
had: Object, needed: long
Auto-boxing loop arg: low-idx
#'user/binary-search
user=>
To remove this warning, you can type hint the coll-size argument:
(defn binary-search
[coll ^long coll-size target]
(let [cnt (dec coll-size)]
(loop [low-idx 0 high-idx cnt]
(if (> low-idx high-idx)
nil
(let [mid-idx (quot (+ low-idx high-idx) 2) mid-val (coll mid-idx)]
(cond
(= mid-val target) mid-idx
(< mid-val target) (recur (inc mid-idx) high-idx)
(> mid-val target) (recur low-idx (dec mid-idx))
))))))
It is understandably difficult to connect the auto-boxing warning to the coll-size parameter, because it propagates through cnt, then high-idx, then mid-idx, and so on. I generally approach these problems by type-hinting everything until I find the hint that makes the warnings go away, then removing hints as long as they stay gone.
As a neophyte Clojurian, it was recommended to me that I go through the Project Euler problems as a way to learn the language. It's definitely a great way to improve your skills and gain confidence. I just finished up my answer to problem #14. It works fine, but to get it running efficiently I had to implement some memoization. I couldn't use the prepackaged memoize function because of the way my code was structured, and I think it was a good experience to roll my own anyway. My question is whether there is a good way to encapsulate my cache within the function itself, or if I have to define an external cache like I have done. Also, any tips to make my code more idiomatic would be appreciated.
(use 'clojure.test)
(def mem (atom {}))
(with-test
(defn chain-length
([x] (chain-length x x 0))
([start-val x c]
(if-let [e (last (find @mem x))]
(let [ret (+ c e)]
(swap! mem assoc start-val ret)
ret)
(if (<= x 1)
(let [ret (+ c 1)]
(swap! mem assoc start-val ret)
ret)
(if (even? x)
(recur start-val (/ x 2) (+ c 1))
(recur start-val (+ 1 (* x 3)) (+ c 1)))))))
(is (= 10 (chain-length 13))))
(with-test
(defn longest-chain
([] (longest-chain 2 0 0))
([c max start-num]
(if (>= c 1000000)
start-num
(let [l (chain-length c)]
(if (> l max)
(recur (+ 1 c) l c)
(recur (+ 1 c) max start-num))))))
(is (= 837799 (longest-chain))))
Since you want the cache to be shared between all invocations of chain-length, you would write chain-length as (let [mem (atom {})] (defn chain-length ...)) so that mem is only visible to chain-length.
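Concretely, applying that suggestion to the question's code might look like this (an untested sketch reusing the original logic, minus the with-test wrapper):

(let [mem (atom {})]
  (defn chain-length
    ([x] (chain-length x x 0))
    ([start-val x c]
     (if-let [e (get @mem x)]
       (let [ret (+ c e)]
         (swap! mem assoc start-val ret)
         ret)
       (if (<= x 1)
         (let [ret (+ c 1)]
           (swap! mem assoc start-val ret)
           ret)
         (if (even? x)
           (recur start-val (/ x 2) (+ c 1))
           (recur start-val (+ 1 (* x 3)) (+ c 1))))))))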
In this case, since the longest chain is sufficiently small, you could define chain-length using the naive recursive method and use Clojure's builtin memoize function on that.
Here's an idiomatic(?) version using plain old memoize.
(def chain-length
(memoize
(fn [n]
(cond
(== n 1) 1
(even? n) (inc (chain-length (/ n 2)))
:else (inc (chain-length (inc (* 3 n))))))))
(defn longest-chain [start end]
(reduce (fn [x y]
(if (> (second x) (second y)) x y))
(for [n (range start (inc end))]
[n (chain-length n)])))
If you have an urge to use recur, consider map or reduce first. They often do what you want, and sometimes do it better/faster, since they take advantage of chunked seqs.
(inc x) is like (+ 1 x), but inc is about twice as fast.
You can capture the surrounding environment in a closure:
(defn my-memoize [f]
  (let [cache (atom {})]
    (fn [x]
      (let [cy (get @cache x)]
        (if (nil? cy)
          (let [fx (f x)]
            (reset! cache (assoc @cache x fx))
            fx)
          cy)))))
(defn mul2 [x] (do (print "Hello") (* 2 x)))
(def mmul2 (my-memoize mul2))
user=> (mmul2 2)
Hello4
user=> (mmul2 2)
4
You see the mul2 function is only called once.
So the cache is captured by the closure and can be used to store the values.