How to parallelize Clojure keep function? - clojure

I'm trying to parallelize the function below. I refactored this from a for statement and implemented pmap to speed up reading the xml data, which went well. The next bottleneck is in my keep statement. How can I improve performance here?
I've tried (keep #(when (pmap #(later-date? (second %) after) zip) [(first %) (second %)]) zip) but nested #() functions are not allowed. I've also tried wrapping the keep as well as the call to raw-url-data in a future but dereferencing either in the calling function produces nil.
(defn- raw-url-data
  "Parse xmlzip data and return a sequence of URLs/modtime vectors."
  [data after]
  (let [article (xz/xml-> data :url)
        loc     (pmap #(-> (xz/xml-> % :loc xz/text) first) article)
        mod     (pmap #(-> (xz/xml-> % :lastmod xz/text) first
                           parse-modtime) article)
        zip     (zipmap loc mod)]
    (keep #(when (later-date? (second %) after)
             [(first %) (second %)]) zip)))
And here is my later-date? function:
(defn- later-date?
  "Return TRUE if DATETIME is after AFTER or either one is NIL."
  [datetime after]
  (or (nil? datetime)
      (nil? after)
      (time/after? datetime after)))

With this type of problem, it can be tricky to get the time spent splitting the data up for parallel processing, and then putting it back together, below the time needed to process it sequentially.
In the problem above, if I'm interpreting it correctly, you are generating two sequences of data, each in parallel. These sequences can't communicate with each other during this process to see if they have a later date. Once all of the data for both sequences is finished, you form it into a map, and then split that map back into a sequence and start processing it.
The first pair of dates, (first loc) and (first mod), will be sitting for quite a while before they can be compared to see if they should go into the final result. So the best speedup may come from simply removing the call to zipmap.
time/after? is very fast, so you will almost certainly lose time by calling pmap here, though it's good to know how to do it anyway. You can get around the anonymous-function macro's inability to handle nested anonymous functions by making one of them a call to fn, like so:
(keep (fn [x]
        (when (pmap #(later-date? (second %) after) zip)
          [(first x) (second x)]))
      zip)
Another approach is to:
1. break the data into partitions,
2. do all the processing on each partition, and
3. merge the partitions back together.
Then adjust the partition size until you see a benefit over the splitting costs; a sketch of this shape follows below.
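As a rough illustration, here is a minimal sketch of that partition-based approach. This is my code, not the answer's: parallel-keep and the partition size 512 are made-up names/values to show the shape of the idea.

(defn parallel-keep
  "Run keep over fixed-size partitions in parallel, then merge the results."
  [f coll]
  (->> (partition-all 512 coll)      ; split the data into chunks of work
       (pmap #(doall (keep f %)))    ; realize each chunk on its own thread
       (apply concat)))              ; merge the partitions back together

The original call would then become (parallel-keep #(when (later-date? (second %) after) [(first %) (second %)]) zip), and 512 is the knob to tune against the splitting costs.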
This has been discussed here and here.

Related

How to implement a parallel logical or with early termination in Clojure

I would like to define a predicate that, taking as input some predicates
with corresponding inputs (they could be given as a lazy sequence of calls),
runs them in parallel and computes the logical or of the results,
in such a way that, the moment a predicate call terminates returning true,
the whole computation also terminates (returning true).
Apart from offering time optimization, this would also help avoiding
non-termination in some cases (some predicate calls may not terminate).
Actually, interpreting non-termination as a third undefined value,
this predicate simulates the or operation in Kleene's K3 logic
(the join in the initial centered Kleene algebra).
Something similar is presented here for the Haskell family.
Is there any (preferably simple) way to do this in Clojure?
EDIT: I decided to add some clarifications after reading the comments.
(a) First of all, what happens after the thread pool gets exhausted is of less importance. I think creating a thread pool large enough for our needs is a reasonable convention.
(b) The most crucial requirement is that the predicate calls start running in parallel and, once a predicate call terminates returning true, all the other threads running get interrupted. The intended behavior is that:
if there is a predicate call returning true: the parallel or returns true
else if there is a predicate call that does not terminate: the parallel or does not terminate
else: the parallel or returns false
In other words, it behaves like the join in the 3-element lattice given by false<undefined<true, with undefined representing non-termination.
(c) The parallel or should be able to take as input many predicates and many predicate-inputs (each one corresponding to a predicate). But it would be even better if it took as input a lazy sequence. Then, naming the parallel or pany (for "parallel any"), we could have calls like the following:
(pany (map (comp eval list) predicates inputs))
(pany (map (comp eval list) predicates (repeat input)))
(pany (map (comp eval list) (repeat predicate) inputs)) which is equivalent to (pany (map predicate (unchunk inputs)))
As a final remark, I think that it is quite natural to ask for things like pany, a dual pall or a mechanism for building such early-terminating parallel reductions to be easily implementable or even built-in in a parallelism-oriented language like Clojure.
I will define our predicates in terms of a reducing function. Practically, we could reimplement all of the Clojure iteration functions to support this parallel operation, but I'll just use reduce as an example.
I'll define a computation function. I'll just use the same one for every predicate, but nothing stops you from having many. The function counts as "true" once it accumulates more than 1000.
(defn computor [acc val]
  (let [new (+' acc val)]
    (if (> new 1000) (reduced new) new)))

(reduce computor 0 (range))
;; => 1035

(reduce computor 0 (range Long/MIN_VALUE 0))
;; => never returns; this is a proxy for a non-terminating computation

;; wrap these up in a form suitable for application of reduction
(def predicates [[computor 0 (range)]
                 [computor 0 (range Long/MIN_VALUE 0)]])
Now let's get to the meat of this. I want to take a step in each computation, and if one of the computations completes, I want to return it. In actual fact one step at a time using pmap is very slow - the units of work are too small to be worth threading. Here I've changed things to do 1000 iterations of each unit of work before moving on. You'd probably tune this based on your workload and the cost of a step.
(defn p-or-reducer* [reductions]
  (let [splits   (map #(split-at 1000 %) reductions) ;; do at least 1000 iterations per cycle
        complete (some #(if (empty? (second %)) (last (first %))) splits)]
    (or complete (recur (map second splits)))))
I then wrap this in a driver.
(defn p-or [s]
  (p-or-reducer* (map #(apply reductions %) s)))

(p-or predicates)
;; => 1035
Where to insert the CPU parallelism? s/map/pmap/ in p-or-reducer* should do it. I suggest just parallelising the first operation, as this will drive the reducing sequences to compute.
(defn p-or-reducer* [reductions]
  (let [splits   (pmap #(split-at 1000 %) reductions) ;; do at least 1000 iterations per cycle
        complete (some #(if (empty? (second %)) (last (first %))) splits)]
    (or complete (recur (map second splits)))))
(def parallelism-tester (conj (vec (repeat 40000 [computor 0 (range Long/MIN_VALUE 0)]))
                              [computor 0 (range)]))

(p-or parallelism-tester) ;; terminates even though the first 40K predicates will not
It's extremely hard to define a performant generic version of this. Without knowing the cost per iteration, an efficient parallelism strategy is hard to derive: if one iteration takes 10s, then we'd probably take a single step at a time; if it takes 100ns, then we need to take many steps at a time.
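To make that tuning concrete, here is a hedged sketch (my variant, not the answer's code) that threads the step size through the reducer as a parameter:

;; Hypothetical variant: step controls how many iterations each predicate
;; advances per cycle before checking for completion.
(defn p-or-reducer* [step reductions]
  (let [splits   (pmap #(split-at step %) reductions)
        complete (some #(when (empty? (second %)) (last (first %))) splits)]
    (or complete (recur step (map second splits)))))

A workload with expensive iterations would call this with a small step (even 1); a workload of cheap iterations wants a step in the thousands.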
Would you consider adopting core.async to handle parallel tasks with async/go or async/thread, and early return with async/alts!?
For example, to turn the core or function from serial into parallel, we can create a macro (I called it por) that wraps the input functions (or predicates) in async/thread and then does a socket-select-style async/alts!! on top of them:
(require '[clojure.core.async :as async])

(defmacro por [& fns]
  `(let [[v# c#] (async/alts!!
                   [~@(for [f fns]
                        (list `async/thread f))])]
     v#))
(time
  (por (do (println "running a") (Thread/sleep 30) :a)
       (do (println "running b") (Thread/sleep 20) :b)
       (do (println "running c") (Thread/sleep 10) :c)))
;; running a
;; running b
;; running c
;; "Elapsed time: 11.919169 msecs"
;; => :c
In comparison with the original or (which runs serially):
(time
  (or (do (println "running a") (Thread/sleep 30) :a)
      (do (println "running b") (Thread/sleep 20) :b)
      (do (println "running c") (Thread/sleep 10) :c)))
;; running a
;; "Elapsed time: 31.642506 msecs"
;; => :a

clojure "look-and say" sequence

Working on some 2015 AoC problems to learn Clojure... The code below was quick enough for the 40th iteration, but crawls to a halt much after that. I compared it to a few other people's solutions and it's not immediately obvious to me why mine is so slow. I used recur believing it to be about as efficient as a loop (and to avoid stack consumption). One thing I'm not 100% clear on is whether there is a significant difference between just using recur versus using loop/recur. I tested it both ways and saw no difference.
(def data "3113322113")

(defn encode-string [data results count]
  (let [prev (first data)
        curr (second data)]
    (cond (empty? data) results
          (not= prev curr) (recur (rest data) (str results count prev) 1)
          :else (recur (rest data) results (inc count)))))

(count
  (nth (iterate #(encode-string % "" 1) data) 40 #_50))
An example of a solution I benchmarked against is Bruce Hauman's, which is really nice:
(defn count-encode [x]
  (apply str
         (mapcat (juxt count first)
                 (partition-by identity x))))
I realize in my solution I am iterating over very large strings repeatedly, but I don't see how Bruce's is so much faster since although he is not explicitly iterating, partition is probably iterating behind the scenes.
Your version is computing something like
(str "11" (str "22" (str "31" ...)))
which is building up a brand-new String object for every two characters. Since this involves iterating over every character in the input and output strings at each step, your operation is quadratic in the length of the string.
The solution you're comparing to is different: it builds up a lazy sequence of integers, which is a linear-time process. Then, it does something like
(apply str [1 1 2 2 3 1])
which is the same as
(str 1 1 2 2 3 1 ...)
and str, when called with multiple arguments, uses a StringBuilder to efficiently build up the result incrementally, an optimization that is not available if you demand a full-fledged String object at every intermediate step. As a result, the whole process is linear-time, rather than quadratic.
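To make the fix concrete, here is a hedged sketch of the asker's encode-string rewritten to accumulate pieces in a vector and build the String once at the end (the vector accumulator and the cnt rename are my changes, not code from the answer):

(defn encode-string [data pieces cnt]
  (let [prev (first data)
        curr (second data)]
    (cond (empty? data) (apply str pieces)   ; build the String exactly once
          (not= prev curr) (recur (rest data) (conj pieces cnt prev) 1)
          :else (recur (rest data) pieces (inc cnt)))))

;; called with an empty vector instead of an empty string:
(count (nth (iterate #(encode-string % [] 1) data) 40))

Appending to a vector is effectively constant-time per piece, so the whole pass becomes linear, just like the apply str approach above.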

How to forget head(GC'd) for lazy-sequences in Clojure?

Let's say I have a huge lazy seq and I want to iterate over it so I can process the data I get during the iteration.
The thing is, I want the head of the lazy seq (the part already processed) to be lost (GC'd), so I can work on seqs with millions of elements without an OutOfMemoryError.
I have 3 examples that I'm not sure about.
Could you provide best practices (examples) for that purpose?
Do these functions lose the head?
Example 1
(defn lose-head-fn
  [lazy-seq-coll]
  (when (seq (take 1 lazy-seq-coll))
    (do
      ;; do some processing... (take 10000 lazy-seq-coll)
      (recur (drop 10000 lazy-seq-coll)))))
Example 2
(defn lose-head-fn
  [lazy-seq-coll]
  (loop [i lazy-seq-coll]
    (when (seq (take 1 i))
      (do
        ;; do some processing... (take 10000 i)
        (recur (drop 10000 i))))))
Example 3
(doseq [i lazy-seq-coll]
  ;; do some processing...
  )
Update: there is also an explanation in this answer here.
copy of my above comments
As far as I know, all of the above lose the head (the first two are obvious, since you manually drop the head, while doseq's doc states that it doesn't retain the head).
That means that if the lazy-seq-coll you pass to the function isn't bound somewhere else with def or let and used later, there should be nothing to worry about. So (lose-head-fn (range)) won't eat all your memory, while
(def r (range))
(lose-head-fn r)
probably would.
And the only best practice I could think of is not to def possibly infinite (or just huge) sequences, because all of their realized items would live forever in the var.
In general, you must be careful not to retain a reference, either locally or globally, to a part of a lazy seq that precedes another part whose realization involves excessive computation.
For example:
(let [nums (range)
      first-ten (take 10 nums)]
  (+ (last first-ten) (nth nums 100000000)))
;; => 100000009
This takes about 2 seconds on a modern machine. How about this though? The difference is the last line, where the order of arguments to + is swapped:
;; Don't do this!
(let [nums (range)
      first-ten (take 10 nums)]
  (+ (nth nums 100000000) (last first-ten)))
You'll hear your chassis/cpu fans come to life, and if you're running htop or similar, you'll see memory usage grow rather quickly (about 1G in the first several seconds for me).
What's going on?
Much like a linked list, elements in a lazy seq in clojure reference the portion of the seq that comes next. In the second example above, first-ten is needed for the second argument to +. Thus, even though nth is happy to hold no references to anything (after all, it's just finding an index in a long list), first-ten refers to a portion of the sequence that, as stated above, must hold onto references to the rest of the sequence.
The first example, by contrast, computes (last first-ten), and after this, first-ten is no longer used. Now the only reference to any portion of the lazy sequence is nums. As nth does its work, each portion of the list that it's finished with is no longer needed, and since nothing else refers to the list in this block, as nth walks the list, the memory taken by the sequence that has been examined can be garbage collected.
Consider this:
;; Don't do this!
(let [nums (range)]
  (time (nth nums 1e8))
  (time (nth nums 1e8)))
Why does this have a similar result to the second example above? Because the sequence is cached (held in memory) on its first realization (the first (time (nth nums 1e8))), since nums is used again on the next line. If, instead, we use a different sequence for the second nth, then there is no need to cache the first one, so it can be discarded as it's processed:
(let [nums (range)]
  (time (nth nums 1e8))
  (time (nth (range) 1e8)))
;; "Elapsed time: 2127.814253 msecs"
;; "Elapsed time: 2042.608043 msecs"
So as you work with large lazy seqs, consider whether anything is still pointing to the list, and if anything is (global vars being a common one), then it will be held in memory.

In Clojure, are lazy seqs always chunked?

I was under the impression that the lazy seqs were always chunked.
=> (take 1 (map #(do (print \.) %) (range)))
(................................0)
As expected, 32 dots are printed, because the lazy seq returned by range is chunked into 32-element chunks. However, when I try this with my own function get-rss-feeds instead of range, the lazy seq is no longer chunked:
=> (take 1 (map #(do (print \.) %) (get-rss-feeds r)))
(."http://wholehealthsource.blogspot.com/feeds/posts/default")
Only one dot is printed, so I guess the lazy-seq returned by get-rss-feeds is not chunked. Indeed:
=> (chunked-seq? (seq (range)))
true
=> (chunked-seq? (seq (get-rss-feeds r)))
false
Here is the source for get-rss-feeds:
(defn get-rss-feeds
  "returns a lazy seq of urls of all feeds; takes an html-resource from the enlive library"
  [hr]
  (map #(:href (:attrs %))
       (filter #(rss-feed? (:type (:attrs %)))
               (html/select hr [:link]))))
So it appears that chunkiness depends on how the lazy seq is produced. I peeked at the source for the function range and there are hints of it being implemented in a "chunky" manner. So I'm a bit confused as to how this works. Can someone please clarify?
Here's why I need to know.
I have to following code: (get-rss-entry (get-rss-feeds h-res) url)
The call to get-rss-feeds returns a lazy sequence of URLs of feeds that I need to examine.
The call to get-rss-entry looks for a particular entry (whose :link field matches the second argument of get-rss-entry). It examines the lazy sequence returned by get-rss-feeds. Evaluating each item requires an http request across the network to fetch a new rss feed. To minimize the number of http requests it's important to examine the sequence one-by-one and stop as soon as there is a match.
Here is the code:
(defn get-rss-entry
  [feeds url]
  (ffirst (drop-while empty? (map #(entry-with-url % url) feeds))))
entry-with-url returns a lazy sequence of matches or an empty sequence if there is no match.
I tested this and it seems to work correctly (evaluating one feed url at a time). But I am worried that somewhere, somehow it will start behaving in a "chunky" way and it will start evaluating 32 feeds at a time. I know there is a way to avoid chunky behavior as discussed here, but it doesn't seem to even be required in this case.
Am I using lazy seq non-idiomatically? Would loop/recur be a better option?
You are right to be concerned. Your get-rss-entry will indeed call entry-with-url more than strictly necessary if the feeds parameter is a collection that returns chunked seqs. For example if feeds is a vector, map will operate on whole chunks at a time.
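You can see this chunked behavior at the REPL (my example; vec forces a chunked source):

;; map over a vector yields a chunked seq: taking just the first element
;; realizes (and prints) a whole 32-element chunk.
(first (map #(do (print \.) %) (vec (range 100))))
;; ................................ => 0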
This problem is addressed directly in Fogus' Joy of Clojure, with the function seq1 defined in chapter 12:
(defn seq1 [s]
  (lazy-seq
    (when-let [[x] (seq s)]
      (cons x (seq1 (rest s))))))
You could use this right where you know you want the most laziness possible, right before you call entry-with-url:
(defn get-rss-entry
  [feeds url]
  (ffirst (drop-while empty? (map #(entry-with-url % url) (seq1 feeds)))))
Lazy seqs are not always chunked - it depends on how they are produced.
For example, the lazy seq produced by this function is not chunked:
(defn integers-from [n]
  (lazy-seq (cons n (do (print \.) (integers-from (inc n))))))

(take 3 (integers-from 3))
;; => (..3 .4 5)
But many other Clojure built-in functions do produce chunked seqs for performance reasons (e.g. range).
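Continuing with integers-from above, a quick REPL check (my example) confirms the difference:

(chunked-seq? (seq (range 10)))
;; => true
(chunked-seq? (seq (integers-from 3)))
;; prints one dot as the first cons is realized
;; => false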
Depending on the vagaries of chunking seems unwise, as you mention above. Explicitly "un-chunking" in cases where you really need the seq not to be chunked is also wise, because then if at some other point your code changes in a way that chunkifies it, things won't break. On another note, if you need the actions to be sequential, agents are a great tool: you could send the download functions to an agent, and they would be run one at a time and only once, regardless of how you evaluate the function. At some point you may want to pmap your sequence, and then even un-chunking will not work, though using an atom will continue to work correctly.
I have discussed this recently in Can I un-chunk lazy sequences to realize one element at a time? and the conclusion is that if you need to control when items are produced/consumed, you should not use lazy sequences.
For processing you can use transducers, where you control when the next item is processed.
For producing the elements, the ideal approach is to reify ISeq. A practical approach is to use lazy-seq with a single cons call in it whose rest is a recursive call. But notice that this relies on an implementation detail of lazy-seq.
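To illustrate the transducer route, here is a hedged sketch (my code, not from the linked answer) of the asker's get-rss-entry written with halt-when, so entry-with-url is applied one element at a time and consumption stops at the first non-empty result:

(defn get-rss-entry
  [feeds url]
  (first
    (transduce (comp (map #(entry-with-url % url)) ; one element per step
                     (halt-when seq))              ; stop on the first non-empty match seq
               (constantly nil)                    ; reducing fn, only reached when nothing matches
               nil
               feeds)))

halt-when returns the triggering input as the result of the transduce, so first pulls the matching entry out of it; with no match, the call returns nil.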

build set lazily in clojure

I've started to learn clojure but I'm having trouble wrapping my mind around certain concepts. For instance, what I want to do here is to take this function and convert it so that it calls get-origlabels lazily.
(defn get-all-origlabels []
(set (flatten (map get-origlabels (range *song-count*)))))
My first attempt used recursion but blew up the stack (song-count is about 10,000). I couldn't figure out how to do it with tail recursion.
get-origlabels returns a set each time it is called, but values are often repeated between calls. What the get-origlabels function actually does is read a file (a different file for each value from 0 to song-count-1) and return the words stored in it in a set.
Any pointers would be greatly appreciated!
Thanks!
-Philip
You can get what you want by using mapcat. I believe putting it into an actual Clojure set is going to de-lazify it, as demonstrated by the fact that (take 10 (set (iterate inc 0))) tries to realize the whole set before taking 10.
(distinct (mapcat get-origlabels (range *song-count*)))
This will give you a lazy sequence. You can verify that by doing something similar, starting with an infinite sequence:

(->> (iterate inc 0)
     (mapcat vector)
     (distinct)
     (take 10))
You'll end up with a seq, rather than a set, but since it sounds like you really want laziness here, I think that's for the best.
This may have better stack behavior:

(require '[clojure.set :refer [union]])

(defn get-all-origlabels []
  (reduce (fn [s x] (union s (get-origlabels x))) #{} (range *song-count*)))
I'd probably use something like:
(into #{} (mapcat get-origlabels (range *song-count*)))
In general, "into" is very helpful when constructing Clojure data structures. I have this mental image of a conveyor belt (a sequence) dropping a bunch of random objects into a large bucket (the target collection).