What is the purpose of clojure.core.reducers/reduce?

A slightly modified version of reduce was introduced with reducers, clojure.core.reducers/reduce (short r/reduce):
(defn reduce
  ([f coll]
   (reduce f (f) coll))
  ([f init coll]
   (if (instance? java.util.Map coll)
     (clojure.core.protocols/kv-reduce coll f init)
     (clojure.core.protocols/coll-reduce coll f init))))
r/reduce differs from its core sibling in only two ways: it uses (f) as the initial value when none is provided, and for maps it delegates to the kv-reduce protocol (the machinery behind clojure.core/reduce-kv).
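For example (a small sketch to illustrate the map branch), the reducing function for a map is called reduce-kv-style, with accumulator, key, and value:
(require '[clojure.core.reducers :as r])
;; the reducing fn receives [acc k v], exactly as with reduce-kv
(r/reduce (fn [acc k v] (assoc acc v k)) {} {:a 1, :b 2})
;=> {1 :a, 2 :b}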
I don’t understand what use such an odd special-purpose reduce might be and why it was worth including in the reducers library.
Curiously, r/reduce is not mentioned in the two introductory blog posts as far as I can tell (first, second). The official documentation notes
In general most users will not call r/reduce directly and instead should prefer r/fold (...) However, it may be useful to execute an eager reduce with fewer intermediate results.
I’m unsure what that last sentence hints at.
What situations can r/reduce handle that the core reduces cannot? When would I reach for r/reduce with conviction?

Two possible reasons:
It has different – better! – semantics than clojure.core/reduce in the init-less case. During his 2014 Conj presentation Rich Hickey asked "who knows what the semantics of reduce are when you call it with a collection and no initial value?" – follow this link for the exact spot in the presentation – and then described those semantics as "a ridiculous, complex rule" and "one of the worst things [he] ever copied from Common Lisp" – cf. Common Lisp's reduce contract. The presentation was about transducers, and the context of the remark was a discussion of transduce, which has a superior, simpler contract; r/reduce does as well (see the sketch after these two points).
Even without considering the above, it's sort of nice to have a version of reduce with a contract very close to that of fold. That enables simple "try one, try the other" benchmarking with the same arguments, as well as simply changing one's mind.
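A minimal sketch of the contract difference, using nothing but stock + and -:
(require '[clojure.core.reducers :as r])
;; with + the two agree, because (+) => 0 is a genuine identity
(reduce   + [1 2 3])   ;=> 6
(r/reduce + [1 2 3])   ;=> 6
;; with - they diverge: core reduce falls back on the Common-Lisp-style
;; rule (seed with the first element), while r/reduce always seeds with
;; (f), and (-) has no zero-arity, so it throws
(reduce   - [1 2 3])   ;=> -4  (i.e. (- (- 1 2) 3))
(r/reduce - [1 2 3])   ;; throws clojure.lang.ArityException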

Related

Clojure loop/recur pattern, is it bad to use?

I'm in the process of learning Clojure, and I'm using 4Clojure as a resource. I can solve many of the "easy" questions on the site, but thinking in a functional-programming mindset still doesn't come naturally to me (I'm coming from Java). As a result, I use a loop/recur iterative pattern in most of my seq-building implementations, because that's how I'm used to thinking.
However, when I look at the answers from more experienced Clojure users, they do things in a much more functional style. For example, in a problem about implementing the range function, my answer was the following:
(fn [start limit]
  (loop [x start, y limit, output '()]
    (if (< x y)
      (recur (inc x) y (conj output x))
      (reverse output))))
While this worked, other users did things like this:
(fn [x y] (take (- y x) (iterate inc x)))
My function is more verbose, and I had no idea the iterate function even existed. But was my answer worse in an efficiency sense? Is loop/recur somehow worse to use than the alternatives? I fear this sort of thing is going to happen a lot to me in the future, as there are still many functions like iterate that I don't know about.
The second variant returns a lazy sequence, which may indeed be more efficient, especially if the range is big.
The other thing is that the second solution conveys the idea better. To put it differently, it describes the intent instead of implementation. It takes less time to understand it as compared to your code, where you have to read through the loop body and build a model of control flow in your head.
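As a small illustration of the laziness point (a sketch with an arbitrarily large bound): the lazy version realizes only what is consumed, whereas the loop/recur version builds the entire list before returning anything:
(def lazy-range (fn [x y] (take (- y x) (iterate inc x))))
;; only the first three elements are ever realized
(take 3 (lazy-range 0 100000000)) ;=> (0 1 2)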
Regarding the discovery of new functions: yes, you may not know in advance that some function already exists. It is easier in, say, Haskell, where you can search for a function by its type signature, but with some experience you will learn to recognize functional-programming patterns like this one. You will write code like the second variant, and then look for something that works like take and iterate in the standard library.
Bookmark the Clojure Cheatsheet website, and always keep a browser tab open to it.
Study all of the functions, and especially read the examples they link to (the http://clojuredocs.org website).
The site http://clojure-doc.org is also very useful (yes, the two names are almost identical, but not quite).
The question should not be about performance (it depends!) but about communication: when you use loop/recur, plain recursion, lazy-seq, or sometimes even reduce, you make your code harder to understand, because the reader has to work out how you perform your iteration before getting to what you are computing.
loop/recur is real Clojure, and idiomatic. It's there for a reason, and often there is no better way. But many people find that, once they get used to it, it's very convenient to build many functions out of building blocks such as iterate; Clojure has a very nice collection of them. I started out writing things from scratch, using truly recursive algorithms and then loop/recur. Personally, I wouldn't claim that it's better to use the functional building-block functions, but I've come to love using them. It's one of the things that's great about Clojure.
(Yes, many of the building-block functions are lazy, as are, e.g., for and map, which are more general-purpose. Laziness can be good, but I'm not religious about it. Sometimes it's more efficient, sometimes it's not. Sometimes it's beautiful, sometimes it's a pain in the rear. Sometimes all of the above.)
loop and recur are not bad – in fact, if you look at the source code of many of the built-in functions, you will find that is what they do; the provided functions are often an abstraction of common patterns which can make your code easier to understand.
How you are doing things is typical for many people when they first start, and how you are approaching this seems correct to me: you are not just writing your solution and moving on, you are writing your solution, looking at how others have solved the same problem, and making a comparison. This is the right road to improvement. I highly recommend that when you find an alternative solution which seems more elegant, efficient, or clear, you analyse it and look at the source code of the built-in functions it uses; things will slowly come together.
loop ... recur is an optimisation for recursive tail calls, and should always be used where it applies.
range is lazy, so your version of it should strive to be so. loop ... recur can't do this.
All the sequence functions that can sensibly be lazy (iterate, filter, map, take-while ...) are so. As you know, you can use some of these to build a lazy range. As #cgrand explains, this is the preferred approach.
If you prefer, you can build a lazy range from scratch:
If you prefer, you can build a lazy range from scratch:
(defn range [x y]   ; note: shadows clojure.core/range
  (lazy-seq
    (when (< x y)
      (cons x (range (inc x) y)))))
I wondered the same thing for some days, but honestly, many times I do not see any better alternative than loop/recur.
Some jobs are not fully "reduce" or "map". That is the case when you update data based on a buffer that you mutate at every iteration.
loop/recur is very convenient where "non-linear, precise work" is required. It looks more imperative, but if I remember well, Clojure was designed with pragmatism in mind, and pragmatism means choosing whatever is most efficient.
That is why, in complex programs, I use Clojure and Java code mixed. Sometimes Java is just clearer for "low-level" or iterative jobs, like extracting a specific value, while I find Clojure functions more useful for big-data processing (without so much detail: global filters, etc.).
Some people say that we must stick with Clojure as much as possible, but I do not see any reason not to use Java. I have not programmed a lot, but Clojure/Java is the best interop I have ever seen; the two approaches are very complementary.

Could core.async have implemented its functions in terms of sequences?

Rich Hickey's Strange Loop transducers presentation tells us that there are two implementations of map in Clojure 1.6, one for sequences in clojure.core and one for channels in core.async.
Now we know that in 1.7 we have transducers, for which a foldr (reduce) function is returned from higher-order functions like map and filter when given a function but not a collection.
What I'm trying to articulate and failing, is why core.async functions can't return a sequence, or be Seq-like. I have a feeling that the 'interfaces' (protocols) are different but I can't see how.
Surely if you're taking the first item off a channel then you can represent that as taking the first item off a sequence?
My question is: Could core.async have implemented its functions in terms of sequences?
Yes, in one sense they could have been. If you ignore go blocks (for the moment let's do so), then there's really nothing wrong with something like the following:
(require '[clojure.core.async :refer [<!!]])

(defn chan-seq [ch]
  (when-some [v (<!! ch)]   ; blocking take; nil once ch is closed and drained
    (cons v (lazy-seq (chan-seq ch)))))
But notice here the <!! call. This is called "take blocking": inside this function are some promises and locks that will cause the currently executing thread to halt until a value is available on the channel. So this would work fine if you don't mind having a Java thread sitting there doing nothing.
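For illustration, a hypothetical way to exercise chan-seq (the channel and values are made up):
(require '[clojure.core.async :refer [chan >!! close!]])
(def c (chan 3))   ; buffered, so the puts below don't block
(>!! c 1)
(>!! c 2)
(close! c)         ; once closed and drained, <!! returns nil
(chan-seq c)       ;=> (1 2)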
The idea behind go blocks is to make logical processes much cheaper; to accomplish this, the go macro rewrites the body of the block into a series of callbacks that are attached to the channel, so that internally a call to <! inside a go block gets turned into something like (take! c k), where k is a callback representing the rest of the go block.
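As a rough sketch (assuming core.async's go, <!, and take! are referred; the channel c and the body are stand-ins), the rewrite turns a parking take into a registered callback:
;; what you write inside a go block:
(go (let [v (<! c)]
      (println "got" v)))
;; is conceptually rewritten into something like:
(take! c (fn [v]            ; k: the compiled "rest of the go block"
           (println "got" v)))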
Now if we had true continuations, or if the JVM supported lightweight threads, then yes, we could combine go-blocks and blocking takes. But this currently involves either deep bytecode rewriting (like the Pulsar/Quasar project does) or some non-standard JVM feature. Both of those options were ruled out in the creation of core.async in favor of the much simpler to implement (and hopefully much simpler to reason about) local go block transformation.

Were transducers in the Reducers library in Clojure 1.5 all along?

I heard a comment made today:
"Tranducers were there all along, they came with the reducers in 1.5"
Indeed - Richs's Anatomy of a Reducer blog entry, bears remarkable resemblance to the logic used in his Strange Loop Transducers talk. (Replace 'transformers' with 'transducers').
My question is: Were transducers in the Reducers library in Clojure 1.5 all along?
Pointy is correct; the idea was there, though it was not accessible as its own thing. Specifically, map, filter, reduce, etc. were not yet capable of producing a transducer, and into, chan, and sequence were not available to consume them, so in my opinion it is safe to say that transducers were not present in Clojure < 1.7.0.
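To make the distinction concrete, here is a hedged sketch (assuming Clojure 1.7+ for the transducer arity): the transducer composes with into, sequence, transduce, and core.async channels, while the 1.5-era reducers version only transforms what r/reduce and r/fold consume:
(require '[clojure.core.reducers :as r])
;; Clojure 1.7+: the 1-arity of map yields a transducer
(into [] (map inc) [1 2 3])          ;=> [2 3 4]
;; Reducers (since 1.5): r/map curries to a transformed reducible,
;; consumable only through r/reduce, r/fold, r/foldcat and friends
(r/reduce + 0 (r/map inc [1 2 3]))   ;=> 9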

Clojure closure efficiency?

Quite often, I swap! an atom value using an anonymous function that uses one or more external values in calculating the new value. There are two ways to do this, one with what I understand is a closure and one not, and my question is which is the better / more efficient way to do it?
Here's a simple made-up example -- adding a variable numeric value to an atom -- showing both approaches:
(def my-atom (atom 0))

(defn add-val-with-closure [n]
  (swap! my-atom
         (fn [curr-val]
           ;; we pull 'n' from outside the scope of the function,
           ;; asking the compiler to do some magic to make this work
           (+ curr-val n))))

(defn add-val-no-closure [n]
  (swap! my-atom
         (fn [curr-val val-to-add]
           ;; we bring 'n' into the scope of the function as the second
           ;; function parameter, so no closure is needed
           (+ curr-val val-to-add))
         n))
This is a made-up example, and of course, you wouldn't actually write this code to solve this specific problem, because:
(swap! my-atom + n)
does the same thing without any need for an additional function.
But in more complicated cases you do need a function, and then the question arises. For me, the two ways of solving the problem are of about equal complexity from a coding perspective. If that's the case, which should I prefer? My working assumption is that the non-closure method is the better one (because it's simpler for the compiler to implement).
There's a third way to solve the problem, which is not to use an anonymous function. If you use a separate named function, then you can't use a closure and the question doesn't arise. But inlining an anonymous function often makes for more readable code, and I'd like to leave that pattern in my toolkit.
Thanks!
edit in response to A. Webb's answer below (this was too long to put into a comment):
My use of the word "efficiency" in the question was misleading. Better words might have been "elegance" or "simplicity."
One of the things that I like about Clojure is that while you can write code to execute any particular algorithm faster in other languages, if you write idiomatic Clojure code it's going to be decently fast, and it's going to be simple, elegant, and maintainable. As the problems you're trying to solve get more complex, the simplicity, elegance and maintainability get more and more important. IMO, Clojure is the most "efficient" tool in this sense for solving a whole range of complex problems.
My question was really -- given that there are two ways that I can solve this problem, what's the more idiomatic and Clojure-esque way of doing it? For me when I ask that question, how 'fast' the two approaches are is one consideration. It's not the most important one, but I still think it's a legitimate consideration if this is a common pattern and the different approaches are a wash from other perspectives. I take A. Webb's answer below to be, "Whoa! Pull back from the weeds! The compiler will handle either approach just fine, and the relative efficiency of each approach is anyway unknowable without getting deeper into the weeds of target platforms and the like. So take your hint from the name of the language and when it makes sense to do so, use closures."
closing edit on April 10, 2014
I'm going to mark A. Webb's answer as accepted, although I'm really accepting A. Webb's answer and omiel's answer -- unfortunately I can't accept them both, and adding my own answer that rolls them up seems just a bit gratuitous.
One of the many things that I love about Clojure is the community of people who work together on it. Learning a computer language doesn't just mean learning code syntax -- more fundamentally it means learning patterns of thinking about and understanding problems. Clojure, and Lisp behind it, has an incredibly powerful set of such patterns. For example, homoiconicity ("code as data") means that you can dynamically generate code at compile time using macros, or destructuring allows you to concisely and readably unpack complex data structures. None of the patterns are unique to Clojure, but Clojure brings them all together in ways that make solving problems a joy. And the only way to learn those patterns is from people who know and use them already. When I first picked Clojure more than a year ago, one of the reasons that I picked it over Scala and other contenders was the reputation of the Clojure community for being helpful and constructive. And I haven't been disappointed -- this exchange around my question, like so many others on StackOverflow and elsewhere, shows how willing the community is to help a newcomer like me -- thank you!
After you figure out the implementation details of the current compiler version for the current version of your current target host, then you'll have to start worrying about the optimizer and the JIT and then the target computer's processors.
You are too deep in the weeds, turn back to the main path.
Closing over free variables when applicable is the natural thing to do and an extremely important idiom. You may assume a language named Clojure has good support for closures.
I prefer the first approach as being simpler (as long as the closure is simple) and somewhat easier to read. I often struggle to read code where an anonymous function is immediately called with parameters; I have to resort to counting parentheses to be sure of what's happening, and I feel that's not a good thing.
I think the only way it could be the wrong thing to do is if the closure closes over a value that shouldn't be captured, like the head of a long lazy sequence.
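A sketch of that pitfall, with made-up numbers (don't run it with a range this large unless you want to stress the heap): a closure's captured reference is never cleared, so the head of the sequence stays reachable for as long as the closure is, pinning every element realized so far:
;; fine: Clojure's locals clearing lets realized elements be collected
;; while last walks the sequence
(let [xs (range 100000000)]
  (last xs))
;; risky: the closure captures xs, so while f is alive (here, during the
;; call) the entire realized prefix stays reachable through the head
(let [xs (range 100000000)
      f  (fn [] (last xs))]
  (f))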

Why can't tail calls be optimized in JVM-based Lisps?

Main question: I view the most significant application of tail call optimization (TCO) as a translation of a recursive call into a loop (in cases in which the recursive call has a certain form). More precisely, when translated into a machine language, this would usually be translation into some sort of series of jumps. Some Common Lisp and Scheme compilers that compile to native code (e.g. SBCL) can identify tail-recursive code and perform this translation. JVM-based Lisps such as Clojure and ABCL have trouble doing this. What is it about the JVM as a machine that prevents or makes this difficult? I don't get it. The JVM obviously has no problem with loops. It's the compiler that has to figure out how to do TCO, not the machine to which it compiles.
Related question: Clojure can translate seemingly recursive code into a loop: It acts as if it's performing TCO, if the programmer replaces the tail call to the function with the keyword recur. But if it's possible to get a compiler to identify tail calls--as SBCL and CCL do, for example--then why can't the Clojure compiler figure out that it's supposed to treat a tail call the way it treats recur?
(Sorry--this is undoubtedly a FAQ, and I'm sure that the remarks above show my ignorance, but I was unsuccessful in finding earlier questions.)
Real TCO works for arbitrary calls in tail position, not just self calls, so that code like the following does not cause a stack overflow:
(letfn [(e? [x] (or (zero? x) (o? (dec x))))
        (o? [x] (e? (dec x)))]
  (e? 10))
Clearly you'd need JVM support for this, since programs running on the JVM cannot manipulate the call stack. (Unless you were willing to establish your own calling convention and impose the associated overhead on function calls; Clojure aims to use regular JVM method calls.)
As for eliminating self calls in tail position, that's a simpler problem which can be solved as long as the entire function body gets compiled to a single JVM method. That is a limiting promise to make, however. Besides, recur is fairly well liked for its explicitness.
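For contrast, the self-call case just mentioned: recur compiles the tail call into a jump within the same JVM method, so the stack stays flat (a trivial made-up example):
(defn countdown [x]
  (if (zero? x)
    :done
    (recur (dec x))))   ; a jump, not a method call
(countdown 1000000)     ;=> :done, no StackOverflowError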
There is a reason why the JVM does not support TCO: Why does the JVM still not support tail-call optimization?
However, there is a way around this, by abusing heap memory and some trickery explained in the paper A First-Order One-Pass CPS Transformation; it is implemented for Clojure by Chris Frisz and Daniel P. Friedman (see clojure-tco).
Now, Rich Hickey could have chosen to perform such an optimization by default; Scala does this in some cases. Instead, he chose to rely on the end user to specify the cases that can be optimized by Clojure, with the trampoline or loop/recur constructs. The decision has been explained here: https://groups.google.com/d/msg/clojure/4bSdsbperNE/tXdcmbiv4g0J
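For example, the mutually recursive pair from the first answer becomes stack-safe with trampoline once each function returns a thunk instead of making the tail call itself (a sketch; the zero check in o? is added so both functions bottom out):
(letfn [(e? [x] (if (zero? x) true  #(o? (dec x))))
        (o? [x] (if (zero? x) false #(e? (dec x))))]
  (trampoline e? 1000000))   ;=> true, without blowing the stack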
In the final presentation of Clojure/conj 2014, Brian Goetz pointed out that there is a security feature in the JVM that prevents stack-frame collapsing (as that would be an attack vector for people looking to make a function go somewhere else on return).
https://www.youtube.com/watch?v=2y5Pv4yN0b0&index=21&list=PLZdCLR02grLoc322bYirANEso3mmzvCiI