Main question: I view the most significant application of tail call optimization (TCO) as the translation of a recursive call into a loop (in cases where the recursive call has a certain form). More precisely, when compiled to machine language, this usually becomes some sort of series of jumps. Some Common Lisp and Scheme compilers that compile to native code (e.g. SBCL) can identify tail-recursive code and perform this translation. JVM-based Lisps such as Clojure and ABCL have trouble doing this. What is it about the JVM as a machine that prevents this or makes it difficult? I don't get it. The JVM obviously has no problem with loops. It's the compiler that has to figure out how to do TCO, not the machine to which it compiles.
Related question: Clojure can translate seemingly recursive code into a loop: It acts as if it's performing TCO, if the programmer replaces the tail call to the function with the keyword recur. But if it's possible to get a compiler to identify tail calls--as SBCL and CCL do, for example--then why can't the Clojure compiler figure out that it's supposed to treat a tail call the way it treats recur?
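For concreteness, here is the contrast in question, using a hypothetical countdown function (the names are mine, not from any library): the self call in tail position still consumes a stack frame per call, while recur compiles to a jump back to the top of the function.

(defn countdown-call [n]
  (when (pos? n)
    (countdown-call (dec n))))   ; a tail call, but it still consumes a stack frame

(defn countdown-recur [n]
  (when (pos? n)
    (recur (dec n))))            ; compiled to a loop; constant stack space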
(Sorry--this is undoubtedly a FAQ, and I'm sure that the remarks above show my ignorance, but I was unsuccessful in finding earlier questions.)
Real TCO works for arbitrary calls in tail position, not just self calls, so that code like the following does not cause a stack overflow:
(letfn [(e? [x] (or (zero? x) (o? (dec x))))
        (o? [x] (e? (dec x)))]
  (e? 10))
Clearly you'd need JVM support for this, since programs running on the JVM cannot manipulate the call stack. (Unless you were willing to establish your own calling convention and impose the associated overhead on function calls; Clojure aims to use regular JVM method calls.)
As for eliminating self calls in tail position, that's a simpler problem which can be solved as long as the entire function body gets compiled to a single JVM method. That is a limiting promise to make, however. Besides, recur is fairly well liked for its explicitness.
There is a reason why the JVM does not support TCO; see: Why does the JVM still not support tail-call optimization?
However, there is a way around this by using heap memory and some trickery, explained in the paper A First-Order One-Pass CPS Transformation; it is implemented in Clojure by Chris Frisz and Daniel P. Friedman (see clojure-tco).
Now Rich Hickey could have chosen to perform such an optimization by default, as Scala does in some cases. Instead he chose to rely on the end user to mark the cases that Clojure can optimize, with the trampoline or loop/recur constructs. The decision has been explained here: https://groups.google.com/d/msg/clojure/4bSdsbperNE/tXdcmbiv4g0J
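As a sketch of the trampoline approach, the mutually recursive example above can be rewritten so that each tail call returns a thunk, which trampoline then invokes in a loop on a single stack frame:

(letfn [(e? [x] (or (zero? x) #(o? (dec x))))
        (o? [x] #(e? (dec x)))]
  (trampoline e? 1000000))   ;=> true, with no stack overflow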
In the final presentation of ClojureConj 2014, Brian Goetz pointed out that there is a security feature in the JVM that prevents stack-frame collapsing (as that would be an attack vector for people looking to make a function go somewhere else on return).
https://www.youtube.com/watch?v=2y5Pv4yN0b0&index=21&list=PLZdCLR02grLoc322bYirANEso3mmzvCiI
I'm reading some Clojure code at the moment that uses nil as the uninitialised value for a numeric field in a record that gets passed around.
Now lots of the Clojure libraries treat this as idiomatic, which means that it is an accepted convention.
But it also leads to NullPointerExceptions, because not all of the Clojure core functions can handle nil as input. (Nor should they.)
Other languages have the concept of Maybe or Option to stand in for a value that may be null, as a way of mitigating the NullPointerException risk. This is possible in Clojure - but not very common.
You can do some tricks with fnil but it doesn't solve every problem.
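For instance, fnil wraps a function so that a nil argument is replaced with a default before the call:

((fnil + 0) nil 5)                  ;=> 5
(update {:a nil} :a (fnil inc 0))   ;=> {:a 1}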
Another alternative is simply to set the uninitialised value to a keyword like :empty-value, to force the user to handle this scenario explicitly in all the handling code. But this isn't really a big step up from nil - because you don't really discover all the scenarios (in other people's code) until run-time.
My question is: Is there an idiomatic alternative to nil-punning in Clojure?
Not sure if you've read this lispcast post on nil-punning, but I do think it makes a pretty good case for why it's idiomatic and covers various important considerations that I didn't see mentioned in those other SO questions.
Basically, nil is a first-class thing in Clojure. Despite its conventional meaning, it is a proper value, and can be treated as such in many contexts, in a context-dependent way. This makes it more flexible and powerful than null in the host language.
For example, something like this won't even compile in Java:
if (null) {
    ....
}
Whereas in Clojure, (if nil ...) will work just fine. So there are many situations where you can use nil safely. I have yet to see a Java codebase that isn't littered with code like if (foo != null) { ... } everywhere. Perhaps Java 8's Optional will change this.
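A few examples of safe nil-punning with core functions (return values shown in the comments):

(if nil :then :else)   ;=> :else - nil is logically false
(first nil)            ;=> nil   - sequence functions accept nil
(conj nil 1)           ;=> (1)   - nil behaves like an empty list here
(get {:a 1} :b)        ;=> nil   - a missing key puns to nil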
I think where you can run into issues quite easily is in Java interop scenarios where you are dealing with actual nulls. A good Clojure wrapper library can also help shield you from this in many cases, and it's one good reason to prefer one over direct Java interop where possible.
In light of this, you may want to reconsider fighting this current. But since you are asking about alternatives, here's one I think is great: Prismatic's Schema. Schema has a Maybe schema (and many other useful ones as well), and it works quite nicely in many scenarios. The library is quite popular and I have used it with success. FWIW, it is recommended in the recent Clojure Applied book.
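A minimal sketch of the Maybe schema (the Person schema below is my own example, built on Schema's schema.core namespace):

(require '[schema.core :as s])

(def Person {:name s/Str
             :age  (s/maybe s/Num)})   ; :age may legitimately be nil

(s/validate Person {:name "Ada" :age nil})   ; passes
(s/validate Person {:name "Ada" :age "x"})   ; throws with an explanation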
Is there an idiomatic alternative to nil-punning in Clojure?
No. As leeor explains, nil-punning is idiomatic. But it's not as prevalent as in Common Lisp, where (I'm told) an empty list equates to nil.
Clojure used to work this way, but the CL functions that deal with lists correspond to Clojure functions that deal with sequences in general. These sequences may be lazy, so there is a premium on unifying lazy sequences with other sequences, so that any laziness can be preserved. I think this evolution happened around Clojure 1.2. Rich described it in detail here.
If you want option/maybe types, take a look at the core.typed library. In contrast to Prismatic Schema, this operates at compile time.
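A hedged sketch of what that looks like (core.typed's API has varied across versions; t/Option is the union of nil and the wrapped type, and the function is my own example):

(require '[clojure.core.typed :as t])

(t/ann age-or-zero [(t/Option t/Num) -> t/Num])   ; the argument may be nil, the result may not
(defn age-or-zero [n]
  (if (nil? n) 0 n))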
I'm in the process of learning Clojure, and I'm using 4Clojure as a resource. I can solve many of the "easy" questions on the site, but for me thinking in a functional programming mindset still doesn't come naturally (I'm coming from Java). As a result, I use a loop/recur iterative pattern in most of my seq-building implementations because that's how I'm used to thinking.
However, when I look at the answers from more experienced Clojure users, they do things in a much more functional style. For example, in a problem about implementing the range function, my answer was the following:
(fn [start limit]
  (loop [x start y limit output '()]
    (if (< x y)
      (recur (inc x) y (conj output x))
      (reverse output))))
While this worked, other users did things like this:
(fn [x y] (take (- y x) (iterate inc x)))
My function is more verbose and I had no idea the "iterate" function even existed. But was my answer worse in an efficiency sense? Is loop/recur somehow worse to use than alternatives? I fear this sort of thing is going to happen a lot to me in the future, as there are still many functions like iterate I don't know about.
The second variant returns a lazy sequence, which may indeed be more efficient, especially if the range is big.
The other thing is that the second solution conveys the idea better. To put it differently, it describes the intent instead of the implementation. It takes less time to understand it compared to your code, where you have to read through the loop body and build a model of the control flow in your head.
Regarding the discovery of new functions: yes, you may not know in advance that some function is already defined. It is easier in, say, Haskell, where you can search for a function by its type signature, but with some experience you will learn to recognize functional programming patterns like this. You will write code like the second variant, and then look for something working like take and iterate in the standard library.
Bookmark the Clojure Cheatsheet website, and always have a browser tab open to it.
Study all of the functions, and especially read the examples they link to (on the http://clojuredocs.org website).
The site http://clojure-doc.org is also very useful (yes, the two names are almost identical, but not quite).
The question should not be about performance (it depends!) but about communication: when you use loop/recur, plain recursion, lazy-seq, or sometimes even reduce, you make your code harder to understand, because the reader has to work out how you perform your iteration before getting to what you are computing.
loop/recur is real Clojure, and idiomatic. It's there for a reason, and often there is no better way. But many people find that, once they get used to them, it's very convenient to build many functions out of building blocks such as iterate. Clojure has a very nice collection of them. I started out writing things from scratch using truly recursive algorithms and then loop/recur. Personally, I wouldn't claim that it's better to use the functional building-block functions, but I've come to love using them. It's one of the things that's great about Clojure.
(Yes, many of the building-block functions are lazy, as are e.g. for and map, which are more general-purpose. Laziness can be good, but I'm not religious about it. Sometimes it's more efficient. Sometimes it's not. Sometimes it's beautiful. Sometimes it's a pain in the rear. Sometimes all of that.)
Loop and recur are not bad - in fact, if you look at the source code for many of the built-in functions, you will find that is what they do; the provided functions are often an abstraction of common patterns, which can make your code easier to understand. How you are doing things is typical for many people when they first start. How you are approaching this seems correct to me: you are not just writing your solution and moving on; you are writing your solution, looking at how others have solved the same problem, and making a comparison. This is the right road to improvement. I highly recommend that when you find an alternative solution which seems more elegant/efficient/clear, you analyse it, look at the source code of the built-in functions it uses, and things will slowly come together.
loop ... recur is an optimisation for recursive tail calls, and should always be used where it applies.
range is lazy, so your version of it should strive to be so too. loop ... recur can't do this.
All the sequence functions that can sensibly be lazy (iterate, filter, map, take-while ...) are so. As you know, you can use some of these to build a lazy range. As @cgrand explains, this is the preferred approach.
If you prefer, you can build a lazy range from scratch:
(defn range [x y]
  (lazy-seq
    (when (< x y)
      (cons x (range (inc x) y)))))
I wondered the same thing for some days, but truly, many times I do not see any better alternative than loop/recur.
Some jobs are not purely a reduce or a map. That is the case when you update data based on a buffer that you mutate at every iteration.
loop/recur is very convenient where non-linear, precise work is required. It looks more imperative, but if I remember well, Clojure was designed with pragmatism in mind. And yet, pragmatism means choosing whatever is most efficient.
That is why in complex programs I use Clojure and Java code mixed together. Sometimes Java is just clearer for low-level or iterative jobs, like extracting a specific value, while I find Clojure functions more useful for big data processing (without so much detail: global filters, etc.).
Some people say that we must stick with Clojure as much as possible, but I do not see any reason not to use Java. I have not programmed a lot, but Clojure/Java is the best interop I have ever seen; the two approaches are very complementary.
Rich Hickey's Strange Loop transducers presentation tells us that there are two implementations of map in Clojure 1.6, one for sequences in clojure.core and one for channels in core.async.
Now we know that in 1.7 we have transducers: higher-order functions like map and filter return a transducer (a transformation of a reducing function) when given a function but not a collection.
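For example, the single-arity call returns a transducer that can then be applied in several contexts (return values shown in the comments):

(def xf (map inc))           ; no collection given: returns a transducer

(into [] xf [1 2 3])         ;=> [2 3 4] - eager sequence processing
(transduce xf + 0 [1 2 3])   ;=> 9       - reduction
;; the same xf can also be attached to a core.async channel:
;; (clojure.core.async/chan 1 xf)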
What I'm trying, and failing, to articulate is why core.async functions can't return a sequence, or be Seq-like. I have a feeling that the 'interfaces' (protocols) are different, but I can't see how.
Surely if you're taking the first item off a channel then you can represent that as taking the first item off a sequence?
My question is: Could core.async have implemented its functions in terms of sequences?
Yes, in one sense they could have been. If you ignore go blocks (for the moment let's do so), then there's really nothing wrong with something like the following:
(defn chan-seq [ch]
  ;; assuming (require '[clojure.core.async :refer [<!!]])
  (when-some [v (<!! ch)]
    (cons v (lazy-seq (chan-seq ch)))))
But notice here the <!! call. This is called "take blocking": inside this function are some promises and locks that will cause the currently executing thread to halt until a value is available on the channel. So this would work fine if you don't mind having a Java thread sitting there doing nothing.
The idea behind go blocks is to make logical processes much cheaper; to accomplish this, the go block rewrites the body of the block into a series of callbacks that are attached to the channel, so that internally a call to <! inside a go block gets turned into something like (take! c k), where k is a callback containing the rest of the go block.
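Conceptually (this is a sketch, not the actual macroexpansion), the rewrite looks like this:

;; assuming (require '[clojure.core.async :refer [go <! take!]]), the go block
(go (let [v (<! ch)]
      (println v)))
;; behaves roughly like registering a callback on the channel:
(take! ch (fn [v] (println v)))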
Now if we had true continuations, or if the JVM supported lightweight threads, then yes, we could combine go-blocks and blocking takes. But this currently involves either deep bytecode rewriting (like the Pulsar/Quasar project does) or some non-standard JVM feature. Both of those options were ruled out in the creation of core.async in favor of the much simpler to implement (and hopefully much simpler to reason about) local go block transformation.
Quite often, I swap! an atom value using an anonymous function that uses one or more external values in calculating the new value. There are two ways to do this, one with what I understand to be a closure and one without, and my question is which is the better / more efficient way to do it?
Here's a simple made-up example -- adding a variable numeric value to an atom -- showing both approaches:
(def my-atom (atom 0))
(defn add-val-with-closure [n]
  (swap! my-atom
         (fn [curr-val]
           ;; we pull 'n' from outside the scope of the function,
           ;; asking the compiler to do some magic to make this work
           (+ curr-val n))))
(defn add-val-no-closure [n]
  (swap! my-atom
         (fn [curr-val val-to-add]
           ;; we bring 'n' into the scope of the function as the second
           ;; function parameter, so no closure is needed
           (+ curr-val val-to-add))
         n))
This is a made-up example, and of course, you wouldn't actually write this code to solve this specific problem, because:
(swap! my-atom + n)
does the same thing without any need for an additional function.
But in more complicated cases you do need a function, and then the question arises. For me, the two ways of solving the problem are of about equal complexity from a coding perspective. If that's the case, which should I prefer? My working assumption is that the non-closure method is the better one (because it's simpler for the compiler to implement).
There's a third way to solve the problem, which is not to use an anonymous function. If you use a separate named function, then you can't use a closure and the question doesn't arise. But inlining an anonymous function often makes for more readable code, and I'd like to leave that pattern in my toolkit.
Thanks!
edit in response to A. Webb's answer below (this was too long to put into a comment):
My use of the word "efficiency" in the question was misleading. Better words might have been "elegance" or "simplicity."
One of the things that I like about Clojure is that while you can write code to execute any particular algorithm faster in other languages, if you write idiomatic Clojure code it's going to be decently fast, and it's going to be simple, elegant, and maintainable. As the problems you're trying to solve get more complex, the simplicity, elegance and maintainability get more and more important. IMO, Clojure is the most "efficient" tool in this sense for solving a whole range of complex problems.
My question was really -- given that there are two ways that I can solve this problem, what's the more idiomatic and Clojure-esque way of doing it? For me when I ask that question, how 'fast' the two approaches are is one consideration. It's not the most important one, but I still think it's a legitimate consideration if this is a common pattern and the different approaches are a wash from other perspectives. I take A. Webb's answer below to be, "Whoa! Pull back from the weeds! The compiler will handle either approach just fine, and the relative efficiency of each approach is anyway unknowable without getting deeper into the weeds of target platforms and the like. So take your hint from the name of the language and when it makes sense to do so, use closures."
closing edit on April 10, 2014
I'm going to mark A. Webb's answer as accepted, although I'm really accepting A. Webb's answer and omiel's answer -- unfortunately I can't accept them both, and adding my own answer that rolls them up seems just a bit gratuitous.
One of the many things that I love about Clojure is the community of people who work together on it. Learning a computer language doesn't just mean learning code syntax -- more fundamentally it means learning patterns of thinking about and understanding problems. Clojure, and Lisp behind it, has an incredibly powerful set of such patterns. For example, homoiconicity ("code as data") means that you can dynamically generate code at compile time using macros, and destructuring allows you to concisely and readably unpack complex data structures. None of these patterns are unique to Clojure, but Clojure brings them all together in ways that make solving problems a joy. And the only way to learn those patterns is from people who know and use them already. When I first picked up Clojure more than a year ago, one of the reasons that I picked it over Scala and other contenders was the reputation of the Clojure community for being helpful and constructive. And I haven't been disappointed -- this exchange around my question, like so many others on StackOverflow and elsewhere, shows how willing the community is to help a newcomer like me -- thank you!
After you figure out the implementation details of the current compiler version for the current version of your current target host, then you'll have to start worrying about the optimizer and the JIT and then the target computer's processors.
You are too deep in the weeds, turn back to the main path.
Closing over free variables when applicable is the natural thing to do and an extremely important idiom. You may assume a language named Clojure has good support for closures.
I prefer the first approach as being simpler (as long as the closure is simple) and somewhat easier to read. I often struggle reading code where an anonymous function is immediately called with parameters; I have to resort to counting parentheses to be sure of what's happening, and I feel that's not a good thing.
I think the only way it could be the wrong thing to do is if the closure closes over a value that shouldn't be captured, like the head of a long lazy sequence.
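A contrived sketch of that pitfall (my own example): the returned closure captures the head of a large lazy sequence, so everything realized from it stays reachable for as long as the closure does.

(defn make-summer []
  (let [xs (range 10000000)]   ; lazy sequence; xs is its head
    (fn [] (reduce + xs))))    ; calling this realizes the whole sequence,
                               ; and the closure keeps its head alive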
Type hints can make a huge improvement in execution time where reflection occurs many times. My understanding of type hints is that they just allow the compiler to cache a reflection lookup. Can that caching occur dynamically? Or is there some reason this would be bad/impossible?
From Programming Clojure:
These warnings indicate that Clojure has no way to know the type of c. You can provide a type hint to fix this, using the metadata syntax ^Class:
(defn describe-class [#^Class c]
  {:name (.getName c)
   :final (java.lang.reflect.Modifier/isFinal (.getModifiers c))})
With the type hint in place, the reflection warnings will disappear. The compiled Clojure code will be exactly the same as compiled Java code. Further, attempts to call describe-class with something other than a Class will fail with a ClassCastException.
So to sum up, the reflective lookup isn't just cached; it is eliminated.
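A minimal demonstration (the function names are mine), with reflection warnings enabled:

(set! *warn-on-reflection* true)

(defn len-reflective [s]
  (.length s))         ; warns: the type of s is unknown, so reflection is used

(defn len-hinted [^String s]
  (.length s))         ; no warning: compiles to a direct String.length() call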
Rich was kind enough to enlighten me:
"The real answer for the JDK proper is JSR 292, the invokedynamic instruction, which allows for the proper construction of call site caches with performance much better
than memoizaton."