I've seen the ^:static metadata on quite a few functions in the Clojure core.clj source code, e.g. in the definition of seq?:
(def
  ^{:arglists '([x])
    :doc "Return true if x implements ISeq"
    :added "1.0"
    :static true}
  seq? (fn ^:static seq? [x] (instance? clojure.lang.ISeq x)))
What precisely does this metadata do, and why is it used so frequently throughout core.clj?
In the development of Clojure 1.3, Rich Hickey wanted to add the ability for functions to return types other than Object. This would allow native math operators to be used without having to cram everything into one function.
The original implementation required functions that supported this to be marked :static. This metadata caused the compiler to produce two versions of the function, one that returned Object and one that returned the specific type. In cases where the compiler determined that the types would always match, the more specific version was used.
This was later made fully automatic so you don't need to add this anymore.
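For illustration, in Clojure 1.3 and later the same effect comes from ordinary primitive type hints, with no :static needed. A minimal hedged sketch (the function name is made up):

;; the compiler emits both a primitive (long -> long) arity and a boxed
;; Object arity; call sites whose types line up use the primitive one
(defn add-one ^long [^long x]
  (inc x))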
According to the Google Groups thread “Type hinting inconsistencies in 1.3.0”, it’s a no-op.
^:static has been a no-op for a while AFAIK, made unnecessary after changes to vars a while back.
— a May 2011 post by Chas Emerick
It seems to be a new metadata attribute in Clojure 1.3. You can compare the source between 1.3 and 1.2:
http://clojuredocs.org/clojure_core/clojure.core/seq_q
http://clojuredocs.org/clojure_core/1.2.0/clojure.core/seq_q
So I think it has something to do with ^:dynamic, which indicates whether the var is allowed to be dynamically bound. Just my guess; I'm not sure until I see documentation about this attribute.
We have a course whose project is to implement a micro-Scheme interpreter in C++. In my implementation, I treat 'if', 'define', and 'lambda' as procedures, so it is valid to eval 'if', 'define', or 'lambda', and it is also fine to write expressions like '(apply define (quote (a 1)))', which will bind 'a' to 1.
But I find that in Racket and in MIT Scheme, 'if', 'define', and 'lambda' are not evaluable; for example, evaluating 'if' on its own is an error.
It seems that they are not procedures, but I cannot figure out what they are or how they are implemented.
Can someone explain these to me?
In the terminology of Lisp, expressions to be evaluated are forms. Compound forms (those which use list syntax) are divided into special forms (headed by special operators like let), macro forms, and function call forms.
The Scheme report doesn't use this terminology. It calls functions "procedures". Scheme's special forms are called "syntax". Macros are "derived expression types", individually introduced as "library syntax". (The motivation for this may be some conscious decision to blend into the CS academic mainstream by scrubbing some unfamiliar Lisp terminology. Algol has procedures and a BNF-defined syntax, Scheme has procedures and a BNF-defined syntax. That ticks off some sort of familiarity checkbox.)
Special forms (or "syntax") are recognized by interpreters and compilers as a set of special cases. The interpreter or compiler may handle these forms via function-like bindings in some internal table keyed on symbols, but it's not the program-visible binding namespace.
Setting up these associations in the regular namespace isn't necessarily wrong, but it could be problematic. If you want both a compiler and interpreter, but let has only one top-level binding, that will be an issue: who gets to install their procedure into that binding: the interpreter or compiler? (One way to resolve that is simple: make the binding values cons pairs: the car can be the interpreter function, the cdr the compiler function. But then these bindings are not procedures any more that you can apply.)
Exposing these bindings to the application is problematic anyway, because the semantics are so different between interpretation and compilation. If your implementation is interpreted, then calling the define binding as a function is possible; it has the effect of performing the definition. But in a compiled implementation, code depending on this won't work; define will be a function that doesn't actually define anything, but rather compiles: it calculates and returns a compiled fragment written in some intermediate representation.
About your implementation, the fact that (apply define (quote (a 1))) works in your implementation raises a bit of a red flag. Either you've made the environment parameter of the function optional, or it doesn't take one. Functions implementing special operators (or "syntax") need an environment parameter, not just the piece of syntax. (At least if we are developing a lexically scoped Scheme or Lisp!)
The fact that (apply define (quote (a 1))) works also suggests that your define function is taking quote and (a 1) as arguments. While that is workable, the usual approach for these kinds of syntax procedures is to take the whole form as one argument (and a lexical environment as another argument). If such a function can be called, the invocation looks something like (apply define (list '(define a 1) (null-environment 5))). The procedure itself will do any necessary destructuring of the syntax, and check it for validity: are there too many or too few parameters, and so on.
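To make that concrete, here is a small hedged sketch, written in Clojure rather than your C++ and with made-up names, of an interpreter whose special-form handlers each take the whole form plus an environment and live in an internal table keyed on symbols, not in the program-visible namespace:

(declare evaluate)

;; each handler receives the entire form and the environment
(defn eval-if [[_ test then else] env]
  (if (evaluate test env)
    (evaluate then env)
    (evaluate else env)))

;; simplified: this "define" just returns an extended environment
(defn eval-define [[_ name expr] env]
  (assoc env name (evaluate expr env)))

;; internal table of special forms, invisible to the interpreted program
(def special-forms
  {'if eval-if, 'define eval-define})

(defn evaluate [form env]
  (cond
    (symbol? form) (get env form)
    (seq? form)    (if-let [handler (get special-forms (first form))]
                     (handler form env)
                     (apply (evaluate (first form) env)
                            (map #(evaluate % env) (rest form))))
    :else          form))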
Is it possible to do this in one function:
(binding [*configs* (merge default-configs configs)]
  (let [{:keys [login url max-pages]} *configs*]
    ..
When I tried this:
(binding [{:keys [login url max-pages] :as *configs*} (merge default-configs configs)]
It gave me this error:
CompilerException java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to clojure.lang.Symbol
A little googling showed me that Common Lisp has a macro called destructuring-bind, but I'm not sure if that's related.
No, nothing like this will work with core macros.
The reason is that both binding and let (and friends, e.g. with-bindings) do just one thing. In the case of binding, that thing is installing thread-local bindings for Vars; for let, it is introducing local bindings. These are entirely different operations.
In let, destructuring has a clear meaning: it introduces new locals, which is exactly what the basic, non-destructuring let bindings do. This is also clearly useful, as prying apart data structures and binding different parts to different locals is a common need. The names of the locals are also locally determined, so things like :keys in associative destructuring work well.
In binding, to be consistent with its main purpose, destructuring would need to bind several Vars simultaneously to several parts of a data structure. This is not nearly as useful. If instead destructuring in binding were to introduce locals, then all of a sudden binding would do two unrelated things, possibly both in the same binding pair (note how the failing binding form from the question text expects the bindings introduced by :keys to be locals, but the binding made by :as to be the usual thread-local binding of a Var). Thus binding simply opts not to support destructuring. (If you need to bind parts of a data structure to several Vars, you can use a let to perform the destructuring, then install the bindings with binding.)
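A minimal sketch of that last workaround, with hypothetical Var names: destructure with let first, then install the thread-local bindings with binding.

(def ^:dynamic *login* nil)
(def ^:dynamic *url*   nil)

(defn with-config [configs default-configs f]
  ;; pry the map apart with let, then bind the parts to Vars
  (let [{:keys [login url]} (merge default-configs configs)]
    (binding [*login* login
              *url*   url]
      (f))))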
As for destructuring-bind, it's basically the destructuring-enabled version of let in Common Lisp. CL's let does not support destructuring.
"Binding Forms (Destructuring)" section:
Clojure supports abstract structural binding, often called
destructuring, in let binding lists, fn parameter lists, and any macro
that expands into a let or fn. ...
AFAIK binding itself doesn't use the destructuring mechanism (via fn or let).
Learning Clojure I came across code like below:
=> (defrecord Person [name, age])
user.Person
=> (->Person "john" 40)
#user.Person{:name "john", :age 40}
=> (Person. "tom" 30)
#user.Person{:name "tom", :age 30}
The question is: what does the leading arrow (i.e., ->) in ->Person mean? Is it a reader macro or what? I see no description of it in the reader section of the Clojure docs. Further, what is the difference between ->Person and Person.?
It has no syntactic meaning. It is just part of the symbol name. In Lisps, the arrow -> (or even just '>') is often used to imply conversion of, or casting of, one type into another. In the macro expansion of the defrecord:
(macroexpand '(defrecord Person [name age]))
you can see that it defines ->Person as a function that calls the Person constructor. ->Person (the function) may be more convenient for you to use than Person. (the direct call to the Java constructor) as you can pass it as an argument to other functions, capture it in a variable and use it, etc:
(let [f ->Person]
  (f "Bob" 65))
Compare that to:
(let [f Person.]
  (f "Bob" 65))
This is syntactically invalid: Person. is constructor-call interop syntax, not a value you can bind to a local or pass around.
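In contrast, because ->Person is an ordinary function, it composes with higher-order functions directly. For example:

;; map applies ->Person pairwise to the names and ages
(map ->Person ["john" "tom"] [40 30])
;; => (#user.Person{:name "john", :age 40} #user.Person{:name "tom", :age 30})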
Obviously not in your specific example, but in the general context this operator -> is called the threading operator and is considered one of the thrush operators. Along with its cousin the ->> operator, it is quite useful when you need code to read more clearly, especially when feeding the output of one function as a parameter to another. Both operators are macros. Here's another SO post concerning both operators.
I have not yet had cause to use the ->, but did need ->> to make sense of code that needed to compute intermediate values and put them all into one function call.
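For a quick illustration of the threading macros themselves (unrelated to records):

;; -> threads the value as the first argument of each form
(-> 5 (+ 3) (* 2))                                   ;; (* (+ 5 3) 2) => 16

;; ->> threads it as the last argument, handy for seq pipelines
(->> [1 2 3 4] (map inc) (filter even?) (reduce +))  ;; => 6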
Here is another explanation.
Note that I'm not talking about ear muffs in symbol names, an issue that is discussed at Conventions, Style, and Usage for Clojure Constants? and How is the `*var-name*` naming-convention used in clojure?. I'm talking strictly about instances where there is some function named foo that then calls a function foo*.
In Clojure it basically means "foo* is like foo, but somehow different, and you probably want foo". In other words, it means that the author of that code couldn't come up with a better name for the second function, so they just slapped a star on it.
Mathematicians and Haskellers can use apostrophes to indicate similar objects (values or functions): similar, but not quite the same; objects that relate to each other. For instance, function foo could be a calculation done in one manner, and foo' would produce the same result with a different approach. Perhaps it is unimaginative naming, but it has roots in mathematics.
Lisps generally (for no compelling reason) have discarded apostrophes in symbol names, and * kind of resembles an apostrophe. Clojure 1.3 will finally fix that by allowing apostrophes in names!
If I understand your question correctly, I've seen instances where foo* was used to show that the function is equivalent to another in theory, but uses different semantics. Take for instance the lamina library, which defines things like map*, filter*, take* for its core type, channels. Channels are similar enough to seqs that the names of these functions make sense, but they are not compatible enough that they should be "equal" per se.
Another use case I've seen for foo* style is for functions which call out to a helper function with an extra parameter. The fact function, for instance, might delegate to fact* which accepts another parameter, the accumulator, if written recursively. You don't necessarily want to expose in fact that there's an extra argument, because calling (fact 5 100) isn't going to compute for you the factorial of 5--exposing that extra parameter is an error.
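A minimal sketch of that pattern (these names are illustrative, not from any particular library):

;; fact* carries the accumulator; it is not meant to be called directly
(defn- fact* [n acc]
  (if (<= n 1)
    acc
    (recur (dec n) (* acc n))))

;; fact is the public entry point and hides the extra argument
(defn fact [n]
  (fact* n 1))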
I've also seen the same style for macros. The macro foo expands into a function call to foo*.
a normal let binding (let ((...))) creates separate variables in parallel
a let* binding (let* ((...))) creates variables sequentially, so they can be computed from each other, like so:
(let* ((x 10) (y (+ x 5)))
  y)   ; => 15
I could be slightly off base, but see LET versus LET* in Common Lisp for more detail.
EDIT: I'm not sure how this reflects in Clojure; I've only started reading Programming Clojure, so I don't know yet.
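For what it's worth, Clojure's let already binds sequentially, so it fills the role of let* and there is no separate let* form:

(let [x 10
      y (+ x 5)]   ;; y can refer to x, as in CL's let*
  y)               ;; => 15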
I know that this may sound like blasphemy to Lisp aficionados (and other lovers of dynamic languages), but how difficult would it be to enhance the Clojure compiler to support static (compile-time) type checking?
Setting aside the arguments for and against static and dynamic typing, is this possible (not "is this advisable")?
I was thinking that adding a new reader macro to force a compile-time type (an enhanced version of the #^ macro) and adding the type information to the symbol table would allow the compiler to flag places where a variable was misused. For example, in the following code, I would expect a compile-time error (#* is the "compile-time" type macro):
(defn get-length [#*String s] (.length s))
(defn test-get-length [] (get-length 2.0))
The #^ macro could even be reused with a global variable (*compile-time-type-checking*) to force the compiler to do the checks.
Any thoughts on the feasibility?
It's certainly possible. However, I do not think that Clojure will ever get any form of weak static typing; its benefits are too few.
Rich Hickey has, however, expressed on several occasions his liking for the strong, optional, and expressive typing of the Qi language, http://www.lambdassociates.org/qilisp.htm
It's certainly possible. The compiler already does some static type checking around primitive argument types in the 1.3 development branch.
Yes! It looks like there is a project underway, core.typed, to make optional static type checking a reality. See the GitHub project and its documentation.
This work grew out of an undergraduate honours dissertation (PDF) by Ambrose Bonnaire-Sergeant, and is related to the Typed Racket system.
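A rough sketch of what a core.typed annotation looks like (the exact namespace and type syntax may vary between versions of the library):

(ns example.length
  (:require [clojure.core.typed :as t]))

;; annotate the var, then define it as usual
(t/ann get-length [String -> Number])
(defn get-length [s] (.length s))

;; running (t/check-ns 'example.length) would flag a call like (get-length 2.0)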
Since one form is read AND evaluated at a time, you cannot have forward references, which makes this somewhat limited.
Old question, but two important points: I don't think Clojure supports user-defined reader macros, only ordinary Lisp macros. And we now have the core.typed option for typing in Clojure.
declare can have type hints, so it is possible to declare a var that "is" the type which has not been defined yet but carries data about the structure; however, this would be really clunky, and you would have to do it before any code path that could be executed before the type is defined. Basically, you would want to define all of your user-defined types up front and then use them like normal. I think that makes library writing somewhat hackish.
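A hedged sketch of that idea with an ordinary hint on declare (the names here are made up); the hint only helps code compiled between the declaration and the real definition:

(declare ^String render-title)

(defn page-header [doc]
  ;; the ^String hint on the declared var is intended to let this interop
  ;; call compile without reflection, even though render-title isn't defined yet
  (.toUpperCase (render-title doc)))

(defn render-title [doc]
  (str (:title doc)))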
I didn't mean to suggest earlier that this isn't possible, just that for user-defined types it is a lot more complicated than for predefined types. The benefit of doing this vs. the cost is something that should be seriously considered. But I encourage anyone who is interested to try it out and see if they can make it work!