Quoting in Clojure results in non-evaluation. ':a and :a return the same result. What is the difference between ':a and :a? One is not evaluated and the other evaluates to itself... but is this the same as non-evaluation?
':a is shorthand for (quote :a).
(eval '(quote form)) returns form by definition. That is to say, if the Clojure function eval receives as its argument a list structure whose first element is the symbol quote, it returns the second element of said list structure without transforming it in any way (thus it is said that the quoted form is not evaluated). In other words, the behaviour eval dispatches to when its argument is a list structure of the form (quote foo) is that of returning foo unchanged, regardless of what it is.
When you write down the literal :a in your programme, it gets read in as the keyword :a; that is, the concrete piece of text :a gets converted to an in-memory data structure which happens to be called the :a keyword (Lisp being homoiconic means that occasionally it is hard to distinguish between the textual representation of Lisp data and the data itself, even when this would be useful for explanatory purposes...).
The in-memory data structure corresponding to the literal :a is a Java object which exposes a number of methods etc. and which has the interesting property that the function eval, when it receives this data object as an argument, returns it unchanged. In other words, the keyword's "evaluation to itself" which you ask about is just the behaviour eval dispatches to when passed in a keyword as an argument.
Thus when eval sees ':a, it treats it as a quoted form and returns the second part thereof, which happens to be :a. When, on the other hand, eval sees :a, it treats it as a keyword and returns it unchanged. The return value is the same in both cases (it's just the keyword :a); the evaluation process is slightly different.
Clojure semantics -- indeed Lisp semantics, for any dialect of Lisp -- are specified in terms of the values returned by and side-effects caused by the function eval when it receives various Lisp data structures as arguments. Thus the above explains what's actually meant to happen when you write down ':a or :a in your programme (code like (println :a) may get compiled into efficient bytecode which doesn't actually call the function eval, of course; but the semantics are always preserved, so that it still acts as if it were eval receiving a list structure containing the symbol println and the keyword :a).
The key idea here is that regardless of whether the form being evaluated is ':a or :a, the keyword data structure is constructed at read time; then when one of these forms is evaluated, that data structure is returned unchanged -- although for different reasons.
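A quick REPL sketch of the two paths (results shown as comments):

':a                 ;=> :a   ; read as (quote :a); eval returns the second element unchanged
:a                  ;=> :a   ; read as the keyword :a, which evaluates to itself
(eval '(quote :a))  ;=> :a
(= ':a :a)          ;=> true ; the same keyword either way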
Related
Macros do not evaluate their arguments until explicitly told to do so, however functions do. In the following code:
(defn blah [xs] ;; xs is an unquoted list, yet not evaluated
  (println xs)
  xs)

(defmacro foo [xs]
  (println xs (type xs)) ;; unquoted list
  (blah xs))

(foo (+ 1 2 3))
It seems that blah does not evaluate xs, since we still have the entire list: (+ 1 2 3) bound to xs in the body of blah.
I have basically just memorized this interaction between helper functions within macros and their evaluation of arguments, but to be honest it goes against what my instincts are (that xs would be evaluated before entering the body since function arguments are always evaluated).
My thinking was basically: "ok, in this macro body I have xs as the unevaluated list, but if I call a function with xs from within the macro it should evaluate that list".
Clearly I have an embarrassingly fundamental misunderstanding here of how things work. What am I missing in my interpretation? How does the evaluation actually occur?
EDIT
I've thought on this a bit more and it seems to me that maybe viewing macro arguments as "implicitly quoted" would solve some confusion on my part.
I think I just got mixed up in the various terminologies, but given that quoted forms are synonymous with unevaluated forms, and given macro arguments are unevaluated, they are implicitly quoted.
So in my above examples, saying xs is unquoted is somewhat misleading. For example, this macro:
(defmacro bluh [xs]
  `(+ 1 2 ~xs))
Is basically the same as the below macro (excluding namespacing on the symbols). Resolving xs in the call to list gives back an unevaluated (quoted?) list.
(defmacro bleh [xs]
  (list '+ '1 '2 xs)) ;; xs resolves to a quoted list (or effectively quoted)
Calling bleh (or bluh) is the same as saying:
(list '+ '1 '2 '(+ 1 2 3))
;; => (+ 1 2 (+ 1 2 3))
If xs did not resolve to a quoted list, then we would end up with:
(list '+ '1 '2 (+ 1 2 3))
;; => (+ 1 2 6)
So, in short, macro arguments are quoted.
I think part of my confusion came from thinking about syntax-quoted forms as templates with slots filled in, e.g. (+ 1 2 ~xs) I would mentally expand to (+ 1 2 (+ 1 2 3)), and seeing that (+ 1 2 3) was not quoted in that expansion, I found it confusing that function calls using xs (in the first example above, blah) would not evaluate it immediately to 6.
The template metaphor is helpful, but if I instead look at it as a
shortcut for (list '+ '1 '2 xs) it becomes obvious that xs must be a quoted list otherwise the expansion would include 6 and not the entire list.
I'm not sure why I found this so confusing... have I got this right or did I just go down the wrong path entirely?
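At the REPL this seems to bear out (the syntax-quote version namespace-qualifies +, as noted above):

(macroexpand-1 '(bluh (+ 1 2 3)))  ;=> (clojure.core/+ 1 2 (+ 1 2 3))
(macroexpand-1 '(bleh (+ 1 2 3)))  ;=> (+ 1 2 (+ 1 2 3))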
[This answer is an attempt to explain why macros and functions which don't evaluate their arguments are different things. I believe this applies to macros in Clojure but I am not an expert on Clojure. It's also much too long, sorry.]
I think you are confused between what Lisp calls macros and a construct which modern Lisps don't have but which used to be called FEXPRs.
There are two interesting, different, things you might want:
functions which, when called, do not immediately evaluate their arguments;
syntax transformers, which are called macros in Lisp.
I'll deal with them in order.
Functions which do not immediately evaluate their arguments
In a conventional Lisp, a form like (f x y ...), where f is a function, will:
(1) establish that f is a function and not some special thing;
(2) get the function corresponding to f and evaluate x, y, and the rest of the arguments in some order specified by the language (which may be 'in an unspecified order');
(3) call f with the results of evaluating the arguments.
Step (1) is needed initially because f might be a special thing (like, say, if, or quote), and it might be that the function definition is retrieved in (1) as well: all of this, as well as the order in which things happen in (2), is something the language needs to define (or, in the case of Scheme say, leave explicitly undefined).
This ordering, and in particular the ordering of (2) & (3) is known as applicative order or eager evaluation (I'll call it applicative order below).
But there are other possibilities. One such is that the arguments are not evaluated: the function is called, and only when the values of the arguments are needed are they evaluated. There are two approaches to doing this.
The first approach is to define the language so that all functions work this way. This is called lazy evaluation or normal order evaluation (I'll call it normal order below). In a normal order language function arguments are evaluated, by magic, at the point they are needed. If they are never needed then they may never be evaluated at all. So in such a language (I am inventing the syntax for function definition here so as not to commit CL or Clojure or anything else):
(def foo (x y z)
  (if x y z))
Only one of y or z will be evaluated in a call to foo.
In a normal order language you don't need to explicitly care about when things get evaluated: the language makes sure that they are evaluated by the time they're needed.
Normal order languages seem like they'd be an obvious win, but they tend to be quite hard to work with, I think. There are two problems, one obvious and one less so:
side-effects happen in a less predictable order than they do in applicative order languages and may not happen at all, so people used to writing in an imperative style (which is most people) find them hard to deal with;
even side-effect-free code can behave differently than in an applicative order language.
The side-effect problem could be treated as a non-problem: we all know that code with side-effects is bad, right, so who cares about that? But even without side-effects things are different. For instance here's a definition of the Y combinator in a normal order language (this is kind of a very austere, normal order subset of Scheme):
(define Y
  ((λ (y)
     (λ (f)
       (f ((y y) f))))
   (λ (y)
     (λ (f)
       (f ((y y) f))))))
If you try to use this version of Y in an applicative order language -- like ordinary Scheme -- it will loop for ever. Here's the applicative order version of Y:
(define Y
  ((λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))
   (λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))))
You can see it's kind of the same, but there are extra λs in there which essentially 'lazify' the evaluation to stop it looping.
The second approach to normal order evaluation is to have a language which is mostly applicative order but in which there is some special mechanism for defining functions which don't evaluate their arguments. In this case there often would need to be some special mechanism for saying, in the body of the function, 'now I want the value of this argument'. Historically such things were called FEXPRs, and they existed in some very old Lisp implementations: Lisp 1.5 had them, and I think that both MACLISP & InterLisp had them as well.
In an applicative order language with FEXPRs, you need somehow to be able to say 'now I want to evaluate this thing', and I think this is the problem you are running up against: at what point does the thing decide to evaluate the arguments? Well, in a really old Lisp which is purely dynamically scoped there's a disgusting hack to do this: when defining a FEXPR you can just pass in the source of the argument and then, when you want its value, you just call EVAL on it. That's just a terrible implementation because it means that FEXPRs can never really be compiled properly, and you have to use dynamic scope so variables can never really be compiled away. But this is how some (all?) early implementations did it.
But this implementation of FEXPRs allows an amazing hack: if you have a FEXPR which has been given the source of its arguments, and you know that this is how FEXPRs work, then, well, it can manipulate that source before calling EVAL on it: it can call EVAL on something derived from the source instead. And, in fact, the 'source' it gets given doesn't even need to be strictly legal Lisp at all: it can be something which the FEXPR knows how to manipulate to make something that is. That means you can, all of a sudden, extend the syntax of the language in pretty general ways. But the cost of being able to do that is that you can't compile any of this: the syntax you construct has to be interpreted at runtime, and the transformation happens each time the FEXPR is called.
Syntax transformers: macros
So, rather than use FEXPRs, you can do something else: you could change the way that evaluation works so that, before anything else happens, there is a stage during which the code is walked over and possibly transformed into some other code (simpler code, perhaps). And this need happen only once: once the code has been transformed, then the resulting thing can be stashed somewhere, and the transformation doesn't need to happen again. So the process now looks like this:
code is read in and structure built from it;
this initial structure is possibly transformed into other structure;
(the resulting structure is possibly compiled);
the resulting structure, or the result of compiling it is evaluated, probably many times.
So now the process of evaluation is divided into several 'times', which don't overlap (or don't overlap for a particular definition):
read time is when the initial structure is built;
macroexpansion time is when it is transformed;
compile time (which may not happen) is when the resulting thing is compiled;
evaluation time is when it is evaluated.
Well, compilers for all languages probably do something like this: before actually turning your source code into something that the machine understands they will do all sorts of source-to-source transformations. But these things are in the guts of the compiler and are operating on some representation of the source which is idiosyncratic to that compiler and not defined by the language.
Lisp opens this process to users. The language has two features which make this possible:
the structure that is created from source code once it has been read is defined by the language and the language has a rich set of tools for manipulating this structure;
the structure created is rather 'low commitment' or austere -- it does not particularly predispose you to any interpretation in many cases.
As an example of the second point, consider (in "my.file"): that's a function call of a function called in, right? Well, may be: (with-open-file (in "my.file") ...) almost certainly is not a function call, but binding in to a filehandle.
Because of these two features of the language (and in fact some others I won't go into) Lisp can do a wonderful thing: it can let users of the language write these syntax-transforming functions -- macros -- in portable Lisp.
The only thing that remains is to decide how these macros should be notated in source code. And the answer is: the same way as functions. When you define some macro m you use it just as (m ...) (some Lisps support more general things, such as CL's symbol macros). At macroexpansion time -- after the program is read but before it is (compiled and) run -- the system walks over the structure of the program looking for things which have macro definitions: when it finds them it calls the function corresponding to the macro with the source code specified by its arguments, and the macro returns some other chunk of source code, which gets walked in turn until there are no macros left (and yes, macros can expand to code involving other macros, and even to code involving themselves). Once this process is complete the resulting code can be (compiled and) run.
So although macros look like function calls in the code, they are not just functions which don't evaluate their arguments, like FEXPRs were: instead they are functions which take a bit of Lisp source code and return another bit of Lisp source code: they're syntax transformers, or functions which operate on source code (syntax) and return other source code. Macros run at macroexpansion time, which is properly before evaluation time (see above).
So, in fact, macros are functions, written in Lisp, and the functions they call evaluate their arguments perfectly conventionally: everything is perfectly ordinary. But the arguments to macros are programs (or the syntax of programs represented as Lisp objects of some kind) and their results are (the syntax of) other programs. Macros are functions at the meta-level, if you like. So a macro is a function which computes (parts of) programs: those programs may later themselves be run (perhaps much later, perhaps never), at which point the evaluation rules will be applied to them. But at the point a macro is called, what it's dealing with is just the syntax of programs, not evaluating parts of that syntax.
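A tiny Clojure sketch of this idea (the macro name unless* is made up for illustration):

;; A macro is an ordinary function from source code (data) to source code (data),
;; run at macroexpansion time:
(defmacro unless* [test then else]
  (list 'if test else then))            ; builds and returns a new bit of source

(macroexpand-1 '(unless* false :a :b))  ;=> (if false :b :a)
(unless* false :a :b)                   ;=> :a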
So, I think your mental model is that macros are something like FEXPRs in which case the 'how does the argument get evaluated' question is an obvious thing to ask. But they're not: they're functions which compute programs, and they run properly before the program they compute is run.
Sorry this answer has been so long and rambling.
What happened to FEXPRs?
FEXPRs were always pretty problematic. For instance, what should (apply f ...) do? Since f might be a FEXPR, and this can't generally be known until runtime, it's quite hard to know what the right thing to do is.
So I think that two things happened:
in the cases where people really wanted normal order languages, they implemented those, and for those languages the evaluation rules dealt with the problems FEXPRs were trying to deal with;
in applicative order languages, if you want to avoid evaluating some argument you now do so explicitly, using constructs such as delay to construct a 'promise' and force to force evaluation of a promise -- because the semantics of the languages improved, it became possible to implement promises entirely in the language (CL does not have promises, but implementing them is essentially trivial); a small sketch follows this list.
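Clojure happens to ship with delay and force, so the idea is easy to see there (output as comments):

(def p (delay (do (println "computing...") 42)))  ; nothing is printed yet
(force p)  ; prints "computing...", returns 42
(force p)  ;=> 42 -- the result is cached, so the body runs only once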
Is the history I've described correct?
I don't know: I think it may be but it may also be a rational reconstruction. I certainly, in very old programs in very old Lisps, have seen FEXPRs being used the way I describe. I think Kent Pitman's paper, Special Forms in Lisp may have some of the history: I've read it in the past but had forgotten about it until just now.
A macro definition is a definition of a function that transforms code. The input for the macro function are the forms in the macro call. The return value of the macro function will be treated as code inserted where the macro form was. Clojure code is made of Clojure data structures (mostly lists, vectors, and maps).
In your foo macro, you define the macro function to return whatever blah did to your code. Since blah is (almost) the identity function, it just returns whatever was its input.
What is happening in your case is the following:
1. The string "(foo (+ 1 2 3))" is read, producing a nested list with two symbols and three integers: (foo (+ 1 2 3)).
2. The foo symbol is resolved to the macro foo.
3. The macro function foo is invoked with its argument xs bound to the list (+ 1 2 3).
4. The macro function (prints and then) calls the function blah with the list.
5. blah (prints and then) returns that list.
6. The macro function returns the list.
7. The macro is thus “expanded” to (+ 1 2 3).
8. The symbol + is resolved to the addition function.
9. The addition function is called with three arguments.
10. The addition function returns their sum.
If you wanted the macro foo to expand to a call to blah, you need to return such a form. Clojure provides a templating convenience syntax using backquote, so that you do not have to use list etc. to build the code:
(defmacro foo [xs]
  `(blah ~xs))
which is like:
(defmacro foo [xs]
  (list 'blah xs))
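Assuming blah is defined as in the question, a quick check at the REPL (the namespace prefix in the expansion depends on where blah lives):

(macroexpand-1 '(foo (+ 1 2 3)))  ;=> (user/blah (+ 1 2 3))
(foo (+ 1 2 3))                   ; blah now receives 6, prints it, and returns 6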
(defn explain-defcon-level [exercise-term]
  (case exercise-term
    :fade-out :you-and-what-army
    :double-take :call-me-when-its-important
    :round-house :o-rly
    :fast-pace :thats-pretty-bad
    :cocked-pistol :sirens
    :say-what?))
If I understand correctly, usually the key has a colon and the value does not.
What is the purpose here?
Thanks.
Words starting with : are keywords. Keywords act as known values like enums in some languages. You could also use strings as you might do in Python or JavaScript, but keywords have some nice features.
In this case, if the function receives e.g. the known keyword :round-house, it will return the known value :o-rly. Some other code in turn knows what :o-rly means. You might as well use strings, if the code calling explain-defcon-level would expect it to return strings.
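For example (the second call falls through to case's trailing default expression):

(explain-defcon-level :round-house)  ;=> :o-rly
(explain-defcon-level :whatever)     ;=> :say-what?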
The keywords:
Can look themselves up in a map: (def m {:abba 1 :beef 2}), then (:abba m) => 1
Are easy to stringify (name :foo) => "foo"
Are easy to make from strings (keyword "bar") => :bar
Are performant: fast to compare and don't waste memory
: is just a reader form for creating keywords.
Keywords in clojure are data structures, just like symbols, strings or numbers.
Keywords are used as keys in hashmaps because they implement IFn for invoke(), allowing you to write things like (:mykey my-map) instead of (my-map :mykey) or (get my-map :mykey).
Generally, you could use any data structure as a hashmap key:
(def my-map
  {{:foo :bar} 1
   {:bar :baz} 2})

(my-map {:foo :bar}) ; 1
You can think of Clojure code as consisting of a lot of symbols and literals in s-expressions. When these expressions are evaluated, each form has to resolve to a value. Some of these forms are self-evaluating, i.e. they evaluate to themselves. For example, the string "Hello world" evaluates to itself, and the number 123 evaluates to the number 123.
Other symbols need to be bound to a value. If you just had the symbol fred, it needs to be bound to some value, i.e. (def fred "Hello world"). If the symbol is the first element in a list (s-expression), it must evaluate to a function, i.e. (def fred (fn ...)) or the shorthand (defn fred [] ...). Note, this is simplifying things a little as you also have macros, but they can be ignored for now - they are special, and in fact often referred to as special forms.
Apart from strings and numbers, there is another very useful self-evaluating symbol, the keyword. It is distinguished by the leading ':'. Keywords evaluate to themselves i.e. :fred evaluates to :fred.
Keywords also have some very nice properties, such as fast comparison and efficient use of space. They are also useful when you want a symbol that represents something, but don't want to have to define (bind) it to something before you use it. In hash maps, keywords are also a function of the hash map, so you can do things like (:mykey mymap) instead of (get mymap :mykey).
In general, every symbol must evaluate to a value and the first symbol in a non-quoted list must evaluate to a function. You can quote lists and symbols, which essentially says "don't evaluate me at this point".
With that in mind, you can use any of these symbols as a value in a clojure data structure i.e. you can have vectors of functions, keywords, strings, numbers etc.
In the example you provide, you want your case statement to return some sort of value which you can then presumably use to make some decision later in your program. You could define the return value as a string, such as "You and whose army", and then later compare it to some other string to make a decision. However, you could even make things a bit more robust by defining a binding like
(def a "You and whose army")
and then do things like
(= a return-val)
but it isn't really buying you anything. It requires more typing, quoting, memory and is slower for comparison.
Keywords are often really useful when you are just playing around at the REPL and you want to just test some ideas. Instead of having to write something like ["a" "b" "c"], you can just write [:a :b :c].
I'm just getting started with Clojure and I have no fp experience but the first thing that I've noticed is a heavy emphasis on immutability. I'm a bit confused by the emphasis, however. It looks like you can re-def global variables easily, essentially giving you a way to change state. The most significant difference that I can see is that function arguments are passed by value and can't be re-def(ined) within the function. Here's a repl snippet that shows what I mean:
towers.core=> (def a "The initial string")
#'towers.core/a
towers.core=> a
"The initial string"
towers.core=> (defn mod_a [aStr]
#_=> (prn aStr)
#_=> (prn a)
#_=> (def aStr "A different string")
#_=> (def a "A More Different string")
#_=> (prn aStr)
#_=> (prn a))
#'towers.core/mod_a
towers.core=> a
"The initial string"
towers.core=> (mod_a a)
"The initial string"
"The initial string"
"The initial string"
"A More Different string"
nil
towers.core=> a
"A More Different string"
If I begin my understanding of immutability in clojure by thinking of it as pass-by-value, what am I missing?
Call-by-value and immutability are two entirely distinct concepts. Indeed, one of the advantages of variable immutability is that such variables could be passed by name or reference without any effect on programme behaviour.
In short: don't think of them as linked.
Generally very little is def'd in a Clojure namespace; it's mostly used for values that are needed outside of it. Instead, values are created in let bindings as you need them in your functions.
def is used to define vars, as stated in Clojure Programming:
top level functions and values are all stored in vars, which are defined within the current namespace using the def special form or one of its derivatives.
Your use of def inside a function isn't making a local variable, it's creating a new global var, and you're effectively replacing the old reference with a new one each time.
When you move on to using let, you'll see how immutability works, for instance with things like seqs, which can be consumed more than once without penalty from something else having also read them (unlike, say, an iterator over a list in Java), e.g.
(let [myseq (seq [1 2 3 4 5])
      f (first myseq)
      s (second myseq)
      sum (reduce + myseq)]
  (println f s sum))
;; 1 2 15
As you can see, it doesn't matter that (first myseq) has "taken" an item from the sequence. because the sequence myseq is immutable, it's still the same, and unaffected by the operations on it. Also, notice that there isn't a single def in the code above, the assignment happened in the let bindings where the values myseq, f, s and sum were created (and are immutable within the rest of the sexp).
Yes, immutability is different from pass-by-value, and you've missed a couple of important details of what's going on in your examples:
value mutation versus variable re-binding. Your code exemplifies re-binding, but doesn't actually mutate values.
shadowing. Your local aStr shadows your global aStr, so you can't see the global one -- although it's still there -- so there's no difference between the effects of (def a ...) and (def aStr ...) here. You can verify that the global is created after running your function.
A final point: Clojure doesn't force you to be purely functional -- it has escape hatches, and it's up to you to use them responsibly. Rebinding variables is one of those escape hatches.
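A small sketch of re-binding versus shadowing (the names are purely illustrative):

(def a "global")
(defn show [a]           ; the parameter a shadows the var #'a inside the body
  (def a "re-defined")   ; def always targets the namespace var, never the local
  a)                     ; the local binding wins here

(show "local")  ;=> "local"
a               ;=> "re-defined"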
Just a note that technically Java, and by extension Clojure (on the JVM), is strictly pass-by-value. In many cases the thing passed is a reference to a structure that others may be reading, though because it is immutable nobody will be changing it out from under you. The important point is that mutability and immutability happen after you pass the reference, so, as Marcin points out, they really are distinct.
I think of much of the immutability in Clojure as residing in (most of) the built-in data structure types and (most of) the functions that allow manipulating ... uh, no, modifying ... no, really, constructing new data structures from them. There are array-like things, but you can't modify them, there are lists, but you can't modify them, there are hash maps, but you can't modify them, etc., and the standard tools for using them actually create new data structures even when they look, to a novice, as if they're performing in-place modifications. And all of that does add up to a big difference.
I'm learning Clojure and trying to understand reader, quoting, eval and homoiconicity by drawing parallels to Python's similar features.
In Python, one way to avoid (or postpone) evaluation is to wrap the expression between quotes, eg. '3 + 4'. You can evaluate this later using eval, eg. eval('3 + 4') yielding 7. (If you need to quote only Python values, you can use repr function instead of adding quotes manually.)
In Lisp you use quote or ' for quoting and eval for evaluating, eg. (eval '(+ 3 4)) yielding 7.
So in Python the "quoted" stuff is represented by a string, whereas in Lisp it's represented by a list which has quote as its first item.
My question, finally: why does Clojure allow (eval 3) although 3 is not quoted? Is it just the matter of Lisp style (trying to give an answer instead of error wherever possible) or are there some other reasons behind it? Is this behavior essential to Lisp or not?
The short answer would be that numbers (and symbols, and strings, for example) evaluate to themselves. Quoting instructs Lisp (the reader) to pass along, unevaluated, whatever follows the quote. eval then gets that list as you wrote it, but without the quote, and then evaluates it (in the case of (eval '(+ 3 4)), eval will evaluate a function call (+) over two arguments).
What happens with that last expression is the following:
1. When you hit enter, the expression is evaluated. It contains a normal function call (eval) and some arguments.
2. The arguments are evaluated. The first argument contains a quote, which tells the reader to produce what is after the quote (the actual (+ 3 4) list).
3. There are no more arguments, and the actual function call is evaluated. This means calling the eval function with the list (+ 3 4) as argument.
4. The eval function does the same steps again, finding the normal function + and the arguments, and applies it, obtaining the result.
Other answers have explained the mechanics, but I think the philosophical point is in the different ways lisp and python look at "code". In python, the only way to represent code is as a string, so of course attempting to evaluate a non-string will fail. Lisp has richer data structures for code: lists, numbers, symbols, and so forth. So the expression (+ 1 2) is a list, containing a symbol and two numbers. When evaluating a list, you must first evaluate each of its elements.
So, it's perfectly natural to need to evaluate a number in the ordinary course of running lisp code. To that end, numbers are defined to "evaluate to themselves", meaning they are the same after evaluation as they were before: just a number. The eval function applies the same rules to the bare "code snippet" 3 that the compiler would apply when compiling, say, the third element of a larger expression like (+ 5 3). For numbers, that means leaving it alone.
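A few REPL probes that illustrate this (results as comments):

(eval 3)    ;=> 3  ; a number evaluates to itself
(eval '3)   ;=> 3  ; '3 reads as (quote 3), which evaluates to 3 before eval even runs
(eval ''3)  ;=> 3  ; eval receives the list (quote 3) and returns its second element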
What should 3 evaluate to? It makes the most sense that Lisp evaluates a number to itself. Would we want to require numbers to be quoted in code? That would not be very convenient and extremely problematic:
Instead of
(defun add-fourtytwo (n)
  (+ n 42))
we would have to write
(defun add-fourtytwo (n)
  (+ n '42))
Every number in code would need to be quoted. A missing quote would trigger an error. That's not something one would want to use.
As a side note, imagine what happens when you want to use eval in your code.
(defun example ()
  (eval 3))
Above would be wrong. Numbers would need to be quoted.
(defun example ()
  (eval '3))
Above would be okay, but generating an error at runtime. Lisp evaluates '3 to the number 3. But then calling eval on the number would be an error, since they need to be quoted.
So we would need to write:
(defun example ()
  (eval ''3))
That's not very useful...
Numbers have always been self-evaluating throughout Lisp's history. But in earlier Lisp implementations some other data objects, like arrays, were not self-evaluating. Again, since this is a huge source of errors, Lisp dialects like Common Lisp have defined that all data types (other than lists and symbols) are self-evaluating.
To answer this question we need to look at the definition of eval in Lisp. E.g. in the CLHS there is this definition:
Syntax: eval form => result*
Arguments and Values:
form - a form.
results - the values yielded by the evaluation of form.
Where form is:
1. any object meant to be evaluated.
2. a symbol, a compound form, or a self-evaluating object.
3. (for an operator, as in "<<operator>> form") a compound form having that operator as its first element. "A quote form is a constant form."
In your case the number 3 is a self-evaluating object. A self-evaluating object is a form that is neither a symbol nor a cons. I believe that for Clojure we can just replace cons by list in this definition.
In clojure only lists are interpreted by eval as function calls. Other data structures and objects are evaluated as self-evaluating objects.
'(+ 3 4) is equal to (list '+ 3 4). ' (transformed by the reader into a quote form) just avoids evaluation of the given form. So in the expression (eval '(+ 3 4)), eval takes the list data structure (+ 3 4) as its argument.
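A couple of checks at the Clojure REPL (results as comments):

(= '(+ 3 4) (list '+ 3 4))  ;=> true  ; quoting and building the list by hand give the same data
(eval '(+ 3 4))             ;=> 7
(eval 3)                    ;=> 3     ; a self-evaluating object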
I'm wondering about the purpose, or perhaps more correctly, the tasks of the "reader" during interpretation/compilation of Lisp programs.
From the pre-question research I've just done, it seems to me that a reader (particular to Clojure in this case) can be thought of as a "syntactic preprocessor". Its main duties are the expansion of reader macros and primitive forms. So, two examples:
'cheese --> (quote cheese)
{"a" 1 "b" 2} --> (array-map "a" 1 "b" 2)
So the reader takes in the text of a program (consisting of S-Expressions) and then builds and returns an in-memory data-structure that can be evaluated directly.
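(This is easy to poke at with read-string, which runs just the reader step on a string; results as comments:)

(read-string "'cheese")         ;=> (quote cheese)
(read-string "(+ 3 4)")         ;=> (+ 3 4)  ; a list of a symbol and two numbers, not yet evaluated
(eval (read-string "(+ 3 4)"))  ;=> 7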
How far from the truth is this (and have I over-simplified the whole process)? What other tasks does the reader perform? Considering a virtue of Lisps is their homoiconicity (code as data), why is there a need for lexical analysis (if such is indeed comparable to the job of the reader)?
Thanks!
Generally the reader in Lisp reads s-expressions and returns data structures. READ is an I/O operation: Input is a stream of characters and output is Lisp data.
The printer does the opposite: it takes Lisp data and outputs it as a stream of characters. Thus it can also print Lisp data as external s-expressions.
Note that interpretation means something specific: executing code by an interpreter. But many Lisp systems (including Clojure) use a compiler. The task of computing a value for a Lisp form is usually called evaluation. Evaluation can be implemented by interpretation, by compilation, or by a mix of both.
S-Expression: symbolic expressions. External, textual representation of data. External means that s-expressions are what you see in text files, strings, etc. So s-expressions are made of characters on some, typically external, medium.
Lisp data structures: symbols, lists, strings, numbers, characters, ...
Reader: reads s-expressions and returns Lisp data structures.
Note that s-expressions also are used to encode Lisp source code.
In some Lisp dialects the reader is programmable and table driven (via the so-called read table). This read table contains reader functions for characters. For example the quote ' character is bound to a function that reads an expression and returns the value of (list 'quote expression). The number characters 0..9 are bound to functions that read a number (in reality this might be more complex, since some Lisps allow numbers to be read in different bases).
S-expressions provide the external syntax for data structures.
Lisp programs are written in external form using s-expressions. But not all s-expressions are valid Lisp programs:
(if a b c d e) is usually not a valid Lisp program
the syntax of Lisp usually is defined on top of Lisp data.
IF has for example the following syntax (in Common Lisp http://www.lispworks.com/documentation/HyperSpec/Body/s_if.htm ):
if test-form then-form [else-form]
So it expects a test-form, a then-form and an optional else-form.
As s-expressions the following are valid IF expressions:
(if (foo) 1 2)
(if (bar) (foo))
But since Lisp programs are forms, we can also construct these forms using Lisp programs:
(list 'if '(foo) 1 2) is a Lisp program that returns a valid IF form.
CL-USER 24 > (describe (list 'if '(foo) 1 2))
(IF (FOO) 1 2) is a LIST
0 IF
1 (FOO)
2 1
3 2
This list can for example be executed with EVAL. EVAL expects list forms - not s-expressions. Remember s-expressions are only an external representation. To create a Lisp form, we need to READ it.
This is why it is said code is data. Lisp forms are expressed as internal Lisp data structures: lists, symbols, numbers, strings, .... In most other programming languages code is raw text. In Lisp s-expressions are raw text. When read with the function READ, s-expressions are turned into data.
Thus the basic interactive top level in Lisp is called the REPL, the Read Eval Print Loop.
It is a LOOP that repeatedly reads an s-expression, evaluates the resulting Lisp form, and prints the result:
READ : s-expression -> lisp data
EVAL : lisp form -> resulting lisp data
PRINT: lisp data -> s-expression
So the most primitive REPL is:
(loop (print (eval (read))))
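A rough Clojure analogue of that pipeline, to make the READ / EVAL / PRINT separation concrete (read-string plays the role of READ here):

(let [form (read-string "(if (> 2 1) :yes :no)")]  ; READ: characters -> Lisp data
  (prn form)   ; PRINT: data -> characters; prints (if (> 2 1) :yes :no)
  (eval form)) ; EVAL: data -> value, here :yes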
Thus from a conceptual point of view, to answer your question, during evaluation the reader does nothing. It is not involved in evaluation. Evaluation is done by the function EVAL. The reader is invoked by a call to READ. Since EVAL uses Lisp data structures as input (and not s-expressions) the reader is run before the Lisp form gets evaluated (for example by interpretation or by compiling and executing it).