Clojure let vs Common Lisp let

In Common Lisp, let uses a list for its bindings, e.g.:
(let ((var1 1)
      (var2 2))
  ...)
While Clojure uses a vector instead:
(let [a 1
      b 2]
  ...)
Is there any specific reason, other than readability, for Clojure to use a vector?

You can find Rich Hickey's argument at Simple Made Easy - slide 14, about 26 minutes in:

Rich's line on this was as follows:
"Since we were talking about syntax, let’s look at
classic Lisp. It seems to be the simplest of syntax, everything is a
parenthesized list of symbols, numbers, and a few other things. What
could be simpler? But in reality, it is not the simplest, since to
achieve that uniformity, there has to be substantial overloading of
the meaning of lists. They might be function calls, grouping
constructs, or data literals, etc. And determining which requires
using context, increasing the cognitive load when scanning code to
assess its meaning. Clojure adds a couple more composite data literals
to lists, and uses them for syntax. In doing so, it means that lists
are almost always call-like things, and vectors are used for grouping,
and maps have their own literals. Moving from one data structure to
three reduces the cognitive load substantially."
One of the things he believed was overloaded in the standard syntax was access time: the vector syntax in arguments is related to the constant access time you get when you use them.
That seems odd, though, as it is only valid for that one form: as soon as the value is stored in a variable or passed in any way, the information is 'lost'. For example:
(defn test [a]
  (nth a 0)) ;; <- what is the access time of an element of a?
I personally prefer harsh syntax changes like brackets to be reserved for when the programmer has to switch mental models e.g. for embedded languages.
;; Example showing a possible syntax for an embedded prolog.
{size([],0)}
{size([H|T],N) :- size(T,N1), N is N1+1}
(size '(1 2 3 4) 'n) ;; here we are back to lisp code
Such a concept is syntactically constant. You don't 'pass around' structure at runtime. Before runtime (read/macro/compile time) is another matter though so where possible it is often better to keep things as lists.
[edit]
The original source seems to have gone, but here is another record of the interview: https://gist.github.com/rduplain/c474a80d173e6ae78980b91bc92f43d1#file-code-quarterly-rich-hickey-2011-md


Is Clojure less homoiconic than other lisps? [closed]

A claim that I recall being repeated in the Clojure for Lisp Programmers videos is that a great weakness of the earlier Lisps, particularly Common Lisp, is that too much is married to the list structure of Lisps, particularly cons cells. You can find one occurrence of this claim just at the 25-minute mark in the linked video, but I'm sure that I can remember hearing it elsewhere in the series. Importantly, at that same point in the video, we see a slide showing that Clojure has many other data structures than just the old-school Lispy lists.
This troubles me. My knowledge of Lisp is quite limited, but I've always been told that a key element of its legendary metaprogrammability is that everything - yes, everything - is a list, and that this is what prevents the sort of errors that we get when trying to metaprogram other languages. Does this suggest that by adding new first-class data structures, Clojure has reduced its homoiconicity, thus making it more difficult to metaprogram than other Lisps, such as Common Lisp?
is that everything - yes, everything - is a list
That has never been true of Lisp:
CL-USER 1 > (defun what-is-it? (thing)
              (format t "~%~s is of type ~a.~%" thing (type-of thing))
              (format t "It is ~:[not ~;~]a list.~%" (listp thing))
              (values))
WHAT-IS-IT?
CL-USER 2 > (what-is-it? "hello world")
"hello world" is of type SIMPLE-TEXT-STRING.
It is not a list.
CL-USER 3 > (what-is-it? #2a((0 1) (2 3)))
#2A((0 1) (2 3)) is of type (SIMPLE-ARRAY T (2 2)).
It is not a list.
CL-USER 4 > (defstruct foo bar baz)
FOO
CL-USER 5 > (what-is-it? #S(foo :bar oops :baz zoom))
#S(FOO :BAR OOPS :BAZ ZOOM) is of type FOO.
It is not a list.
CL-USER 6 > (what-is-it? 23749287349723/840283423)
23749287349723/840283423 is of type RATIO.
It is not a list.
and because Lisp is a programmable programming language, we can add external representations for non-list data types:
Add a primitive notation for a FRAME class.
CL-USER 10 > (defclass frame () (slots))
#<STANDARD-CLASS FRAME 4210359BEB>
The printer:
CL-USER 11 > (defmethod print-object ((o frame) stream)
               (format stream "[~{~A~^ ~}]"
                       (when (and (slot-boundp o 'slots)
                                  (slot-value o 'slots))
                         (slot-value o 'slots))))
#<STANDARD-METHOD PRINT-OBJECT NIL (FRAME T) 40200011C3>
The reader:
CL-USER 12 > (set-macro-character
              #\[
              (lambda (stream char)
                (let ((slots (read-delimited-list #\] stream))
                      (o (make-instance 'frame)))
                  (when slots
                    (setf (slot-value o 'slots) slots))
                  o)))
T
CL-USER 13 > (set-syntax-from-char #\] #\))
T
Now we can read/print these objects:
CL-USER 14 > [a b]
[A B]
CL-USER 15 > (what-is-it? [a b])
[A B] is of type FRAME.
It is not a list.
Let's break the word "homoiconicity" apart.
homo-, meaning "same"
iconic, here meaning something like "represented by"
-ity, meaning "something having such a property"
The quality that makes macros possible (or at least much easier) is that the language itself is represented with the same data structures that you use to represent other objects. Note that the word is not "monolistism", or "everything is a list". Since Clojure uses the same data structures to describe its syntax as it does to describe other values at runtime, there is no basis for claiming it is not homoiconic.
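A small sketch (the form and names are made up for illustration) showing that Clojure source, once read, is ordinary data that ordinary collection functions can manipulate:

;; Reading source text yields plain data structures:
(def form (read-string "(defn add [x y] (+ x y))"))

(type form)          ;=> clojure.lang.PersistentList   - the whole form is a list
(type (nth form 2))  ;=> clojure.lang.PersistentVector - the parameter vector

;; Rewriting code is plain data manipulation:
(replace {'add 'plus} form)
;=> (defn plus [x y] (+ x y))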
Funnily enough, I might say that one of the ways that Clojure is least homoiconic is a feature that it actually shares with older lisps: it uses symbols to represent source code. In other lisps, this is very natural, because symbols are used for lots of other things. Clojure has keywords, though, which are used much more often to give names to data at runtime. Symbols are quite rarely used; I would go so far as to say that their primary usage is for representing source code elements! But all of the other features Clojure uses to represent source code elements are frequently used to represent other things as well: sequences, vectors, maps, strings, numbers, keywords, occasionally sets, and probably some other niche stuff I'm not thinking of now.
In Clojure, the only one of those extra data structures that is required for some language constructs, besides lists, is the vector, and vectors appear in well-known places such as around the sequence of arguments to a function, or in the symbol/expression pairs of a let. All of them can be used for data literals, but data literals are less often something you want to involve when writing a Clojure macro, as compared to function calls and macro invocations, which are always in lists.
I am not aware of anything in Clojure that makes writing macros more difficult than in Common Lisp, and there are a few features distinct to Clojure that can make it a little easier, such as the fact that Clojure's backquoted expressions namespace-qualify symbols by default, which is often what you want in order to avoid accidentally 'capturing' a name.
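For example (assuming a REPL in the user namespace; the exact auto-generated symbol name will differ):

`(inc x)
;=> (clojure.core/inc user/x)

;; x# expands to a unique generated symbol, avoiding capture:
`(let [x# 1] x#)
;=> (clojure.core/let [x__42__auto__ 1] x__42__auto__)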
As previous answerers pointed out, "homoiconicity" means nothing more than the famous "code is data": the code of a programming language consists entirely of data structures of that language, and is thus easily built and manipulated with the same functions the language ordinarily uses to manipulate those data structures.
Since Clojure data structures include lists, vectors, maps ... (those you listed) and since Clojure code is made of lists, vectors, maps ... Clojure code is homoiconic.
The same is true for Common Lisp and other lisps.
Common Lisp also has lists, vectors, hash tables, etc.
However, the code of Common Lisp sticks more rigidly to lists than Clojure's does (function parameters are in an argument list, not in a vector as in Clojure).
So you can more consistently use functions to manipulate lists for metaprogramming purposes (macros).
Whether this is "more" homoiconic than Clojure is a philosophical question.
But you definitely need a smaller set of functions for manipulating the code than in Clojure.

Evaluation of arguments in function called by macro

Macros do not evaluate their arguments until explicitly told to do so, whereas functions do. In the following code:
(defn blah [xs] ;; xs is an unquoted list, yet not evaluated
  (println xs)
  xs)

(defmacro foo [xs] ;; blah must be defined before foo for this to compile
  (println xs (type xs)) ;; unquoted list
  (blah xs))
(foo (+ 1 2 3))
It seems that blah does not evaluate xs, since we still have the entire list: (+ 1 2 3) bound to xs in the body of blah.
I have basically just memorized this interaction between helper functions within macros and their evaluation of arguments, but to be honest it goes against my instincts (that xs would be evaluated before entering the body, since function arguments are always evaluated).
My thinking was basically: "ok, in this macro body I have xs as the unevaluated list, but if I call a function with xs from within the macro it should evaluate that list".
Clearly I have an embarrassingly fundamental misunderstanding here of how things work. What am I missing in my interpretation? How does the evaluation actually occur?
EDIT
I've thought on this a bit more and it seems to me that maybe viewing macro arguments as "implicitly quoted" would solve some confusion on my part.
I think I just got mixed up in the various terminologies, but given that quoted forms are synonymous with unevaluated forms, and given macro arguments are unevaluated, they are implicitly quoted.
So in my above examples, saying xs is unquoted is somewhat misleading. For example, this macro:
(defmacro bluh [xs]
  `(+ 1 2 ~xs))
Is basically the same as the below macro (excluding namespacing on the symbols). Resolving xs in the call to list gives back an unevaluated (quoted?) list.
(defmacro bleh [xs]
  (list '+ '1 '2 xs)) ;; xs resolves to a quoted list (or effectively quoted)
Calling bleh (or bluh) is the same as saying:
(list '+ '1 '2 '(+ 1 2 3))
;; => (+ 1 2 (+ 1 2 3))
If xs did not resolve to a quoted list, then we would end up with:
(list '+ '1 '2 (+ 1 2 3))
;; => (+ 1 2 6)
So, in short, macro arguments are quoted.
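A quick way to check this, using the bluh and bleh macros above, is macroexpand-1:

(macroexpand-1 '(bluh (+ 1 2 3)))
;=> (clojure.core/+ 1 2 (+ 1 2 3))

(macroexpand-1 '(bleh (+ 1 2 3)))
;=> (+ 1 2 (+ 1 2 3))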
I think part of my confusion came from thinking of syntax-quoted forms as templates with slots filled in, e.g. (+ 1 2 ~xs), which I would mentally expand to (+ 1 2 (+ 1 2 3)); seeing that (+ 1 2 3) was not quoted in that expansion, I found it confusing that function calls using xs (in the first example above, blah) would not evaluate it immediately to 6.
The template metaphor is helpful, but if I instead look at it as a shortcut for (list '+ '1 '2 xs), it becomes obvious that xs must be a quoted list; otherwise the expansion would include 6 and not the entire list.
I'm not sure why I found this so confusing... have I got this right or did I just go down the wrong path entirely?
[This answer is an attempt to explain why macros and functions which don't evaluate their arguments are different things. I believe this applies to macros in Clojure but I am not an expert on Clojure. It's also much too long, sorry.]
I think you are confusing what Lisp calls macros with a construct which modern Lisps don't have but which used to be called FEXPRs.
There are two interesting, different, things you might want:
functions which, when called, do not immediately evaluate their arguments;
syntax transformers, which are called macros in Lisp.
I'll deal with them in order.
Functions which do not immediately evaluate their arguments
In a conventional Lisp, a form like (f x y ...), where f is a function, will:
establish that f is a function and not some special thing;
get the function corresponding to f and evaluate x, y, & the rest of the arguments in some order specified by the language (which may be 'in an unspecified order');
call f with the results of evaluating the arguments.
Step (1) is needed initially because f might be a special thing (like, say, if, or quote), and it might be that the function definition is retrieved in (1) as well: all of this, as well as the order in which things happen in (2), is something the language needs to define (or, in the case of Scheme, say, leave explicitly undefined).
This ordering, and in particular the ordering of (2) & (3) is known as applicative order or eager evaluation (I'll call it applicative order below).
But there are other possibilities. One such is that the arguments are not evaluated: the function is called, and only when the values of the arguments are needed are they evaluated. There are two approaches to doing this.
The first approach is to define the language so that all functions work this way. This is called lazy evaluation or normal order evaluation (I'll call it normal order below). In a normal order language function arguments are evaluated, by magic, at the point they are needed. If they are never needed then they may never be evaluated at all. So in such a language (I am inventing the syntax for function definition here so as not to commit CL or Clojure or anything else):
(def foo (x y z)
  (if x y z))
Only one of y or z will be evaluated in a call to foo.
In a normal order language you don't need to explicitly care about when things get evaluated: the language makes sure that they are evaluated by the time they're needed.
Normal order languages seem like they'd be an obvious win, but they tend to be quite hard to work with, I think. There are two problems, one obvious and one less so:
side-effects happen in a less predictable order than they do in applicative order languages and may not happen at all, so people used to writing in an imperative style (which is most people) find them hard to deal with;
even side-effect-free code can behave differently than in an applicative order language.
The side-effect problem could be treated as a non-problem: we all know that code with side-effects is bad, right, so who cares about that? But even without side-effects things are different. For instance here's a definition of the Y combinator in a normal order language (this is kind of a very austere, normal order subset of Scheme):
(define Y
  ((λ (y)
     (λ (f)
       (f ((y y) f))))
   (λ (y)
     (λ (f)
       (f ((y y) f))))))
If you try to use this version of Y in an applicative order language -- like ordinary Scheme -- it will loop for ever. Here's the applicative order version of Y:
(define Y
  ((λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))
   (λ (y)
     (λ (f)
       (f (λ (x)
            (((y y) f) x)))))))
You can see it's kind of the same, but there are extra λs in there which essentially 'lazify' the evaluation to stop it looping.
The second approach to normal order evaluation is to have a language which is mostly applicative order but in which there is some special mechanism for defining functions which don't evaluate their arguments. In this case there often would need to be some special mechanism for saying, in the body of the function, 'now I want the value of this argument'. Historically such things were called FEXPRs, and they existed in some very old Lisp implementations: Lisp 1.5 had them, and I think that both MACLISP & InterLisp had them as well.
In an applicative order language with FEXPRs, you need somehow to be able to say 'now I want to evaluate this thing', and I think this is the problem you are running up against: at what point does the thing decide to evaluate the arguments? Well, in a really old Lisp which is purely dynamically scoped there's a disgusting hack to do this: when defining a FEXPR you can just pass in the source of the argument and then, when you want its value, you just call EVAL on it. That's a terrible implementation because it means that FEXPRs can never really be compiled properly, and you have to use dynamic scope, so variables can never really be compiled away. But this is how some (all?) early implementations did it.
But this implementation of FEXPRs allows an amazing hack: if you have a FEXPR which has been given the source of its arguments, and you know that this is how FEXPRs work, then, well, it can manipulate that source before calling EVAL on it: it can call EVAL on something derived from the source instead. And, in fact, the 'source' it gets given doesn't even need to be strictly legal Lisp at all: it can be something which the FEXPR knows how to manipulate to make something that is. That means you can, all of a sudden, extend the syntax of the language in pretty general ways. But the cost of being able to do that is that you can't compile any of this: the syntax you construct has to be interpreted at runtime, and the transformation happens each time the FEXPR is called.
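To make the idea concrete, here is a loose sketch of a FEXPR-style construct in Clojure (my-if-fexpr is made up; it receives source and calls eval on demand, and it shares the problems just described, since eval sees only global bindings and defeats compilation):

(defn my-if-fexpr [test-src then-src else-src]
  (if (eval test-src)   ; evaluate the test source only now
    (eval then-src)     ; and evaluate only the branch that is needed
    (eval else-src)))

(my-if-fexpr '(< 1 2) '(println "then") '(println "else"))
;; prints only "then"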
Syntax transformers: macros
So, rather than use FEXPRs, you can do something else: you could change the way that evaluation works so that, before anything else happens, there is a stage during which the code is walked over and possibly transformed into some other code (simpler code, perhaps). And this need happen only once: once the code has been transformed, then the resulting thing can be stashed somewhere, and the transformation doesn't need to happen again. So the process now looks like this:
code is read in and structure built from it;
this initial structure is possibly transformed into other structure;
(the resulting structure is possibly compiled);
the resulting structure, or the result of compiling it is evaluated, probably many times.
So now the process of evaluation is divided into several 'times', which don't overlap (or don't overlap for a particular definition):
read time is when the initial structure is built;
macroexpansion time is when it is transformed;
compile time (which may not happen) is when the resulting thing is compiled;
evaluation time is when it is evaluated.
Well, compilers for all languages probably do something like this: before actually turning your source code into something that the machine understands they will do all sorts of source-to-source transformations. But these things are in the guts of the compiler and are operating on some representation of the source which is idiosyncratic to that compiler and not defined by the language.
Lisp opens this process to users. The language has two features which make this possible:
the structure that is created from source code once it has been read is defined by the language and the language has a rich set of tools for manipulating this structure;
the structure created is rather 'low commitment' or austere -- it does not particularly predispose you to any interpretation in many cases.
As an example of the second point, consider (in "my.file"): that's a function call of a function called in, right? Well, maybe: (with-open-file (in "my.file") ...) almost certainly is not a function call, but a binding of in to a filehandle.
Because of these two features of the language (and in fact some others I won't go into) Lisp can do a wonderful thing: it can let users of the language write these syntax-transforming functions -- macros -- in portable Lisp.
The only thing that remains is to decide how these macros should be notated in source code. And the answer is: the same way as functions. When you define some macro m you use it just as (m ...) (some Lisps support more general things, such as CL's symbol macros). At macroexpansion time -- after the program is read but before it is (compiled and) run -- the system walks over the structure of the program looking for things which have macro definitions: when it finds them it calls the function corresponding to the macro with the source code specified by its arguments, and the macro returns some other chunk of source code, which gets walked in turn until there are no macros left (and yes, macros can expand to code involving other macros, and even to code involving themselves). Once this process is complete the resulting code can be (compiled and) run.
So although macros look like function calls in the code, they are not just functions which don't evaluate their arguments, like FEXPRs were: instead they are functions which take a bit of Lisp source code and return another bit of Lisp source code: they're syntax transformers, or functions which operate on source code (syntax) and return other source code. Macros run at macroexpansion time, which is properly before evaluation time (see above).
So, in fact, macros are functions, written in Lisp, and the functions they call evaluate their arguments perfectly conventionally: everything is perfectly ordinary. But the arguments to macros are programs (or the syntax of programs represented as Lisp objects of some kind) and their results are (the syntax of) other programs. Macros are functions at the meta-level, if you like. So a macro is a function which computes (parts of) programs: those programs may later themselves be run (perhaps much later, perhaps never), at which point the evaluation rules will be applied to them. But at the point a macro is called, what it's dealing with is just the syntax of programs, not evaluating parts of that syntax.
So, I think your mental model is that macros are something like FEXPRs, in which case the 'how does the argument get evaluated' question is an obvious thing to ask. But they're not: they're functions which compute programs, and they run properly before the program they compute is run.
Sorry this answer has been so long and rambling.
What happened to FEXPRs?
FEXPRs were always pretty problematic. For instance, what should (apply f ...) do? Since f might be a FEXPR, but this can't generally be known until runtime, it's quite hard to know what the right thing to do is.
So I think that two things happened:
in the cases where people really wanted normal order languages, they implemented those, and for those languages the evaluation rules dealt with the problems FEXPRs were trying to deal with;
in applicative order languages, if you want not to evaluate some argument, you now do it explicitly, using constructs such as delay to construct a 'promise' and force to force evaluation of a promise -- because the semantics of the languages improved, it became possible to implement promises entirely in the language (CL does not have promises, but implementing them is essentially trivial).
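Clojure, for instance, has delay and force built in; a small sketch of such explicit laziness:

;; delay wraps a computation in a promise-like object without running it:
(def p (delay (do (println "computing...") 42)))

(realized? p) ;=> false - nothing has been evaluated yet
(force p)     ;; prints "computing...", returns 42
(force p)     ;=> 42 - the result is cached; nothing prints again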
Is the history I've described correct?
I don't know: I think it may be but it may also be a rational reconstruction. I certainly, in very old programs in very old Lisps, have seen FEXPRs being used the way I describe. I think Kent Pitman's paper, Special Forms in Lisp may have some of the history: I've read it in the past but had forgotten about it until just now.
A macro definition is a definition of a function that transforms code. The input for the macro function are the forms in the macro call. The return value of the macro function will be treated as code inserted where the macro form was. Clojure code is made of Clojure data structures (mostly lists, vectors, and maps).
In your foo macro, you define the macro function to return whatever blah did to your code. Since blah is (almost) the identity function, it just returns whatever was its input.
What is happening in your case is the following:
The string "(foo (+ 1 2 3))" is read, producing a nested list with two symbols and three integers: (foo (+ 1 2 3)).
The foo symbol is resolved to the macro foo.
The macro function foo is invoked with its argument xs bound to the list (+ 1 2 3).
The macro function (prints and then) calls the function blah with the list.
blah (prints and then) returns that list.
The macro function returns the list.
The macro is thus “expanded” to (+ 1 2 3).
The symbol + is resolved to the addition function.
The addition function is called with three arguments.
The addition function returns their sum.
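You can watch this happen with macroexpand-1 (assuming blah and foo are defined as in the question):

(macroexpand-1 '(foo (+ 1 2 3)))
;; prints: (+ 1 2 3) clojure.lang.PersistentList   (from foo)
;; prints: (+ 1 2 3)                               (from blah)
;;=> (+ 1 2 3)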
If you wanted the macro foo to expand to a call to blah, you need to return such a form. Clojure provides a templating convenience syntax using backquote, so that you do not have to use list etc. to build the code:
(defmacro foo [xs]
  `(blah ~xs))
which is like:
(defmacro foo [xs]
  (list 'blah xs))

Enums and Clojure

In the Java/C world, people often use enums. If I'm using a Java library which uses enums, I can convert between them and keywords, for example, using (. java.lang.Enum valueOf e..., (aget ^"[Ljava.lang.Enum;" (. e (getEnumConstants)) i), and some reflection. But in the Clojure world, do people ever need anything like an enum (a named integer)? If not, how is their code structured so that they don't need them? If yes, what's the equivalent? I sense I'm really asking about indices (for looping), which are rarely used in functional programming (I've used map-indexed only once so far).
For almost all the Clojure code I have seen, keywords tend to be used instead of enums: they are namespaced and have all the other useful properties of keywords, while being much easier to write. They are not an exact stand-in because they are more dynamic (as in dynamic typing) than Java enums.
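For example, keywords can carry a namespace, which gives enum-like qualified names without any declaration (the keywords here are made up for illustration):

:color/red                 ;=> :color/red (keywords evaluate to themselves)
(namespace :color/red)     ;=> "color"
(name :color/red)          ;=> "red"
(= :color/red :fruit/red)  ;=> false - same name, different namespace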
As for indexing and looping, I find it more idiomatic to map over a sequence of keywords:
(map do-stuff [:a :b :c :d] (range))
than to loop over the values in an enumeration, which I have yet to find an example of in Clojure code, though an example very likely exists ;-)
As Arthur points out - keywords are typically used in Clojure in place of enums.
You won't see numbered indexes used much - they aren't particularly idiomatic in Clojure (or most other functional programming languages)
Some other options worth being aware of:
Use Java enums directly - e.g. java.util.concurrent.TimeUnit/SECONDS
Define your own constants with vars if you want numeric values - e.g. (def ^:const PURPLE 1)
You could even implement a macro defenum if you wanted more advanced enum features with a simple syntax.
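A minimal sketch of what such a defenum might look like (hypothetical, not from any library; it ignores advanced features such as lookup by number):

(defmacro defenum
  "Binds each symbol to a successive integer constant."
  [enum-name & syms]                 ; enum-name is unused in this minimal sketch
  `(do ~@(map-indexed (fn [i s] `(def ~s ~i)) syms)))

(defenum color RED GREEN BLUE)
RED   ;=> 0
GREEN ;=> 1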
Yes, use keywords in most places where Java programmers would use enums. In the rare case that you need a number for each of them, you can simply define a map for converting: {:dog 0, :rabbit 1, ...}.
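A sketch of that round trip (the map contents are made up for illustration):

(require '[clojure.set :as set])

(def animal->ord {:dog 0, :rabbit 1, :cat 2})
(def ord->animal (set/map-invert animal->ord)) ; {0 :dog, 1 :rabbit, 2 :cat}

(animal->ord :rabbit) ;=> 1
(ord->animal 1)       ;=> :rabbit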
On the other hand, one of the first Clojure libraries I wrote was just this: a defenum macro that assigned numbers to symbols and created conversion systems back and forth. It's a terrible idea implemented reasonably well, so feel free to have a look but I don't recommend you use it.

Can I use the clojure 'for' macro to reverse a string?

This is a follow up to my question "Recursively reverse a sequence in Clojure".
Is it possible to reverse a sequence using the Clojure "for" macro? I'm trying to better understand the limitations and use-cases of this macro.
Here is the code I'm starting from:
(defn reverse-with-for [s]
  (for [c s] c))
Possible?
If so, I assume the solution may require wrapping the for macro in some expression that defines a mutable var, or that the body-expr of the for macro will somehow pass a sequence to the next iteration (similar to map).
The Clojure for macro works with arbitrary Clojure sequences.
These sequences may or may not expose random access the way vectors do. So, in the general case, you do not have access to the last element of a Clojure sequence without traversing all the way to it, which makes a pass through it in reverse order impossible.
I'm assuming you had something like this in mind (Java-like pseudocode):
for (int i = n - 1; i >= 0; i--) {
    doSomething(array[i]);
}
In this example we know the array size n in advance and we can access elements by index. With Clojure sequences we don't know that. In Java it makes sense to do this with arrays and ArrayLists. Clojure sequences, however, are much more like linked lists - you have an element and a reference to the next one.
Btw, even if there were a (probably non-idiomatic)* way to do that, its time complexity would be something like O(n^2), which is just not worth the effort compared to the much easier solution in the linked post, which is O(n^2) for lists and a much better O(n) for vectors (and it is quite elegant and idiomatic; in fact, the official reverse has that implementation).
EDIT:
Some general advice: don't try to do imperative programming in Clojure; it wasn't designed for it. Although many things may seem strange or counter-intuitive compared to well-known idioms from imperative programming, once you get used to the functional way of doing things it is a lot, and I mean a lot, easier.
Specifically for this question: despite the same name, Java (and other C-like) for and Clojure for are not the same thing! The first is an actual loop - it defines a flow of control. The second is a comprehension - look at it conceptually as a higher-order function taking a sequence and a function f to apply to each of its elements, returning another sequence of the f(element)s. Java's for is a statement: it doesn't evaluate to anything. Clojure's for (like everything else in Clojure) is an expression - it evaluates to the sequence of f(element)s.
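A couple of small examples of for as an expression (results shown in comments):

(for [x [1 2 3]] (* x x))               ;=> (1 4 9)
(for [x (range 10) :when (even? x)] x)  ;=> (0 2 4 6 8)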
Probably the easiest way to get the idea is to play with the sequence functions library: http://clojure.org/sequences. Also, you can solve some problems on http://www.4clojure.com/. The first problems are very easy, but they gradually get harder as you progress through them.
*As shown in Alexandre's answer, the solution to the problem is in fact idiomatic and quite clever. Kudos for that! :)
Here's how you could reverse a string with for:
(defn reverse-with-for [s]
  (apply str
         (for [i (range (dec (count s)) -1 -1)]
           (get s i))))
Note that this code is mutation free. It's the same as:
(defn reverse-with-map [s]
  (apply str
         (map (partial get s) (range (dec (count s)) -1 -1))))
A simpler solution would be:
(apply str (reverse s))
First of all, as Goran said, for is not a statement - it is an expression, namely a sequence comprehension. It constructs sequences by iterating through other sequences. In the form it is meant to be used, it is a pure function (without side effects). for can be seen as an enhanced map infused with filter. Because of this it cannot be used to hold iteration state as e.g. reduce does.
Secondly, you can express sequence reversal using for and mutable state, e.g. using an atom, which is a rough equivalent (leaving aside its concurrency properties) of a Java variable. But in doing so you face several problems:
You are breaking the main language paradigm, so you will definitely get worse-looking and worse-behaving code.
Since all Clojure mutable state cells are designed to be thread-safe, they all use some kind of protection against concurrent modification, and there is no way to remove it. Consequently, you will get poorer performance characteristics.
In this particular case, as Goran said, sequences are one of the widely used Clojure abstractions. For example, there are lazy sequences, which can be potentially infinite, so you just cannot walk them to the end. You will certainly have difficulties trying to work with such sequences using imperative techniques.
So don't do it, at least in Clojure :)
EDIT: I forgot to mention it. for returns a lazy sequence, so you would have to force its evaluation in some way in order for any state mutations inside it to take effect. Another reason not to do this :)
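To make the warning concrete, here is a sketch of the discouraged approach (reverse-with-atom is made up for illustration; note the doall needed to force the lazy sequence so the mutations actually happen):

(defn reverse-with-atom [s]
  (let [acc (atom ())]                      ; conj onto a list prepends
    (doall (for [c s] (swap! acc conj c)))  ; side effects hidden in a lazy seq
    (apply str @acc)))

(reverse-with-atom "hello") ;=> "olleh"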

Scheme and Clojure don't have the atom type predicate - is this by design?

Common LISP and Emacs LISP have the atom type predicate. Scheme and Clojure don't have it. http://hyperpolyglot.wikidot.com/lisp
Is there a design reason for this - or is it just not an essential function to include in the API?
In Clojure, the atom predicate isn't so important because Clojure emphasizes various other types of (immutable) data structures rather than focusing on cons cells / lists.
It could also cause confusion. How would you expect this function to behave when given a hashmap, a set or a vector for example? Or a Java object that represents some complex mutable data structure?
Also the name "atom" is used for something completely different - it's one of Clojure's core concurrency mechanisms to manage shared, synchronous, independent state.
Clojure has the coll? (collection?) function, which is (sort of) the inverse of atom?.
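For example, an atom? analogue can be sketched in terms of coll? (defined here only for illustration; it is unrelated to Clojure's atom reference type):

(coll? [1 2 3]) ;=> true
(coll? {:a 1})  ;=> true
(coll? "hello") ;=> false
(coll? 42)      ;=> false

(defn atom? [x] (not (coll? x))) ; not the concurrency atom from clojure.core
(atom? :kw)    ;=> true
(atom? '(1 2)) ;=> false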
In the book The Little Schemer, atom? is defined as follows:
(define (atom? x)
  (and (not (pair? x))
       (not (null? x))))
Note that null is not considered an atom here, unlike in some other answers. In the mentioned book atom? is used heavily, in particular when writing procedures that deal with lists of lists.
In the entire IronScheme standard libraries which implement R6RS, I never needed such a function.
In summary:
It is useless
It is easy enough to write if you need it
Which pretty much follows Scheme's minimalistic approach.
In Scheme anything that is not a pair is an atom. As Scheme already defines the predicate pair?, the atom? predicate is not needed, as it is so trivial to define:
(define (atom? s)
  (not (pair? s)))
It's a trivial function:
(defun atom (x)
  (not (consp x)))
It is used in list processing, when the Lisp dialect uses conses to build lists. There are some 'Lisps' for which this is not the case or not central.
An atom is either a symbol, a character, a number, or null.
(define (atom? a)
  (or (symbol? a)
      (char? a)
      (number? a)
      (null? a)))
I think those are all the atoms that exist; if you find more, add them to the conditional expression. For example, if you think a string is an atom, add (string? a) :-). The absence of a definition for atom allows you to define it the way you want. After all, Scheme does not know what an atom is.
In Lisp nil is an atom, so I've made null an atom. nil is also a list (it is the empty list), much as the integers are rational numbers by simplification: 2 = 2/1, where 2 is an integer and 2/1 is a rational number, and since the two are equal by simplification of the rational one, one says the integer 2 is also a rational number. But the list predicate is already defined in Scheme, so there is nothing to worry about.
About the question: as far as I am concerned, Scheme has predicates only for class types; atom is not a class type, it is an abstraction that incorporates several class types. Maybe that is the reason. But pair is not a class type either (though it does not incorporate several class types), and yet some may consider pair a class type.
Atom means that a certain thing is not a compound thing. One reason not to include such a predicate is that the language may allow you to define atomic types, so the plethora of atoms can grow wider and wider, and such a predicate would make no sense. I don't know if Scheme allows for this. I can only say that Scheme's built-in predicates are all specific. You can ask "is this an apple?", "is this an orange?"; but you cannot ask "is this a fruit?" :-). Well, you can, if you do it yourself. Despite what I said, Scheme has a general predicate number?, and number-specific predicates, integer?, rational?, real?; notwithstanding, number can be thought of as a class type (the other predicates refer to sub-types of number), whereas atom is not (at least in Scheme).
Note:
class types: types that belong to a certain class of things. Example:
number, integer, real, rational, character, procedure, list, vector, string, etc.