(conj (drop-last "abcde") (last "abcde"))
returns (\e \a \b \c \d)
I am confused. In the docs for conj, I notice:
The 'addition' may happen at different 'places' depending on the concrete type.
Does it mean that for LazySeq, the place to add the new item is the head?
How can I get (\a \b \c \d \e) as the result?
'The 'addition' may happen at different 'places' depending on the concrete type.'
This refers to the behavior of Clojure's persistent collections, which perform the addition wherever it is most efficient for the underlying implementation.
Vectors always add to the end of the collection:
user=> (conj [1 2 3] 4)
[1 2 3 4]
With Lists, conj puts the item at the front of the list, as you've noticed:
user=> (conj '(1 2 3) 4)
(4 1 2 3)
So, yes, a LazySeq is treated like a List with respect to its concrete implementation.
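You can verify this at the REPL; map returns a LazySeq, and conj prepends to it just as it would to a list:
user=> (conj (map inc [1 2 3]) 0)
(0 2 3 4)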
How can I get (\a \b \c \d \e) as the result?
There are a number of ways, but you could easily create a vector from your LazySeq:
(conj (vec (drop-last "abcde"))
      (last "abcde"))
It is important to realize that conj simply delegates to the implementation of cons on the IPersistentCollection interface in Clojure's Java implementation. Depending on the data structure being dealt with, it can therefore behave differently.
The intent behind conj is that it will always add an item to the data structure in the way that is most efficient.
For lists the most efficient spot to put it is the front. For vectors the most efficient spot to put it is at the end.
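The same reasoning extends to sets and maps, where conj adds wherever the implementation finds efficient:
(conj #{1 2 3} 4)
;=> #{1 2 3 4} (sets are unordered; printed order may vary)
(conj {:a 1} [:b 2])
;=> {:a 1, :b 2}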
Related
For me, as a new Clojurian, some core functions seem rather counter-intuitive and confusing when it comes to argument order/position. Here's an example:
> (nthrest (range 10) 5)
=> (5 6 7 8 9)
> (take-last 5 (range 10))
=> (5 6 7 8 9)
Perhaps there is some rule/logic behind it that I don't see yet?
I refuse to believe that the Clojure core team made so many brilliant technical decisions and forgot about consistency in function naming/argument ordering.
Or should I just remember it as it is?
Thanks
Slightly off-topic:
rand & rand-int vs. random-sample is another example where function naming seems inconsistent, but that's a rather rarely used function, so it's not a big deal.
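For reference, the functions in question (outputs are random; shown only for shape):
(rand)                          ;=> e.g. 0.7316 (random)
(rand-int 10)                   ;=> e.g. 4 (random)
(random-sample 0.5 (range 10))  ;=> e.g. (1 4 5 9); note the coll comes last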
There is an FAQ on Clojure.org for this question: https://clojure.org/guides/faq#arg_order
What are the rules of thumb for arg order in core functions?
Primary collection operands come first. That way one can write -> and its ilk, and their position is independent of whether or not they have variable arity parameters. There is a tradition of this in OO languages and Common Lisp (slot-value, aref, elt).
One way to think about sequences is that they are read from the left, and fed from the right:
<- [1 2 3 4]
Most of the sequence functions consume and produce sequences. So one way to visualize that is as a chain:
map <- filter <- [1 2 3 4]
and one way to think about many of the seq functions is that they are parameterized in some way:
(map f) <- (filter pred) <- [1 2 3 4]
So, sequence functions take their source(s) last, and any other parameters before them, and partial allows for direct parameterization as above. There is a tradition of this in functional languages and Lisps.
Note that this is not the same as taking the primary operand last. Some sequence functions have more than one source (concat, interleave). When sequence functions are variadic, it is usually in their sources.
Adapted from comments by Rich Hickey.
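To make the chain pictured above concrete, it can be written directly with ->>, since every seq function takes its source last:
(->> [1 2 3 4]
     (filter even?)
     (map inc))
;=> (3 5)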
Functions that work with seqs usually take the actual seq as the last argument
(map, filter, remove, etc.).
Accessing and "changing" individual elements takes a collection as first element: conj, assoc, get, update
That way, you can use the (->>) macro with a collection consistently,
as well as create transducers consistently.
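A quick sketch of both conventions side by side:
; seq functions take the seq last, so they chain with ->>
(->> (range 6) (filter even?) (map inc))
;=> (1 3 5)
; collection functions take the collection first, so they chain with ->
(-> {:a 1} (assoc :b 2) (update :a inc))
;=> {:a 2, :b 2}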
Only rarely does one have to resort to (as->) to change the argument order. And if you have to do so, it might be an opportunity to check whether your own functions follow that convention.
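For example, as-> names the threaded value so it can sit in either position:
(as-> (range 10) $
  (filter odd? $) ; seq comes last
  (into [] $)     ; collection comes first
  (conj $ 99))    ; collection comes first
;=> [1 3 5 7 9 99]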
For some functions (especially functions that are "seq in, seq out"), the args are ordered so that one can use partial as follows:
(ns tst.demo.core
  (:use tupelo.core tupelo.test))

(dotest
  (let [dozen      (range 12)
        odds-1     (filterv odd? dozen)
        filter-odd (partial filterv odd?)
        odds-2     (filter-odd dozen)]
    (is= odds-1 odds-2
         [1 3 5 7 9 11])))
For other functions, Clojure often follows the ordering of "biggest-first", or "most-important-first" (usually these have the same result). Thus, we see examples like:
(get <map> <key>)
(get <map> <key> <default-val>)
This also shows that any optional values must, by definition, be last (in order to use "rest" args). This is common in most languages (e.g. Java).
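For instance:
(get {:a 1} :a)       ;=> 1
(get {:a 1} :b)       ;=> nil
(get {:a 1} :b :none) ;=> :none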
For the record, I really dislike using partial functions, since they have user-defined names (at best) or are used inline (more common). Consider this code:
(let [dozen   (range 12)
      odds    (filterv odd? dozen)
      evens-1 (mapv (partial + 1) odds)
      evens-2 (mapv #(+ 1 %) odds)
      add-1   (fn [arg] (+ 1 arg))
      evens-3 (mapv add-1 odds)]
  (is= evens-1 evens-2 evens-3
       [2 4 6 8 10 12]))
Also, I personally find it really annoying trying to parse out code using partial, as with evens-1, especially in the case of user-defined functions, or even standard functions that are not as simple as +.
This is especially so if the partial is used with 2 or more args.
For the 1-arg case, the function literal seen for evens-2 is much more readable to me.
If 2 or more args are present, please make a named function (either local, as shown for evens-3), or a regular (defn some-fn ...) global function.
Clojure has a handy (into to-coll from-coll) function, which adds the elements of from-coll to to-coll, retaining to-coll's type.
How can this one be implemented in common lisp?
The first attempt would be
(defun into (seq1 seq2)
  (concatenate (type-of seq1) seq1 seq2))
but this one obviously fails, since type-of includes the vector's length in its result, disallowing the addition of more elements (as of SBCL), though it still works when the first argument is a list (while still failing for the empty list).
The question is: is it possible to write this kind of function without resorting to generic methods and/or complex processing of the type-of result (e.g. removing the length for vectors/arrays)?
I'm okay with into acting as append (in contrast with Clojure, where the result of into depends on the target collection type). Let's call it concat-into.
In Clojure, you have a concrete idea (most of the time) of what kind that first collection is when you use into, because it changes the semantics: if it is a list, additional elements will be conjed onto the front; if it is a vector, they will be conjed onto the back; if it is a map, you need to supply map entry designators (i.e. actual map entries or two-element vectors); sets are more flexible but also carry their own semantics. That's why I'd guess that using concatenate directly, explicitly supplying the type, is probably a good enough fit for many use cases.
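For reference, here is the Clojure behaviour being described:
(into '(1 2) [3 4])    ;=> (4 3 1 2), conjed onto the front
(into [1 2] '(3 4))    ;=> [1 2 3 4], conjed onto the back
(into {:a 1} [[:b 2]]) ;=> {:a 1, :b 2}, from map entry designators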
Other than that, I think that it could be useful to extend this functionality (Common Lisp only has a closed set of sequence types), but for that, it seems too obviously convenient to use generic functions to ignore. It is not trivial to provide a solution that is extensible, generic, and performant.
EDIT: To summarize: no, you can't get that behaviour with clever application of one or two “built-ins”, but you can certainly write an extensible and generic solution using generic functions.
OK, the only thing I've come up with (besides generic methods) is this dead-simple function:
(defun into (target source)
  (let ((target-type (etypecase target
                       (vector (list 'array (array-element-type target) '(*)))
                       (list 'list))))
    (concatenate target-type target source)))
CL-USER> (into (list 1 2 4) "asd")
;;=> (1 2 4 #\a #\s #\d)
CL-USER> (into #*0010 (list 1 1 0 0))
;;=> #*00101100
CL-USER> (into "asdasd" (list #\a #\b))
;;=> "asdasdab"
Also, a simple empty implementation:
(defun empty (target)
  (etypecase target
    (vector (make-array 0
                        :element-type (array-element-type target)
                        :adjustable t
                        :fill-pointer 0))
    (list)))
The result indeed (as @Svante noted) doesn't have the exact same type, but rather is "a collection with the same element type as that of target". It doesn't conform to Clojure's protocol (where a list target would be prepended to).
I can't see where it's flawed (if it is), so it would be nice to hear about that. Anyway, as it was only for the sake of education, this will do.
If I perform a side-effecting/mutating operation on individual data structures specific to each member of lazy sequence using map, do I need to (a) call doall first, to force realization of the original sequence before performing the imperative operations, or (b) call doall to force the side-effects to occur before I map a functional operation over the resulting sequence?
I believe that no doalls are necessary when there are no dependencies between elements of any sequence, since map can't apply a function to a member of a sequence until the functions from maps that produced that sequence have been applied to the corresponding element of the earlier sequence. Thus, for each element, the functions will be applied in the proper sequence, even though one of the functions produces side effects that a later function depends on. (I know that I can't assume that any element a will have been modified before element b is, but that doesn't matter.)
Is this correct?
That's the question, and if it's sufficiently clear, then there's no need to read further. The rest describes what I'm trying to do in more detail.
My application has a sequence of defrecord structures ("agents") each of which contains some core.matrix vectors (vec1, vec2) and a core.matrix matrix (mat). Suppose that for the sake of speed, I decide to (destructively, not functionally) modify the matrix.
The program applies the following three steps to each of the agents by calling map three times, once per step.
Update a vector vec1 in each agent, functionally, using assoc.
Modify a matrix mat in each agent based on the preceding vector (i.e. the matrix will retain a different state).
Update a vector vec2 in each agent using assoc based on the state of the matrix produced by step 2.
For example, where persons is a sequence, possibly lazy (EDIT: Added outer doalls):
(doall
  (->> persons
       (map #(assoc % :vec1 (calc-vec1 %)))            ; update vec1 from person
       (map update-mat-from-vec1!)                      ; modify mat based on state of vec1
       (map #(assoc % :vec2 (calc-vec2-from-mat %)))))  ; update vec2 based on state of mat
Alternatively:
(doall
  (map #(assoc % :vec2 (calc-vec2-from-mat %))     ; update vec2 based on state of mat
       (map update-mat-from-vec1!                  ; modify mat based on state of vec1
            (map #(assoc % :vec1 (calc-vec1 %))    ; update vec1 from person
                 persons))))
Note that no agent's state depends on the state of any other agent at any point. Do I need to add doalls?
EDIT: Overview of answers as of 4/16/2014:
I recommend reading all of the answers given, but it may seem as if they conflict. They don't, and I thought it might be useful if I summarized the main ideas:
(1) The answer to my question is "Yes": If, at the end of the process I described, one causes the entire lazy sequence to be realized, then what is done to each element will occur according to the correct sequence of steps (1, 2, 3). There is no need to apply doall before or after step 2, in which each element's data structure is mutated.
(2) But: This is a very bad idea; you are asking for trouble in the future. If at some point you inadvertently end up realizing all or part of the sequence at a time other than what you originally intended, it could turn out that the later steps get values from the data structure that were put there at the wrong time, at a time other than what you expect. The step that mutates a per-element data structure won't happen until a given element of the lazy seq is realized, so if you realize it at the wrong time, you could get the wrong data in later steps. This could be the kind of bug that is very difficult to track down. (Thanks to @A.Webb for making this problem very clear.)
Use extreme caution mixing laziness with side effects
(defrecord Foo [fizz bang])
(def foos (map ->Foo (repeat 5 0) (map atom (repeat 5 1))))
(def foobars (map #(assoc % :fizz @(:bang %)) foos))
So will my fizz of foobars now be 1?
(:fizz (first foobars)) ;=> 1
Cool, now I'll leave foobars alone and work with my original foos...
(doseq [foo foos] (swap! (:bang foo) (constantly 42)))
Let's check on foobars
(:fizz (first foobars)) ;=> 1
(:fizz (second foobars)) ;=> 42
Whoops...
Generally, use doseq instead of map for your side effects or be aware of the consequences of delaying your side effects until realization.
You do not need to add any calls to doall, provided you do something with the results later in your program. For instance, if you run the above maps and do nothing with the result, then none of the elements will be realized. On the other hand, if you read through the resulting sequence, to print it for instance, then each of your computations will happen in order on each element sequentially. That is, steps 1, 2, and 3 will happen to the first thing in the input sequence, then steps 1, 2, and 3 will happen to the second, and so forth. There is no need to pre-realize sequences to ensure the values are available; lazy evaluation will take care of that.
You don't need to add doall between two map operations. But unless you're working in a REPL, you do need to add doall or dorun to force the execution of your lazy sequence.
This is true, unless you care about the order of operations.
Let's consider the following example:
(defn f1 [x]
  (print "1>" x ", ")
  x)

(defn f2 [x]
  (print "2>" x ", ")
  x)

(defn foo [mycoll]
  (->> mycoll
       (map f1)
       (map f2)
       dorun))
By default, Clojure will take the first chunk of mycoll and apply f1 to all elements of this chunk. Then it'll apply f2 to the resulting chunk.
So, if mycoll is a list or an ordinary (unchunked) lazy sequence, you'll see that f1 and f2 are applied to each element in turn:
=> (foo (list \a \b))
1> a , 2> a , 1> b , 2> b , nil
or
=> (->> (iterate inc 7) (take 2) foo)
1> 7 , 2> 7 , 1> 8 , 2> 8 , nil
But if mycoll is a vector or chunked lazy sequence, you'll see quite a different thing:
=> (foo [\a \b])
1> a , 1> b , 2> a , 2> b , nil
Try
=> (foo (range 50))
and you'll see that it processes elements in chunks of 32.
So, be careful using lazy calculations with side effects!
Here are some hints for you:
Always end your command with doall or dorun to force the calculation.
Use doall and comp to control the order of calculations, e.g.:
(->> [\a \b]
     ; apply both f1 and f2 before moving to the next element
     (map (comp f2 f1))
     dorun)
(->> (list \a \b)
     (map f1)
     ; process the whole sequence before applying f2
     doall
     (map f2)
     dorun)
map always produces a lazy result, even for a non-lazy input. You should call doall (or dorun if the sequence will never be used and the mapping is only done for side effects) on the output of map if you need to force some imperative side effect (for example use a file handle or db connection before it is closed).
user> (do (map println [0 1 2 3]) nil)
nil
user> (do (doall (map println [0 1 2 3])) nil)
0
1
2
3
nil
What's the one-level sequence flattening function in Clojure? I am using apply concat for now, but I wonder if there is a built-in function for that, either in standard library or clojure-contrib.
My general first choice is apply concat. Also, don't overlook (for [subcoll coll, item subcoll] item) -- depending on the broader context, this may result in clearer code.
There's no standard function. apply concat is a good solution in many cases. Or you can equivalently use mapcat seq.
The problem with apply concat is that it fails when anything other than a collection/sequential is at the first level:
(apply concat [1 [2 3] [4 [5]]])
=> IllegalArgumentException Don't know how to create ISeq from: java.lang.Long...
Hence you may want to do something like:
(defn flatten-one-level [coll]
  (mapcat #(if (sequential? %) % [%]) coll))
(flatten-one-level [1 [2 3] [4 [5]]])
=> (1 2 3 4 [5])
As a more general point, the lack of a built-in function should not usually stop you from defining your own :-)
I use apply concat too - I don't think there's anything else in the core.
flatten works on multiple levels (and is defined via a tree-walk, not in terms of repeated single-level expansion).
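To see the difference:
(flatten [1 [2 [3 [4]]]])
;=> (1 2 3 4)
(apply concat [[1] [2 [3]] [4]])
;=> (1 2 [3] 4)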
See also Clojure: Semi-Flattening a nested Sequence, which has a flatten-1 from clojure-mvc (and which is much more complex than I expected).
Update to clarify laziness:
user=> (take 3 (apply concat (for [i (range 1e6)] (do (print i) [i]))))
012345678910111213141516171819202122232425262728293031(0 1 2)
You can see that it evaluates the argument 32 times - this is chunking for efficiency, and it is otherwise lazy (it doesn't evaluate the whole list). For a discussion of chunking, see the comments at the end of http://isti.bitbucket.org/2012/04/01/pipes-clojure-choco-1.html
What are some common mistakes made by Clojure developers, and how can we avoid them?
For example, newcomers to Clojure think that the contains? function works the same as java.util.Collection#contains. However, contains? only works similarly when used with indexed collections like maps and sets, where you're looking for a given key:
(contains? {:a 1 :b 2} :b)
;=> true
(contains? {:a 1 :b 2} 2)
;=> false
(contains? #{:a 1 :b 2} :b)
;=> true
When used with numerically indexed collections (vectors, arrays), contains? only checks that the given element is within the valid range of indexes (zero-based):
(contains? [1 2 3 4] 4)
;=> false
(contains? [1 2 3 4] 0)
;=> true
If given a list, contains? will never return true.
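If you want value membership rather than key membership, a common idiom is some with a set as the predicate:
(some #{2} '(1 2 3))
;=> 2 (truthy)
(some #{5} '(1 2 3))
;=> nil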
Literal Octals
At one point I was reading in a matrix which used leading zeros to maintain proper rows and columns. Mathematically this is correct, since a leading zero obviously doesn't alter the underlying value. Attempts to define a var with this matrix, however, would fail mysteriously with:
java.lang.NumberFormatException: Invalid number: 08
which totally baffled me. The reason is that Clojure treats literal integer values with leading zeros as octals, and there is no number 08 in octal.
I should also mention that Clojure supports traditional Java hexadecimal values via the 0x prefix. You can also use any base between 2 and 36 by using the "base+r+value" notation, such as 2r101010 or 36r16, both of which are 42 in base ten.
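For example:
010       ;=> 8 (octal literal)
0x2A      ;=> 42 (hexadecimal)
2r101010  ;=> 42 (base 2)
36r16     ;=> 42 (base 36)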
Trying to return literals in an anonymous function literal
This works:
user> (defn foo [key val]
        {key val})
#'user/foo
user> (foo :a 1)
{:a 1}
so I believed this would also work:
(#({%1 %2}) :a 1)
but it fails with:
java.lang.IllegalArgumentException: Wrong number of args passed to: PersistentArrayMap
because the #() reader macro gets expanded to
(fn [%1 %2] ({%1 %2}))
with the map literal wrapped in parentheses. Since it's the first element, it's treated as a function (which a literal map actually is), but no required arguments (such as a key) are provided. In summary, the anonymous function literal does not expand to
(fn [%1 %2] {%1 %2}) ; notice the lack of parentheses
and so you can't have any literal value ([], :a, 4, %) as the body of the anonymous function.
Two solutions have been given in the comments. Brian Carper suggests using sequence implementation constructors (array-map, hash-set, vector) like so:
(#(array-map %1 %2) :a 1)
while Dan shows that you can use the identity function to unwrap the outer parentheses:
(#(identity {%1 %2}) :a 1)
Brian's suggestion actually brings me to my next mistake...
Thinking that hash-map or array-map determine the unchanging concrete map implementation
Consider the following:
user> (class (hash-map))
clojure.lang.PersistentArrayMap
user> (class (hash-map :a 1))
clojure.lang.PersistentHashMap
user> (class (assoc (apply array-map (range 2000)) :a :1))
clojure.lang.PersistentHashMap
While you generally won't have to worry about the concrete implementation of a Clojure map, you should know that functions which grow a map - like assoc or conj - can take a PersistentArrayMap and return a PersistentHashMap, which performs faster for larger maps.
Using a function as the recursion point rather than a loop to provide initial bindings
When I started out, I wrote a lot of functions like this:
; Project Euler #3
(defn p3
  ([] (p3 775147 600851475143 3))
  ([i n times]
   (if (and (divides? i n) (fast-prime? i times))
     i
     (recur (dec i) n times))))
When in fact loop would have been more concise and idiomatic for this particular function:
; Elapsed time: 387 msecs
(defn p3 [] {:post [(= % 6857)]}
  (loop [i 775147, n 600851475143, times 3]
    (if (and (divides? i n) (fast-prime? i times))
      i
      (recur (dec i) n times))))
Notice that I replaced the empty-argument "default constructor" function body (p3 775147 600851475143 3) with a loop plus initial bindings. The recur now rebinds the loop bindings (instead of the fn parameters) and jumps back to the recursion point (the loop, instead of the fn).
Referencing "phantom" vars
I'm speaking about the type of var you might define using the REPL - during your exploratory programming - then unknowingly reference in your source. Everything works fine until you reload the namespace (perhaps by closing your editor) and later discover a bunch of unbound symbols referenced throughout your code. This also happens frequently when you're refactoring, moving a var from one namespace to another.
Treating the for list comprehension like an imperative for loop
Essentially you're creating a lazy list based on existing lists rather than simply performing a controlled loop. Clojure's doseq is actually more analogous to imperative foreach looping constructs.
One example of how they're different is the ability to filter which elements they iterate over using arbitrary predicates:
user> (for [n '(1 2 3 4) :when (even? n)] n)
(2 4)
user> (for [n '(4 3 2 1) :while (even? n)] n)
(4)
Another way they're different is that they can operate on infinite lazy sequences:
user> (take 5 (for [x (iterate inc 0) :when (> (* x x) 3)] (* 2 x)))
(4 6 8 10 12)
They can also handle more than one binding expression, iterating over the rightmost expression fastest and working their way left:
user> (for [x '(1 2 3) y '(\a \b \c)] (str x y))
("1a" "1b" "1c" "2a" "2b" "2c" "3a" "3b" "3c")
There's also no break or continue to exit prematurely.
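By contrast, doseq accepts the same binding modifiers but runs eagerly, purely for side effects, and returns nil:
user> (doseq [n '(1 2 3 4) :when (even? n)] (println n))
2
4
nil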
Overuse of structs
I come from an OOPish background so when I started Clojure my brain was still thinking in terms of objects. I found myself modeling everything as a struct because its grouping of "members", however loose, made me feel comfortable. In reality, structs should mostly be considered an optimization; Clojure will share the keys and some lookup information to conserve memory. You can further optimize them by defining accessors to speed up the key lookup process.
Overall you don't gain anything from using a struct over a map except for performance, so the added complexity might not be worth it.
Using unsugared BigDecimal constructors
I needed a lot of BigDecimals and was writing ugly code like this:
(let [foo (BigDecimal. "1")
      bar (BigDecimal. "42.42")
      baz (BigDecimal. "24.24")]
  ...)
when in fact Clojure supports BigDecimal literals by appending M to the number:
(= (BigDecimal. "42.42") 42.42M) ; true
Using the sugared version cuts out a lot of the bloat. In the comments, twils mentioned that you can also use the bigdec and bigint functions to be more explicit, yet remain concise.
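For example (on a recent Clojure, where big integers print with an N suffix):
(bigdec 42.42) ;=> 42.42M
(bigint 42)    ;=> 42N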
Using the Java package naming conventions for namespaces
This isn't actually a mistake per se, but rather something that goes against the idiomatic structure and naming of a typical Clojure project. My first substantial Clojure project had namespace declarations - and corresponding folder structures - like this:
(ns com.14clouds.myapp.repository)
which bloated up my fully-qualified function references:
(com.14clouds.myapp.repository/load-by-name "foo")
To complicate things even more, I used a standard Maven directory structure:
|-- src/
| |-- main/
| | |-- java/
| | |-- clojure/
| | |-- resources/
| |-- test/
...
which is more complex than the "standard" Clojure structure of:
|-- src/
|-- test/
|-- resources/
which is the default of Leiningen projects and Clojure itself.
Maps utilize Java's equals() rather than Clojure's = for key matching
Originally reported by chouser on IRC, this usage of Java's equals() leads to some unintuitive results:
user> (= (int 1) (long 1))
true
user> ({(int 1) :found} (int 1) :not-found)
:found
user> ({(int 1) :found} (long 1) :not-found)
:not-found
Since both Integer and Long instances of 1 are printed the same by default, it can be difficult to detect why your map isn't returning any values. This is especially true when you pass your key through a function which, perhaps unbeknownst to you, returns a long.
It should be noted that using Java's equals() instead of Clojure's = is essential for maps to conform to the java.util.Map interface.
I'm using Programming Clojure by Stuart Halloway, Practical Clojure by Luke VanderHart, and the help of countless Clojure hackers on IRC and the mailing list to help along my answers.
Forgetting to force evaluation of lazy seqs
Lazy seqs aren't evaluated unless you ask them to be evaluated. You might expect this to print something, but it doesn't.
user=> (defn foo [] (map println [:foo :bar]) nil)
#'user/foo
user=> (foo)
nil
The map is never evaluated, it's silently discarded, because it's lazy. You have to use one of doseq, dorun, doall etc. to force evaluation of lazy sequences for side-effects.
user=> (defn foo [] (doseq [x [:foo :bar]] (println x)) nil)
#'user/foo
user=> (foo)
:foo
:bar
nil
user=> (defn foo [] (dorun (map println [:foo :bar])) nil)
#'user/foo
user=> (foo)
:foo
:bar
nil
Using a bare map at the REPL kind of looks like it works, but it only works because the REPL forces evaluation of lazy seqs itself. This can make the bug even harder to notice, because your code works at the REPL and doesn't work from a source file or inside a function.
user=> (map println [:foo :bar])
(:foo
:bar
nil nil)
I'm a Clojure noob. More advanced users may have more interesting problems.
trying to print infinite lazy sequences.
I knew what I was doing with my lazy sequences, but for debugging purposes I inserted some print/prn/pr calls, temporarily having forgotten what it is I was printing. Funny, why's my PC all hung up?
trying to program Clojure imperatively.
There is some temptation to create a whole lot of refs or atoms and write code that constantly mucks with their state. This can be done, but it's not a good fit. It may also perform poorly and will rarely benefit from multiple cores.
trying to program Clojure 100% functionally.
A flip side to this: Some algorithms really do want a bit of mutable state. Religiously avoiding mutable state at all costs may result in slow or awkward algorithms. It takes judgement and a bit of experience to make the decision.
trying to do too much in Java.
Because it's so easy to reach out to Java, it's sometimes tempting to use Clojure as a scripting language wrapper around Java. Certainly you'll need to do exactly this when using Java library functionality, but there's little sense in (e.g.) maintaining data structures in Java, or using Java data types such as collections for which there are good equivalents in Clojure.
Lots of things already mentioned. I'll just add one more.
Clojure's if treats a Java Boolean object as true even if its value is false, unless it is the canonical Boolean/FALSE instance. So if you have a Java-land function that returns a Java Boolean value, make sure you do not check it directly
(if java-bool "Yes" "No")
but rather
(if (boolean java-bool) "Yes" "No").
I got burned by this with clojure.contrib.sql library that returns database boolean fields as java Boolean objects.
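A minimal REPL demonstration (deliberately constructing a non-canonical Boolean, which you should never do in real code):
(if (Boolean. false) "Yes" "No")
;=> "Yes"
(if (boolean (Boolean. false)) "Yes" "No")
;=> "No"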
Keeping your head in loops.
You risk running out of memory if you loop over the elements of a potentially very large, or infinite, lazy sequence while keeping a reference to the first element.
Forgetting there's no TCO.
Regular tail-calls consume stack space, and they will overflow if you're not careful. Clojure has 'recur and 'trampoline to handle many of the cases where optimized tail-calls would be used in other languages, but these techniques have to be intentionally applied.
Not-quite-lazy sequences.
You may build a lazy sequence with 'lazy-seq or 'lazy-cons (or by building upon higher-level lazy APIs), but if you wrap it in 'vec or pass it through some other function that realizes the sequence, then it will no longer be lazy. Both the stack and the heap can overflow as a result.
Putting mutable things in refs.
You can technically do it, but only the object reference in the ref itself is governed by the STM - not the referred object and its fields (unless they are immutable and point to other refs). So whenever possible, prefer to put only immutable objects in refs. The same goes for atoms.
using loop ... recur to process sequences when map will do.
(defn work [data]
  (when (seq data) ; stop when the seq is exhausted
    (do-stuff (first data))
    (recur (rest data))))
vs.
(map do-stuff data)
The map function (in the latest branch) uses chunked sequences and many other optimizations. Also, because this function is frequently run, the HotSpot JIT usually has it optimized and ready to go without any "warm-up time".
Collection types have different behaviors for some operations:
user=> (conj '(1 2 3) 4)
(4 1 2 3) ;; new element at the front
user=> (conj [1 2 3] 4)
[1 2 3 4] ;; new element at the back
user=> (into '(3 4) (list 5 6 7))
(7 6 5 3 4)
user=> (into [3 4] (list 5 6 7))
[3 4 5 6 7]
Working with strings can be confusing (I still don't quite get them). Specifically, strings are not the same as sequences of characters, even though sequence functions work on them:
user=> (filter #(> (int %) 96) "abcdABCDefghEFGH")
(\a \b \c \d \e \f \g \h)
To get a string back out, you'd need to do:
user=> (apply str (filter #(> (int %) 96) "abcdABCDefghEFGH"))
"abcdefgh"
too many parentheses, especially with a void Java method call inside, which results in an NPE:
public void foo() {}
((.foo obj)) ; obj here stands for some instance of the class above
results in an NPE from the outer parentheses, because the inner parentheses evaluate to nil.
public int bar() { return 5; }
((.bar obj))
results in the easier to debug:
java.lang.Integer cannot be cast to clojure.lang.IFn
[Thrown class java.lang.ClassCastException]