What's the one-level sequence flattening function in Clojure?

What's the one-level sequence flattening function in Clojure? I am using apply concat for now, but I wonder if there is a built-in function for that, either in standard library or clojure-contrib.

My general first choice is apply concat. Also, don't overlook (for [subcoll coll, item subcoll] item) -- depending on the broader context, this may result in clearer code.

There's no standard function. apply concat is a good solution in many cases. Or you can equivalently use mapcat seq.
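For example, both give the same one-level flattening when every top-level element is itself a collection:
(apply concat [[1 2] [3] [4 5]])
=> (1 2 3 4 5)
(mapcat seq [[1 2] [3] [4 5]])
=> (1 2 3 4 5)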
The problem with apply concat is that it fails when anything other than a collection/sequential appears at the first level:
(apply concat [1 [2 3] [4 [5]]])
=> IllegalArgumentException Don't know how to create ISeq from: java.lang.Long...
Hence you may want to do something like:
(defn flatten-one-level [coll]
  (mapcat #(if (sequential? %) % [%]) coll))
(flatten-one-level [1 [2 3] [4 [5]]])
=> (1 2 3 4 [5])
As a more general point, the lack of a built-in function should not usually stop you from defining your own :-)

I use apply concat too - I don't think there's anything else in the core.
flatten is multiple levels (and is defined via a tree-walk, not in terms of repeated single-level expansion).
See also Clojure: Semi-Flattening a nested Sequence, which has a flatten-1 from clojure mvc (and which is much more complex than I expected).
Update to clarify laziness:
user=> (take 3 (apply concat (for [i (range 1e6)] (do (print i) [i]))))
012345678910111213141516171819202122232425262728293031(0 1 2)
You can see that it evaluates the argument 32 times - this is chunking for efficiency, and the result is otherwise lazy (it doesn't evaluate the whole list). For a discussion of chunking see the comments at the end of http://isti.bitbucket.org/2012/04/01/pipes-clojure-choco-1.html

Related

Clojure Core function argument positions seem rather confusing. What's the logic behind it?

For me, as a new Clojurian, some core functions seem rather counter-intuitive and confusing when it comes to argument order/position. Here's an example:
> (nthrest (range 10) 5)
=> (5 6 7 8 9)
> (take-last 5 (range 10))
=> (5 6 7 8 9)
Perhaps there is some rule/logic behind it that I don't see yet?
I refuse to believe that the Clojure core team made so many brilliant technical decisions and forgot about consistency in function naming/argument ordering.
Or should I just remember it as it is?
Thanks
Slightly off-topic:
rand & rand-int vs. random-sample - another example where function naming seems inconsistent, but that's a rather rarely used function so it's not a big deal.
There is an FAQ on Clojure.org for this question: https://clojure.org/guides/faq#arg_order
What are the rules of thumb for arg order in core functions?
Primary collection operands come first. That way one can write -> and its ilk, and their position is independent of whether or not they have variable arity parameters. There is a tradition of this in OO languages and Common Lisp (slot-value, aref, elt).
One way to think about sequences is that they are read from the left, and fed from the right:
<- [1 2 3 4]
Most of the sequence functions consume and produce sequences. So one way to visualize that is as a chain:
map <- filter <- [1 2 3 4]
and one way to think about many of the seq functions is that they are parameterized in some way:
(map f) <- (filter pred) <- [1 2 3 4]
So, sequence functions take their source(s) last, and any other parameters before them, and partial allows for direct parameterization as above. There is a tradition of this in functional languages and Lisps.
Note that this is not the same as taking the primary operand last. Some sequence functions have more than one source (concat, interleave). When sequence functions are variadic, it is usually in their sources.
Adapted from comments by Rich Hickey.
Functions that work with seqs usually take the actual seq as the last argument
(map, filter, remove etc.)
Accessing and "changing" individual elements takes the collection as the first argument: conj, assoc, get, update
That way, you can use the (->>) macro with a collection consistently,
as well as create transducers consistently.
Only rarely does one have to resort to (as->) to change argument order. And if you have to do so, it might be an opportunity to check whether your own functions follow that convention.
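A small illustration of the two conventions (a sketch; results shown as at the REPL):
;; collection-first functions thread cleanly with ->
(-> {:a 1} (assoc :b 2) (update :a inc))
=> {:a 2, :b 2}
;; seq-last functions thread cleanly with ->>
(->> (range 10) (filter odd?) (map inc) (take 3))
=> (2 4 6)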
For some functions (especially functions that are "seq in, seq out"), the args are ordered so that one can use partial as follows:
(ns tst.demo.core
  (:use tupelo.core tupelo.test))

(dotest
  (let [dozen      (range 12)
        odds-1     (filterv odd? dozen)
        filter-odd (partial filterv odd?)
        odds-2     (filter-odd dozen)]
    (is= odds-1 odds-2
      [1 3 5 7 9 11])))
For other functions, Clojure often follows the ordering of "biggest-first", or "most-important-first" (usually these have the same result). Thus, we see examples like:
(get <map> <key>)
(get <map> <key> <default-val>)
This also shows that any optional values must, by definition, be last (in order to use "rest" args). This is common in most languages (e.g. Java).
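For example, the optional default of get comes last:
(get {:a 1} :a)
=> 1
(get {:a 1} :b :none)
=> :none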
For the record, I really dislike using partial functions, since they have user-defined names (at best) or are used inline (more common). Consider this code:
(let [dozen   (range 12)
      odds    (filterv odd? dozen)
      evens-1 (mapv (partial + 1) odds)
      evens-2 (mapv #(+ 1 %) odds)
      add-1   (fn [arg] (+ 1 arg))
      evens-3 (mapv add-1 odds)]
  (is= evens-1 evens-2 evens-3
    [2 4 6 8 10 12]))
Also, I personally find it really annoying trying to parse out code using partial, as with evens-1, especially for user-defined functions, or even standard functions that are not as simple as +.
This is especially so if the partial is used with 2 or more args.
For the 1-arg case, the function literal seen for evens-2 is much more readable to me.
If 2 or more args are present, please make a named function, either local (as shown for evens-3) or a regular global function via (defn some-fn ...).

testing if something is an empty list

Which way should I prefer to test if an object is an empty list in Clojure? Note that I want to test just this and not whether it is empty as a sequence. If it is a "lazy entity" (LazySeq, Iterate, ...), I don't want it to get realized.
Below I give some possible tests for x.
;0
(= clojure.lang.PersistentList$EmptyList (class x))
;1
(and (list? x) (empty? x))
;2
(and (list? x) (zero? (count x)))
;3
(identical? () x)
Test 0 is a little low level and relies on "implementation details". My first version of it was (instance? clojure.lang.PersistentList$EmptyList x), which gives an IllegalAccessError. Why is that so? Shouldn't such a test be possible?
Tests 1 and 2 are higher level and more general, since list? checks if something implements IPersistentList. I guess they are slightly less efficient too. Notice that the order of the two sub-tests is important as we rely on short-circuiting.
Test 3 works under the assumption that every empty list is the same object. The tests I have done confirm this assumption but is it guaranteed to hold? Even if it is so, is it a good practice to rely on this fact?
All this may seem trivial but I was a bit puzzled not finding a completely straightforward solution (or even a built-in function) for such a simple task.
update
Perhaps I did not formulate the question very well. In retrospect, I realized that what I wanted to test was if something is a non-lazy empty sequence. The most crucial requirement for my use case is that, if it is a lazy sequence, it does not get realized, i.e. no thunk gets forced.
Using the term "list" was a little confusing. After all what is a list? If it is something concrete like PersistentList, then it is non-lazy. If it is something abstract like IPersistentList (which is what list? tests and probably the correct answer), then non-laziness is not exactly guaranteed. It just so happens that Clojure's current lazy sequence types do not implement this interface.
So first of all I need a way to test if something is a lazy sequence. The best solution I can think of right now is to use IPending to test for laziness in general:
(def lazy? (partial instance? clojure.lang.IPending))
Although there are some lazy sequence types (e.g. chunked sequences like Range and LongRange) that do not implement IPending, it seems reasonable to expect that lazy sequences implement it in general. LazySeq does so and this is what really matters in my specific use case.
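A quick check of this at the REPL, using the lazy? defined above (the concrete types, and therefore the results, may vary between Clojure versions):
(lazy? (map inc [1 2 3]))
=> true  ; LazySeq implements IPending
(lazy? (list 1 2 3))
=> false
(lazy? (range 5))
=> false ; LongRange does not implement IPending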
Now, relying on short-circuiting to prevent realization by empty? (and to prevent giving it an unacceptable argument), we have:
(defn empty-eager-seq? [x] (and (not (lazy? x)) (seq? x) (empty? x)))
Or, if we know we are dealing with sequences like in my case, we can use the less restrictive:
(defn empty-eager? [x] (and (not (lazy? x)) (empty? x)))
Of course we can write safe tests for more general types like:
(defn empty-eager-coll? [x] (and (not (lazy? x)) (coll? x) (empty? x)))
(defn empty-eager-seqable? [x] (and (not (lazy? x)) (seqable? x) (empty? x)))
That being said, the recommended test 1 also works for my case, thanks to short-circuiting and the fact that LazySeq does not implement IPersistentList. Given this and that the question's formulation was suboptimal, I will accept Lee's succinct answer and thank Alan Thompson for his time and for the helpful mini-discussion we had with an upvote.
Option 0 should be avoided since it relies on a class within clojure.lang that is not part of the public API for the package. From the javadoc for clojure.lang:
The only class considered part of the public API is IFn. All other
classes should be considered implementation details.
Option 1 uses functions from the public API and avoids iterating the entire input sequence if it is non-empty.
Option 2 iterates the entire input sequence to obtain the count, which is potentially expensive.
Option 3 does not appear to be guaranteed and can be circumvented with reflection:
(identical? '() (.newInstance (first (.getDeclaredConstructors (class '()))) (into-array [{}])))
=> false
Given these I'd prefer option 1.
Just use choice (1):
(ns tst.demo.core
  (:use tupelo.core tupelo.test))

(defn empty-list? [arg]
  (and (list? arg)
       (not (seq arg))))

(dotest
  (isnt (empty-list? (range)))
  (isnt (empty-list? [1 2 3]))
  (isnt (empty-list? (list 1 2 3)))
  (is (empty-list? (list)))
  (isnt (empty-list? []))
  (isnt (empty-list? {}))
  (isnt (empty-list? #{})))
with result:
-------------------------------
Clojure 1.10.1 Java 13
-------------------------------
Testing tst.demo.core
Ran 2 tests containing 7 assertions.
0 failures, 0 errors.
As you can see by the first test with (range), the infinite lazy seq didn't get realized by empty?.
Update
Choice 0 depends on implementation details (unlikely to change, but why bother?). Also, it is noisier to read.
Choice 2 will blow up for infinite lazy seqs.
Choice 3 is not guaranteed to work. You could have more than one list with zero elements.
Update #2
OK, you are correct re (2). We get:
(type (range)) => clojure.lang.Iterate
Notice that it is not a Lazy-Seq as both you and I expected.
So you are relying on a (non-obvious) detail to prevent getting to count, which will blow up for an infinite lazy seq. Too subtle for my taste. My motto: keep it as obvious as possible.
Re choice (3), again it relies on the implementation detail of (the current release of) Clojure. I could almost make it fail except that clojure.lang.PersistentList$EmptyList is a package-protected inner class, so I would have to really try hard (subvert Java inheritance) to make a duplicate instance of the class, which would then fail.
However, I can come close:
(defn el3? [arg] (identical? () arg))

(dotest
  (spyx (type (range)))
  (isnt (el3? (range)))
  (isnt (el3? [1 3 3]))
  (isnt (el3? (list 1 3 3)))
  (is (el3? (list)))
  (isnt (el3? []))
  (isnt (el3? {}))
  (isnt (el3? #{}))
  (is (el3? ()))
  (is (el3? '()))
  (is (el3? (list)))
  (is (el3? (spyxx (rest [1]))))
  (let [jull (LinkedList.)]    ; requires (:import (java.util LinkedList)) in the ns form
    (spyx jull)
    (spyx (type jull))
    (spyx (el3? jull))))       ; ***** contrived, but it fails *****
with result
jull => ()
(type jull) => java.util.LinkedList
(el3? jull) => false
So, I again make a plea to keep it obvious and simple.
There are two ways of constructing a software design. One way is to
make it so simple that there are obviously no deficiencies. And the
other way is to make it so complicated that there are no obvious
deficiencies.
---C.A.R. Hoare

Clojure: Validation in a Variadic function

Bearing in mind, “Programs are meant to be read by humans and only incidentally for computers to execute.”
I want to better understand how I can best express the intent of the argument(s) passed to a function.
For example, I have a very simple function:
user=> (defn add-vecs
         "Add multiple vectors of numbers together"
         [& vecs]
         (when (seq vecs)
           (apply mapv + vecs)))
Which in action will do the following:
user=> (add-vecs [1 2 3] [1 2 3] [1 2 3])
[3 6 9]
I don’t like relying on the doc-string to tell the reader of the code the requirements of the argument because comments often lie, developers don’t always read them and code changes but comments rarely do.
I could rename vecs something like "vecs-of-numbers" but that feels clumsy.
I appreciate Clojure is both dynamic and strongly typed and this question is clearly in the area of typing. So in general, and if possible some examples for this case, what would make the function argument(s) more readable?
When what you want is a type, use a type. There are several approaches to typing available. I'll use prismatic.schema in this case because it's fairly unimposing:
(require '[schema.core :as schema])

(schema/defn add-vecs [& vecs :- [[schema/Num]]]
  ... code here ...)
or you can add a pre-condition to check it at runtime:
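A minimal sketch of that approach, reusing the add-vecs shape from the question (:pre throws an AssertionError when a check fails):
(defn add-vecs
  "Add multiple vectors of numbers together."
  [& vecs]
  {:pre [(every? vector? vecs)
         (every? #(every? number? %) vecs)]}
  (when (seq vecs)
    (apply mapv + vecs)))

(add-vecs [1 2 3] [1 2 3] [1 2 3])
=> [3 6 9]
(add-vecs [1 2] :oops)
=> AssertionError Assert failed: (every? vector? vecs)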

Splice in Clojure

Is there a single function for getting "from x to y" items in a sequence?
For example, given (range 10) I want [5 6 7 8] (take from the 6th to the 9th, or take 4 starting at the 6th). Of course I can get this with a combination of a couple of functions (e.g. (take 4 (drop 5 (range 10)))), but it seems strange that there's no built-in like Python's mylist[5:9]. Thanks
Use subvec for vectors, primarily since it is O(1). For seqs you will need to use the O(n) combination of take/drop.
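For example:
(subvec (vec (range 10)) 5 9)
=> [5 6 7 8]
(take 4 (drop 5 (range 10)))
=> (5 6 7 8)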
From a philosophical point of view, the reason there's no built-in operator is that you don't need a built-in operator to make it feel "natural" like you do in Python.
(defn splice [coll start stop]
  (take (- stop start) (drop start coll)))

(splice coll 6 10)
Feels just like a language built-in, with exactly as much "new syntax" as any feature. In Python, the special [x:y] operator needs language-level support to make it feel as natural as the single-element accessor.
So rather than cluttering up the (already crowded) language core, Clojure simply leaves room for a user or library to implement this if you want it.
(range 5 9), or (vec (range 5 9)).
(Perhaps this syntax for range wasn't available mid-2012.)

Common programming mistakes for Clojure developers to avoid [closed]

What are some common mistakes made by Clojure developers, and how can we avoid them?
For example, newcomers to Clojure think that the contains? function works the same as java.util.Collection#contains. However, contains? will only work similarly when used with indexed collections like maps and sets and you're looking for a given key:
(contains? {:a 1 :b 2} :b)
;=> true
(contains? {:a 1 :b 2} 2)
;=> false
(contains? #{:a 1 :b 2} :b)
;=> true
When used with numerically indexed collections (vectors, arrays) contains? only checks that the given element is within the valid range of indexes (zero-based):
(contains? [1 2 3 4] 4)
;=> false
(contains? [1 2 3 4] 0)
;=> true
If given a list, contains? will never return true.
Literal Octals
At one point I was reading in a matrix which used leading zeros to maintain proper rows and columns. Mathematically this is correct, since leading zeros obviously don't alter the underlying value. Attempts to define a var with this matrix, however, would fail mysteriously with:
java.lang.NumberFormatException: Invalid number: 08
which totally baffled me. The reason is that Clojure treats literal integer values with leading zeros as octals, and there is no number 08 in octal.
I should also mention that Clojure supports traditional Java hexadecimal values via the 0x prefix. You can also use any base between 2 and 36 with the "base + r + value" notation, such as 2r101010 or 36r16, both of which are 42 in base ten.
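For example, at the REPL (output abbreviated):
user=> 010        ; leading zero means octal
8
user=> 0x2A       ; hexadecimal
42
user=> 2r101010   ; binary, via radix notation
42
user=> 36r16      ; base 36
42
user=> 08         ; 8 is not a valid octal digit
NumberFormatException Invalid number: 08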
Trying to return literals in an anonymous function literal
This works:
user> (defn foo [key val]
{key val})
#'user/foo
user> (foo :a 1)
{:a 1}
so I believed this would also work:
(#({%1 %2}) :a 1)
but it fails with:
java.lang.IllegalArgumentException: Wrong number of args passed to: PersistentArrayMap
because the #() reader macro gets expanded to
(fn [%1 %2] ({%1 %2}))
with the map literal wrapped in parenthesis. Since it's the first element, it's treated as a function (which a literal map actually is), but no required arguments (such as a key) are provided. In summary, the anonymous function literal does not expand to
(fn [%1 %2] {%1 %2}) ; notice the lack of parenthesis
and so you can't have any literal value ([], :a, 4, %) as the body of the anonymous function.
Two solutions have been given in the comments. Brian Carper suggests using sequence implementation constructors (array-map, hash-set, vector) like so:
(#(array-map %1 %2) :a 1)
while Dan shows that you can use the identity function to unwrap the outer parenthesis:
(#(identity {%1 %2}) :a 1)
Brian's suggestion actually brings me to my next mistake...
Thinking that hash-map or array-map determine the unchanging concrete map implementation
Consider the following:
user> (class (hash-map))
clojure.lang.PersistentArrayMap
user> (class (hash-map :a 1))
clojure.lang.PersistentHashMap
user> (class (assoc (apply array-map (range 2000)) :a :1))
clojure.lang.PersistentHashMap
While you generally won't have to worry about the concrete implementation of a Clojure map, you should know that functions which grow a map - like assoc or conj - can take a PersistentArrayMap and return a PersistentHashMap, which performs faster for larger maps.
Using a function as the recursion point rather than a loop to provide initial bindings
When I started out, I wrote a lot of functions like this:
; Project Euler #3
(defn p3
  ([] (p3 775147 600851475143 3))
  ([i n times]
   (if (and (divides? i n) (fast-prime? i times))
     i
     (recur (dec i) n times))))
When in fact loop would have been more concise and idiomatic for this particular function:
; Elapsed time: 387 msecs
(defn p3 [] {:post [(= % 6857)]}
  (loop [i 775147, n 600851475143, times 3]
    (if (and (divides? i n) (fast-prime? i times))
      i
      (recur (dec i) n times))))
Notice that I replaced the zero-argument "default constructor" body (p3 775147 600851475143 3) with a loop plus initial bindings. The recur now rebinds the loop bindings (instead of the fn parameters) and jumps back to the recursion point (loop, instead of fn).
Referencing "phantom" vars
I'm speaking about the type of var you might define using the REPL - during your exploratory programming - then unknowingly reference in your source. Everything works fine until you reload the namespace (perhaps by closing your editor) and later discover a bunch of unbound symbols referenced throughout your code. This also happens frequently when you're refactoring, moving a var from one namespace to another.
Treating the for list comprehension like an imperative for loop
Essentially you're creating a lazy list based on existing lists rather than simply performing a controlled loop. Clojure's doseq is actually more analogous to imperative foreach looping constructs.
One example of how they're different is the ability to filter which elements they iterate over using arbitrary predicates:
user> (for [n '(1 2 3 4) :when (even? n)] n)
(2 4)
user> (for [n '(4 3 2 1) :while (even? n)] n)
(4)
Another way they're different is that they can operate on infinite lazy sequences:
user> (take 5 (for [x (iterate inc 0) :when (> (* x x) 3)] (* 2 x)))
(4 6 8 10 12)
They can also handle more than one binding expression, iterating over the rightmost binding fastest (it behaves like the innermost loop) and the leftmost slowest:
user> (for [x '(1 2 3) y '(\a \b \c)] (str x y))
("1a" "1b" "1c" "2a" "2b" "2c" "3a" "3b" "3c")
There's also no break or continue to exit prematurely.
Overuse of structs
I come from an OOPish background so when I started Clojure my brain was still thinking in terms of objects. I found myself modeling everything as a struct because its grouping of "members", however loose, made me feel comfortable. In reality, structs should mostly be considered an optimization; Clojure will share the keys and some lookup information to conserve memory. You can further optimize them by defining accessors to speed up the key lookup process.
Overall you don't gain anything from using a struct over a map except for performance, so the added complexity might not be worth it.
Using unsugared BigDecimal constructors
I needed a lot of BigDecimals and was writing ugly code like this:
(let [foo (BigDecimal. "1") bar (BigDecimal. "42.42") baz (BigDecimal. "24.24")]
when in fact Clojure supports BigDecimal literals by appending M to the number:
(= (BigDecimal. "42.42") 42.42M) ; true
Using the sugared version cuts out a lot of the bloat. In the comments, twils mentioned that you can also use the bigdec and bigint functions to be more explicit, yet remain concise.
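For example:
(bigdec 42.42)
=> 42.42M
(bigint 42)
=> 42N
(= 42.42M (bigdec 42.42))
=> true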
Using the Java package naming conventions for namespaces
This isn't actually a mistake per se, but rather something that goes against the idiomatic structure and naming of a typical Clojure project. My first substantial Clojure project had namespace declarations - and corresponding folder structures - like this:
(ns com.14clouds.myapp.repository)
which bloated up my fully-qualified function references:
(com.14clouds.myapp.repository/load-by-name "foo")
To complicate things even more, I used a standard Maven directory structure:
|-- src/
| |-- main/
| | |-- java/
| | |-- clojure/
| | |-- resources/
| |-- test/
...
which is more complex than the "standard" Clojure structure of:
|-- src/
|-- test/
|-- resources/
which is the default of Leiningen projects and Clojure itself.
Maps utilize Java's equals() rather than Clojure's = for key matching
Originally reported by chouser on IRC, this usage of Java's equals() leads to some unintuitive results:
user> (= (int 1) (long 1))
true
user> ({(int 1) :found} (int 1) :not-found)
:found
user> ({(int 1) :found} (long 1) :not-found)
:not-found
Since both Integer and Long instances of 1 are printed the same by default, it can be difficult to detect why your map isn't returning any values. This is especially true when you pass your key through a function which, perhaps unbeknownst to you, returns a long.
It should be noted that using Java's equals() instead of Clojure's = is essential for maps to conform to the java.util.Map interface.
I'm using Programming Clojure by Stuart Halloway, Practical Clojure by Luke VanderHart, and the help of countless Clojure hackers on IRC and the mailing list to help along my answers.
Forgetting to force evaluation of lazy seqs
Lazy seqs aren't evaluated unless you ask them to be evaluated. You might expect this to print something, but it doesn't.
user=> (defn foo [] (map println [:foo :bar]) nil)
#'user/foo
user=> (foo)
nil
The map is never evaluated; it's silently discarded because it's lazy. You have to use one of doseq, dorun, doall etc. to force evaluation of lazy sequences for side-effects.
user=> (defn foo [] (doseq [x [:foo :bar]] (println x)) nil)
#'user/foo
user=> (foo)
:foo
:bar
nil
user=> (defn foo [] (dorun (map println [:foo :bar])) nil)
#'user/foo
user=> (foo)
:foo
:bar
nil
Using a bare map at the REPL kind of looks like it works, but it only works because the REPL forces evaluation of lazy seqs itself. This can make the bug even harder to notice, because your code works at the REPL and doesn't work from a source file or inside a function.
user=> (map println [:foo :bar])
(:foo
:bar
nil nil)
I'm a Clojure noob. More advanced users may have more interesting problems.
trying to print infinite lazy sequences.
I knew what I was doing with my lazy sequences, but for debugging purposes I inserted some print/prn/pr calls, temporarily having forgotten what it is I was printing. Funny, why's my PC all hung up?
trying to program Clojure imperatively.
There is some temptation to create a whole lot of refs or atoms and write code that constantly mucks with their state. This can be done, but it's not a good fit. It may also perform poorly and will rarely benefit from multiple cores.
trying to program Clojure 100% functionally.
A flip side to this: Some algorithms really do want a bit of mutable state. Religiously avoiding mutable state at all costs may result in slow or awkward algorithms. It takes judgement and a bit of experience to make the decision.
trying to do too much in Java.
Because it's so easy to reach out to Java, it's sometimes tempting to use Clojure as a scripting language wrapper around Java. Certainly you'll need to do exactly this when using Java library functionality, but there's little sense in (e.g.) maintaining data structures in Java, or using Java data types such as collections for which there are good equivalents in Clojure.
Lots of things already mentioned. I'll just add one more.
Clojure's if can treat a Java Boolean object as true even when its value is false: truthiness is an identity check against nil and the canonical Boolean/FALSE, so a separately constructed Boolean holding false is truthy. So if you have a Java-land function that returns a java.lang.Boolean value, make sure you do not check it directly
(if java-bool "Yes" "No")
but rather
(if (boolean java-bool) "Yes" "No").
I got burned by this with clojure.contrib.sql library that returns database boolean fields as java Boolean objects.
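A small illustration of the trap, constructing a fresh Boolean object to stand in for what a Java API might hand back (the Boolean constructor is deprecated in recent JDKs, but it still demonstrates the point):
(if (Boolean. false) "Yes" "No")
=> "Yes" ; a Boolean object that is not the canonical Boolean/FALSE is truthy
(if (boolean (Boolean. false)) "Yes" "No")
=> "No"
(if Boolean/FALSE "Yes" "No")
=> "No"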
Keeping your head in loops.
You risk running out of memory if you loop over the elements of a potentially very large, or infinite, lazy sequence while keeping a reference to the first element.
Forgetting there's no TCO.
Regular tail-calls consume stack space, and they will overflow if you're not careful. Clojure has 'recur and 'trampoline to handle many of the cases where optimized tail-calls would be used in other languages, but these techniques have to be intentionally applied.
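For example, a sketch of mutually recursive functions rewritten for trampoline; they return thunks instead of making direct tail calls, so the stack stays flat (my-even?/my-odd? are made-up names):
(declare my-odd?)
(defn my-even? [n] (if (zero? n) true #(my-odd? (dec n))))
(defn my-odd? [n] (if (zero? n) false #(my-even? (dec n))))

(trampoline my-even? 1000000)
=> true ; the same logic written with ordinary mutual recursion would overflow the stack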
Not-quite-lazy sequences.
You may build a lazy sequence with 'lazy-seq or 'lazy-cons (or by building upon higher-level lazy APIs), but if you wrap it in 'vec or pass it through some other function that realizes the sequence, then it will no longer be lazy. Both the stack and the heap can be overflowed by this.
Putting mutable things in refs.
You can technically do it, but only the object reference in the ref itself is governed by the STM - not the referred object and its fields (unless they are immutable and point to other refs). So whenever possible, prefer to put only immutable objects in refs. The same goes for atoms.
using loop ... recur to process sequences when map will do.
(defn work [data]
  (when (seq data)            ; terminate when the seq is exhausted
    (do-stuff (first data))
    (recur (rest data))))
vs.
(map do-stuff data)
The map function (in the latest branch) uses chunked sequences and many other optimizations. Also, because this function is frequently run, the Hotspot JIT usually has it optimized and ready to go without any "warm-up time".
Collection types have different behaviors for some operations:
user=> (conj '(1 2 3) 4)
(4 1 2 3) ;; new element at the front
user=> (conj [1 2 3] 4)
[1 2 3 4] ;; new element at the back
user=> (into '(3 4) (list 5 6 7))
(7 6 5 3 4)
user=> (into [3 4] (list 5 6 7))
[3 4 5 6 7]
Working with strings can be confusing (I still don't quite get them). Specifically, strings are not the same as sequences of characters, even though sequence functions work on them:
user=> (filter #(> (int %) 96) "abcdABCDefghEFGH")
(\a \b \c \d \e \f \g \h)
To get a string back out, you'd need to do:
user=> (apply str (filter #(> (int %) 96) "abcdABCDefghEFGH"))
"abcdefgh"
Too many parentheses, especially with a void Java method call inside, which results in an NPE:
public void foo() {}
((.foo obj)) ; obj standing in for an instance of the class above
results in an NPE from the outer parentheses, because the inner form evaluates to nil (and nil cannot be invoked as a function).
public int bar() { return 5; }
((.bar obj))
results in the easier-to-debug:
java.lang.Integer cannot be cast to clojure.lang.IFn
[Thrown class java.lang.ClassCastException]