I'm learning Clojure, and I find it difficult to understand where a specific compiler error happens:
java.lang.ClassCastException: java.lang.Long cannot be cast to
clojure.lang.IPersistentCollection, compiling:(fwpd/core.clj:100:1)
Line 100 is just:
(fib-seq3 5)
So it tells me nothing, because in fact the error is in the fib-seq3 function (the parameters to a "conj" call are inverted, see below).
Is this normal? Is there really no way to know where an error is?
Just for reference, here's the code (again, I know where the error is; I just don't understand how I was supposed to find it, given that the message doesn't tell me at which line it happens):
(defn fib-seq3
  ([to]
   (fib-seq3 [] 0 1 0 to))
  ([coll a b k to]
   (if (= k to)
     coll
     ;; the bug: the arguments to conj are swapped; it should be (conj coll b)
     (fib-seq3 (conj b coll) b (+ a b) (inc k) to))))

(fib-seq3 5)
Stack traces in Clojure suck. In fact, error messages were rated by the Clojure community as both the top-priority area for improvement and the most frustrating part of Clojure.
This problem is not new. There have been no considerable improvements to Clojure stack traces for quite a long time. But the Clojure team is fully aware of the situation, so we can hope for improvements.
To better understand Clojure stack traces, try reading Clojure Stack Traces for the Uninitiated. Though the article is somewhat old, it's still relevant.
In short, you should look for the so-called "cause trace", which is the second part of any Clojure stack trace and starts with the phrase "Caused by".
The problem is that I was using a REPL (Vim + Fireplace) to execute the code. Executing it with lein repl fixed the problem.
@Leonid, @amalloy:
(.printStackTrace *e)
gives the proper stack trace in the REPL (even from inside Fireplace, using "cqp", which opens the REPL prompt), so thank you very much for the comment (I didn't know that!)
My application evaluates quoted expressions received from remote clients. Over time, my system's memory usage increases until it eventually crashes. What I've found out is that:
When I execute the following code from Clojure's nrepl in a docker container:
(dotimes [x 1000000] ; or some arbitrarily large number
  (eval '(+ 1 1)))
the container's memory usage keeps rising until it hits the limit, at which point the system will crash.
How do I get around this problem?
There's another thread mentioning this behavior. One of the answers suggests using tools.reader, but that still requires eval if I need code execution, leading to the same problem.
There's no easy way to get around this, as each call to eval creates a new class, even if the form you're evaluating is exactly the same. By itself, the JVM will not get rid of the new classes.
There are two ways to circumvent this:
1) Stop using eval altogether (e.g. by creating your own DSL or your own version of eval with limited functionality), or at least use it less frequently, e.g. by batching the forms you need to evaluate
2) Unload the already-loaded classes. I haven't done it myself and it probably requires a lot of work, but you can follow the answers in this topic: Unloading classes in java?
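As a sketch of the batching idea (eval-batch is a hypothetical helper, not part of any library): compiling a whole batch of forms into a single fn generates roughly one class per batch instead of one per form, and then you simply call that fn.

```clojure
;; Hypothetical helper: compile a batch of forms into a single fn
;; (roughly one generated class per batch instead of one per form),
;; then call it. Returns the value of the last form.
(defn eval-batch [forms]
  ((eval `(fn [] ~@forms))))

(eval-batch ['(println "side effect")
             '(+ 1 1)
             '(+ 2 2)])
;; → 4
```

This only helps, of course, if your workload naturally arrives in batches; a single form per request gains nothing from it.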
I don't know exactly how eval works internally, but based on my observations I don't think your conclusions are correct, and Eugene's remark "By itself, JVM will not get rid of new classes" also seems to be false.
I ran your sample with -Xmx256m and it went fine.
(time (dotimes [x 1000000] ; or some arbitrarily large number
        (eval '(+ 1 1))))
;; "Elapsed time: 529079.032449 msecs"
I checked the question you linked, and they say it's the Metaspace that is growing, not the heap. So I observed the Metaspace, and its usage does grow, but it also shrinks.
You can find my experiment here together with some graphs from JMC console: https://github.com/jumarko/clojure-experiments/commit/824f3a69019840940eaa88c3427515bcba33c4d2
Note: to run this experiment, I used JDK 17.0.2 on macOS.
For the past few weeks I've been working with core.async in Clojure and ClojureScript, and I've been wondering whether it is a good idea to use symbols bound outside a go block inside one, since there is a pool of threads and any of them could end up using the bound symbols. It works when evaluated, but macroexpansion does not work; see the following snippets.
I guess it should work without problems. The x is immutable and won't be changed by concurrent threads. Using an atom as x for mutable data should also work, because it's an atom. A mutable object reference, on the other hand, would of course not work, or could cause problems!
(let [x 5]
  (clojure.core.async/go
    (println x)))
;; => 5
;; nil

(clojure.walk/macroexpand-all
  '(let [x 5]
     (clojure.core.async/go
       (println x))))
;; => Syntax error macroexpanding clojure.core.async/go at (your_project.cljc:93:3).
;; Could not resolve var: x
It seems to work, but is it a bad idea, and why?
Can anyone explain why the macroexpansion does not work?
macroexpand-all is not a high-fidelity expander. It uses a basic process that works for simple macros, but it doesn't do everything the actual compiler does. Notably, it does not maintain the &env map that bindings should introduce. I assume core.async needs to look at &env to determine whether a binding is a local or a var.
So, you shouldn't expect macroexpand-all to work here, but there's nothing wrong with writing code of this sort.
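The &env point can be demonstrated without core.async at all, using a toy macro (locals-here is a made-up name for illustration) that expands to the names of the locals the compiler can see. Unlike go it doesn't throw, but it shows that macroexpand-all hands macros an empty &env:

```clojure
(require '[clojure.walk :as walk])

;; A toy macro that, like go, consults &env: it expands to a vector of the
;; names of the local bindings visible at the expansion site.
(defmacro locals-here []
  (mapv name (keys &env)))

;; Compiled normally, the compiler supplies a populated &env:
(let [x 5] (locals-here))
;; → ["x"]

;; macroexpand-all expands with an empty &env, so the local is invisible:
(walk/macroexpand-all '(let [x 5] (locals-here)))
;; → (let* [x 5] [])
```

go performs the same kind of &env inspection but treats an unresolvable symbol as an error, which is why its expansion fails outright instead of silently degrading.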
I've been developing in Java and Perl for a long time, but I wanted to learn something new, so I've begun looking into Clojure. One of the first things I tried was a solution for the Towers of Hanoi puzzle, but I've been getting strange behavior from my pretty-print function. Basically, my for loop is never entered when I run the program with 'lein run', but it seems to work fine when I run it from the REPL. Here's a stripped-down example:
(ns test-app.core
  (:gen-class))

(defn for-print
  "Print the same thing 3 times"
  [p-string]
  (println (str "Checkpoint: " p-string))
  (for [x [1 2 3]]
    (printf "FOR: %s\n" p-string)))

(defn -main
  "I don't do a whole lot ... yet."
  [& args]
  (for-print "Haldo wurld!"))
When I run this with 'lein run' I only see the output from the "Checkpoint" println. If I remove that line, I get no output at all. But if I run 'lein repl' and then type (-main), it prints the string 3 times, as expected:
test-app.core=> (-main)
Checkpoint: Haldo wurld!
(FOR: Haldo wurld!
FOR: Haldo wurld!
FOR: Haldo wurld!
nil nil nil)
test-app.core=>
What's going on here? I have a feeling that I'm approaching this the wrong way, trying to use my past Perl/Java mentality to write Clojure. What would be an idiomatic way to run the same task a set number of times?
The for loop returns a lazy sequence that is evaluated only as needed.
When you run the program inside the REPL, the for result is realized in order to display it on screen.
But when you run with lein run, the result is never used, so the collection is never realized.
You have a couple of alternatives:
1) Use doall around the for loop to force realization of the lazy sequence.
Ex:
(defn for-print
  "Print the same thing 3 times"
  [p-string]
  (println (str "Checkpoint: " p-string))
  (doall
    (for [x [1 2 3]]
      (printf "FOR: %s\n" p-string))))
2) Since you're only printing, which is a side effect, and not really creating a collection, you can use doseq.
Ex:
(defn for-print
  "Print the same thing 3 times"
  [p-string]
  (println (str "Checkpoint: " p-string))
  (doseq [x [1 2 3]]
    (printf "FOR: %s\n" p-string)))
Clojure's for is not an imperative loop (you should avoid thinking in terms of loops in Clojure at all); it's a list comprehension, which returns a lazy sequence. It's made for creating a sequence, not for printing. You can realize it and make it work, but that's a bad way to go.
As Guillermo said, you're looking for the doseq macro, which is designed for side effects. It's probably the most idiomatic Clojure in your particular case.
From my point of view, the Clojure construct most similar to an imperative loop is tail recursion with loop/recur. Still, it's a rather low-level construct in Clojure and certainly should not be used in an imperative-loop manner. Have a closer look at functional programming principles, as well as the Clojure core functions. Trying to transfer Java/Perl thinking to Clojure may harm you.
The other answers are correct and provide details. I want to add some higher-level clarification that might be helpful. In most languages, "for" means "for such and such conditions, perform such and such actions (which could be of an arbitrary type, including side effects)." This kind of thing can be done in Clojure, and I have seen experienced Clojure programmers do it when it's useful and convenient. However, using loops with side effects usually works against the strengths of the language. So in Clojure the word "for" has a different meaning than in most languages: generating (lazy) sequences. It means something like "for these inputs, when/while/etc. they meet such and such conditions, bind them temporarily to these variables, and generate a (lazy) sequence by processing each set of values in such and such a way."
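To directly answer the "run the same task a set number of times" part of the question: when you don't even need the bound values, dotimes is the most direct idiom. A sketch, rewriting the function from the question:

```clojure
;; dotimes is built for running a side-effecting body a fixed number of times;
;; it is eager, so it behaves the same under lein run and in the REPL
(defn for-print
  "Print the same thing 3 times"
  [p-string]
  (println (str "Checkpoint: " p-string))
  (dotimes [_ 3]
    (printf "FOR: %s\n" p-string)))
```

The underscore is the conventional name for a loop variable you never use.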
I'm trying to create a Clojure seq from some iterative Java library code that I inherited. Basically, what the Java code does is read records from a file using a parser, send those records to a processor, and return an ArrayList of results. In Java this is done by calling parser.readData(), then parser.getRecord() to get a record, then passing that record into processor.processRecord(). Each call to parser.readData() returns a single record, or null if there are no more records. A pretty common pattern in Java.
So I created this next-record function in Clojure that will get the next record from a parser.
(defn next-record
  "Get the next record from the parser and process it."
  [parser processor]
  (let [datamap (.readData parser)
        row (.getRecord parser datamap)]
    (if (nil? row)
      nil
      (.processRecord processor row 100))))
The idea then is to call this function and accumulate the records into a Clojure seq (preferably a lazy seq). So here is my first attempt which works great as long as there aren't too many records:
(defn datamap-seq
  "Returns a lazy seq of the records using the given parser and processor"
  [parser processor]
  (lazy-seq
    (when-let [records (next-record parser processor)]
      (cons records (datamap-seq parser processor)))))
I can create a parser and processor, and do something like (take 5 (datamap-seq parser processor)) which gives me a lazy seq. And as expected getting the (first) of that seq only realizes one element, doing count realizes all of them, etc. Just the behavior I would expect from a lazy seq.
Of course, when there are a lot of records I end up with a StackOverflowException. So my next attempt was to use loop/recur to do the same thing.
(defn datamap-seq
  "Returns a lazy seq of the records using the given parser and processor"
  [parser processor]
  (lazy-seq
    (loop [records (seq '())]
      (if-let [record (next-record parser processor)]
        (recur (cons record records))
        records))))
Now, using this the same way and defining it with (def results (datamap-seq parser processor)) gives me a lazy seq and doesn't realize any elements. However, as soon as I do anything else, like (first results), it forces the realization of the entire seq.
Can anyone help me understand where I'm going wrong in the second function, using loop/recur, that causes it to realize the entire thing?
UPDATE:
I've looked a little closer at the stack trace, and the stack overflow exception is being thrown from one of the Java classes. BUT it only happens when I have the datamap-seq function like this (the one I posted above actually does work):
(defn datamap-seq
  "Returns a lazy seq of the records using the given parser and processor"
  [parser processor]
  (lazy-seq
    (when-let [records (next-record parser processor)]
      (cons records (remove empty? (datamap-seq parser processor))))))
I don't really understand why that remove causes problems, but when I take it out of this function it all works right (I'm doing the removal of empty lists somewhere else now).
loop/recur loops within the loop expression until the recursion runs out; adding a lazy-seq around it won't prevent that.
Your first attempt with lazy-seq / cons should already work as you want, without stack overflows. I can't spot right now what the problem with it is, though it might be in the Java part of the code.
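As a side note, here's a sketch of another way to write this, assuming (as in the question) that next-record returns nil once the input is exhausted: repeatedly already builds a lazy seq of successive calls, and take-while cuts it off at the first nil, so no explicit lazy-seq is needed. The atom-backed next-record below is just a stand-in for the real parser/processor version.

```clojure
;; Stand-in for the next-record from the question: pulls items off a
;; vector instead of a real parser, returning nil when exhausted.
(def ^:private records (atom [:a :b :c]))

(defn next-record [parser processor]
  (let [[h & t] @records]
    (reset! records t)
    h))

;; The actual idea: repeatedly + take-while give a lazy seq of records
;; that stops at the first nil, with no hand-rolled recursion.
(defn datamap-seq
  "Returns a lazy seq of processed records from the given parser and processor."
  [parser processor]
  (take-while some? (repeatedly #(next-record parser processor))))

(datamap-seq nil nil)
;; → (:a :b :c)
```

Because both repeatedly and take-while are lazy, records are still only pulled from the parser as the seq is consumed.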
I'll post here an addition to Joost's answer. This code:
(defn integers [start]
  (lazy-seq
    (cons start
          (integers (inc start)))))
will not throw a StackOverflowException if I do something like this:
(take 5 (drop 1000000 (integers 0)))
EDIT:
Of course, a better way to do it would be (iterate inc 0). :)
EDIT2:
I'll try to explain a little how lazy-seq works. lazy-seq is a macro that returns a seq-like object. Combined with cons, which doesn't realize its second argument until it is requested, you get laziness.
Now take a look at how the LazySeq class is implemented. LazySeq.sval triggers computation of the next value, which returns another instance of a "frozen" lazy sequence. The method LazySeq.seq shows the mechanics behind the concept even better. Notice that to fully realize the sequence it uses a while loop. This in itself means that stack use is limited to short function calls, each returning another instance of LazySeq.
I hope this makes sense. I described what I could deduce from the source code; please let me know if I made any mistakes.
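The deferred evaluation described above is easy to see at the REPL by putting a side effect inside the lazy-seq body:

```clojure
;; the body of lazy-seq does not run until the sequence is consumed
(def s (lazy-seq (do (println "realizing...") (cons 1 nil))))
;; nothing printed yet

(first s)
;; prints "realizing..." and returns 1

(first s)
;; returns 1 again without printing: the realized value is cached
```

The caching on second access is the "frozen then thawed once" behavior implemented by LazySeq.sval.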
Here's my implementation of the Sieve of Eratosthenes in Clojure (based on the SICP lesson on streams):
(defn nats-from [n]
  (iterate inc n))

(defn divide? [p q]
  (zero? (rem q p)))

(defn sieve [stream]
  (lazy-seq
    (cons (first stream)
          (sieve (remove #(divide? (first stream) %)
                         (rest stream))))))

(def primes (sieve (nats-from 2)))
Now, it's all OK when I take the first 100 primes:
(take 100 primes)
But if I try to take the first 1000 primes, the program breaks because of a stack overflow.
I'm wondering whether it's possible to change the sieve function to be tail-recursive while still preserving the "streaminess" of the algorithm.
Any help?
Firstly, this is not the Sieve of Eratosthenes; see my comment for details.
Secondly, apologies for the close vote, as your question is not an actual duplicate of the one I pointed to. My bad.
Explanation of what is happening
The difference lies, of course, in the fact that you are trying to build an incremental sieve, where the range over which the remove call works is infinite, and thus it's impossible to just wrap a doall around it. The solution is to implement one of the "real" incremental SoEs from the paper I seem to link to pretty frequently these days: Melissa E. O'Neill's The Genuine Sieve of Eratosthenes.
A particularly beautiful Clojure sieve implementation of this sort has been written by Christophe Grand and is available here for the admiration of all who might be interested. Highly recommended reading.
As for the source of the issue, the questions I originally thought yours was a duplicate of contain explanations which should be useful to you: see here and here. Once again, sorry for the rash vote to close.
Why tail recursion won't help
Since the question specifically mentions making the sieving function tail-recursive as a possible solution, I thought I would address that here: functions which transform lazy sequences should not, in general, be tail recursive.
This is quite an important point to keep in mind, and one which trips up many an inexperienced Clojure (or Haskell) programmer. The reason is that a tail-recursive function of necessity only returns its value once it is "ready": at the very end of the computation. (An iterative process can, at the end of any particular iteration, either return a value or continue on to the next iteration.) In contrast, a function which generates a lazy sequence should immediately return a lazy sequence object which encapsulates bits of code that can be asked to produce the head or tail of the sequence whenever that's desired.
Thus the answer to the problem of stacking lazy transformations is not to make anything tail recursive, but to merge the transformations. In this particular case, the best performance can be obtained by using a custom scheme to fuse the filtering operations, based on priority queues or maps (see the aforementioned article for details).
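For a flavor of what the map-based incremental approach looks like (this is my own minimal sketch of the O'Neill idea, not Christophe Grand's code): keep a map from each upcoming composite number to the primes that produced it, and slide those primes forward as you walk the naturals.

```clojure
;; Minimal incremental sieve sketch: `composites` maps each upcoming composite
;; number to the vector of primes that will "cross it off".
(defn primes-seq []
  (letfn [(step [composites n]
            (if-let [factors (composites n)]
              ;; n is composite: slide each factor forward to its next multiple
              (recur (reduce (fn [m p] (update m (+ n p) (fnil conj []) p))
                             (dissoc composites n)
                             factors)
                     (inc n))
              ;; n is prime: emit it lazily and mark n*n as its first composite
              (lazy-seq
                (cons n (step (assoc composites (* n n) [n]) (inc n))))))]
    (step {} 2)))

(take 10 (primes-seq))
;; → (2 3 5 7 11 13 17 19 23 29)
```

Unlike the trial-division version in the question, this never stacks one remove per prime, so taking 1000 primes doesn't blow the stack; Grand's version and the paper refine this further with priority queues and wheel optimizations.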