How can I make the deref of this promise throw an Exception, in the same way a future can cause an Exception if the body of the future throws one?
(let [p (promise)]
  (something-that-could-deliver-an-error p)
  @p) ; should explode with the delivered Exception
I'm currently considering doing this by delivering a fn to the promise, but I suspect there's a more idiomatic way to do this.
Background: I'm running multiple futures concurrently. If any of them error, I want to cancel all the other futures immediately and output the error. Maybe there's a better way?
Clojure's promises don't have separate concepts of "success" and "failure"; they only distinguish whether a promise has been realized. In other words, there's no way to get a core Clojure promise to throw a custom exception when you dereference it.
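That said, if you want to stay inside clojure.core, a minimal sketch along the lines the question suggests is to deliver the Throwable itself and rethrow at the read site (this is my own sketch, not part of the answer; deref-or-throw is a made-up helper name, not a core function):

;; Illustrative helper, not a core function: rethrow if a Throwable was delivered.
(defn deref-or-throw [p]
  (let [v @p]
    (if (instance? Throwable v)
      (throw v)
      v)))

(let [p (promise)]
  (deliver p (ex-info "Boom" {:failed true}))
  (deref-or-throw p)) ; throws the delivered ExceptionInfo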
If you're OK with going outside the standard library, you might try using deferred objects from Manifold instead:
http://aleph.io/manifold/deferreds.html
A deferred object can be fulfilled by either the success! or error! functions. The former behaves like deliver. The latter lets you fulfill the promise with an exception, which is thrown if the deferred is deref'd, but can also be "caught" by some of Manifold's other control-flow functions.
user=> (require '[manifold.deferred :as d])
nil
user=> (def my-deferred-value (d/deferred))
#'user/my-deferred-value
user=> (d/error! my-deferred-value (ex-info "Error!" {:failed true}))
true
user=> @my-deferred-value
ExceptionInfo Error! clojure.core/ex-info (core.clj:4617)
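For completeness, here is a rough sketch of my own (not from the answer above) of how such an error can be "caught" by Manifold's other control-flow functions, using d/chain and d/catch, instead of letting deref throw:

(require '[manifold.deferred :as d])

(def d-val (d/deferred))

;; Attach a success step and an error handler before the deferred is realized.
(def handled
  (-> d-val
      (d/chain (fn [v] (str "got " v)))
      (d/catch Exception (fn [e] (str "caught: " (.getMessage e))))))

(d/error! d-val (ex-info "Error!" {:failed true}))

@handled ; => "caught: Error!"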
In Clojure, promise objects implement clojure.lang.IFn, and invoking a promise with a single argument fulfills it. That's how deliver is implemented [source]:
(defn deliver
  "Delivers the supplied value to the promise, releasing any pending
  derefs. A subsequent call to deliver on a promise will have no effect."
  {:added "1.1"
   :static true}
  [promise val] (promise val))
If (deliver x y) is just a level of indirection over (x y), why use deliver at all?
I'm assuming this is supposed to help disambiguate promises from functions in some way—but the same argument could apply to using some promise-specific function to read from a promise rather than using the general deref function for that.
It's syntactic sugar to make code like this look nice:
(-> url
    download
    extract-value
    (deliver consumer))
The deliver function used to throw an exception if you were the second caller to deliver on the same promise. That behavior was changed in 2011, and later calls are now simply ignored.
Promises have always behaved the same whether called as a function or through deliver; deliver only filled the role of making something a little different look a little different. These days I would still use it to communicate with my future self.
deref is a lot less general than the function-call mechanism. When you see something deref'd, you know it is fetching some value from somewhere. When you see (f x) you really have no idea what is happening if you don't already know what f is: it could do anything at all. deliver gives you more context.
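As a quick illustration of the equivalence being discussed (my own sketch, not part of the answers above):

;; Both forms realize the promise; deliver just reads better at the call site.
(def p1 (promise))
(deliver p1 42)
@p1 ; => 42

(def p2 (promise))
(p2 42) ; invoking the promise directly also delivers the value
@p2 ; => 42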
I have the following piece of code
(def number (ref 0))
(dosync (future (alter number inc))) ; A
(future (dosync (alter number inc))) ; B
The second one (B) succeeds, but the first one (A) fails with "no transaction is running". But it is wrapped inside a dosync, right?
Does Clojure track open transactions based on the thread in which they were created?
You are correct. The whole purpose of dosync is to begin a transaction in the current thread. The future runs its code in a new thread, so the alter in case A is not inside of a dosync for its thread.
For case B, the alter and dosync are both in the same (new) thread, so there is no problem.
There are multiple reasons this doesn't work. As Alan Thompson writes, transactions are homed to a single thread, and so when you create a new thread you lose your transaction.
Another problem is the dynamic scope of dosync. The same problem would arise if you wrote
((dosync #(alter number inc)))
Here we create a function inside of the dosync scope, and let that function be the result of the dosync. Then we call the function from outside of the dosync block, but of course the transaction is no longer running.
That's very similar to what you're doing with future: future creates a function and then executes it on a new thread, returning a handle you can use to inspect the progress of that thread. Even if cross-thread transactions were allowed, you would have a race condition here: does the dosync block close its transaction before or after the alter call in the future is executed?
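If the goal is simply to combine futures with refs, one safe pattern (my own sketch, not from the answers above) is to do the slow work in the future and run the transaction in the thread that consumes the result:

(def number (ref 0))

;; Block on the future outside the transaction, then alter the ref.
(let [result @(future (+ 1 2))] ; stand-in for an expensive computation
  (dosync (alter number + result)))

@number ; => 3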
I'm using
(def f
  (future
    (while (not (Thread/interrupted))
      (function-to-run))))
(Thread/sleep 100)
(future-cancel f)
to cancel my code after a specified amount of time (100ms).
The problem is that I need to cancel the already running function 'function-to-run' as well; it is important that it really stops executing that function after 100 ms.
Can I somehow propagate the interrupted signal to the function?
The function is not third-party, I wrote it myself.
The basic thing to note here is: you cannot safely kill a thread without its own cooperation. Since you are the owner of the function you wish to be able to kill prematurely, it makes sense to allow the function to cooperate and die gracefully and safely.
(defn function-to-run
  []
  (while work-not-done
    (if-not (Thread/interrupted)
      ; ... do your work
      (throw (InterruptedException. "Function interrupted...")))))
(def t (Thread. (fn []
                  (try
                    (while true
                      (function-to-run))
                    (catch InterruptedException e
                      (println (.getMessage e)))))))
To begin the thread:
(.start t)
To interrupt it:
(.interrupt t)
Your approach was not sufficient for your use case because the while condition was checked only after control flow returned from function-to-run, but you wanted to stop function-to-run during its execution. The approach here is only different in that the condition is checked more frequently, namely, every time through the loop in function-to-run. Note that instead of throwing an exception from function-to-run, you could also return some value indicating an error, and as long as your loop in the main thread checks for this value, you don't have to involve exceptions at all.
If your function-to-run doesn't feature a loop where you can perform the interrupted check, then it likely is performing some blocking I/O. You may not be able to interrupt this, though many APIs will allow you to specify a timeout on the operation. In the worst case, you can still perform intermittent checks for interrupted in the function around your calls. But the bottom line still applies: you cannot safely forcibly stop execution of code running in the function; it should yield control cooperatively.
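For completeness, the same cooperative check also works with the future/future-cancel approach from the question, because future-cancel interrupts the thread running the future. A rough sketch of my own (do-one-step is a made-up placeholder for a single unit of work):

(defn do-one-step []
  ;; placeholder for one unit of real work
  (reduce + (range 1000)))

(defn function-to-run []
  ;; check the interrupt flag between units of work;
  ;; .isInterrupted does not clear the flag, unlike Thread/interrupted
  (while (not (.isInterrupted (Thread/currentThread)))
    (do-one-step)))

(def f (future (function-to-run)))
(Thread/sleep 100)
(future-cancel f) ; interrupts the future's thread; the loop then exits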
Note:
My original answer here involved presenting an example in which java's Thread.stop() was used (though strongly discouraged). Based on feedback in the comments, I revised the answer to the one above.
I have the following code for automation. The function accepts a unique number and kicks off a Firefox instance. I could kick off multiple threads, each with a unique x passed to the function, so the function would be executed concurrently. Will the local atom current-page then be visible to other threads? If it is visible, then a reset! from another thread could set the atom to an unexpected value.
(defn consumer-scanning-pages [x]
  (while true
    (let [driver (get-firefox x)
          current-page (atom 0)]
      ....
      (reset! current-page ..))))
The atom will be visible to those threads you explicitly pass it to, to any further threads that those threads pass it to, and so on. It is no different in this respect from any other value that you may or may not pass around.
"Passing the atom to a thread" can be as simple as referring to an in-scope local it is stored in within the body of a Clojure thread-launching form:
(let [a (atom :foo)]
  ;; dereferencing the future object representing an off-thread computation
  @(future
     ;; dereferencing the atom on another thread
     @a))
;;= :foo
Merely creating an atom doesn't make it available to code that it is not explicitly made available to, and this is also true of code that happens to run on the thread that originally created the atom. (Consider a function that creates an atom, but never stores it in any externally visible data structures and ultimately returns an unrelated value. The atom it creates will become eligible for GC when the function returns at the latest; it will not be visible to any other code, on the same or any other thread.) Again, this is also the case with all other values.
It will not. You are creating a new atom each time that you call the function.
If you want a shared atom, just pass the atom as a param to consumer-scanning-pages
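A sketch of that suggestion (my own; the parameter name shared-page and the sleep are illustrative, and the real page-scanning work from the question goes where the comment is):

(defn consumer-scanning-pages [x shared-page]
  (while true
    ;; (get-firefox x) and the page-scanning work from the question go here
    (reset! shared-page x)
    (Thread/sleep 1000)))

(def current-page (atom 0))

;; Each future gets its own x but shares the same atom.
(doseq [x (range 3)]
  (future (consumer-scanning-pages x current-page)))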
I'm having trouble reproducing a bug where I get a NullPointerException when I call first on a PersistentArrayMap. If I copy and paste the map and call first, it works, but when the map is in a ref it doesn't. Is this some weird behaviour related to laziness (not my own)?
Update: I cannot produce an example that fails every time, so I am forcing evaluation of everything now and it seems to work.
My general game plan when I suspect that I may have been bitten by the lazy bug is to put doseq around everything until the point of failure starts changing.
PS: pasting a stack trace would help give better answers.
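To see how laziness can move the point of failure around, here is a minimal example of my own:

;; Nothing is divided here; map is lazy, so defining s does not throw.
(def s (map #(/ 10 %) [1 2 0]))

;; The ArithmeticException only appears where the sequence is realized,
;; e.g. when it is printed or forced with doall.
(doall s) ; => ArithmeticException: Divide by zero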
Calling first can never cause an NPE, so the problem is elsewhere. My guess is that you tried to deref a ref which was nil:
user=> (first @nil)
java.lang.NullPointerException (NO_SOURCE_FILE:0)
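For contrast, when the var actually holds a ref to a map, first behaves as expected (my own REPL example):
user=> (def m (ref {:a 1 :b 2}))
#'user/m
user=> (first @m)
[:a 1]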