How does the go macro transform the code?
(let [c1 (chan)
      c2 (chan)]
  (go (while true
        (let [[v ch] (alts! [c1 c2])]
          (println "Read" v "from" ch))))
  (go (>! c1 "hi"))
  (go (>! c2 "there")))
Under the hood it is compiled into a state machine. You can read more in The State Machines of core.async.
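If you want to see the transformation for yourself, you can macroexpand a go form at a Clojure REPL. This is only a sketch; the exact output is an implementation detail and differs between core.async versions:

(require '[clojure.core.async :refer [go chan <!]]
         '[clojure.pprint :refer [pprint]])

;; The expansion shows the plumbing that drives the state machine: the body
;; is split into numbered blocks so that <! can park and later resume.
(pprint (macroexpand-1 '(go (println (<! (chan))))))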
Related
When I evaluate the following core.async ClojureScript code I get an error: "Uncaught Error: <! used not in (go ...) block"
(let [chans [(chan)]]
  (go
    (doall (for [c chans]
             (let [x (<! c)]
               x)))))
What am I doing wrong here? It definitely looks like the <! is in the go block.
Because go blocks can't cross function boundaries (the body of a for is compiled into a separate function, so the <! ends up outside the go machinery), I tend to fall back on loop/recur for a lot of these cases. The (go (loop ...)) pattern is so common that it has a shorthand form in core.async, go-loop, which is useful in cases like this:
user> (require '[clojure.core.async :as async])
user> (async/<!! (let [chans [(async/chan) (async/chan) (async/chan)]]
                   (doseq [c chans]
                     (async/go (async/>! c 42)))
                   (async/go-loop [[f & r] chans result []]
                     (if f
                       (recur r (conj result (async/<! f)))
                       result))))
[42 42 42]
Why don't you use alts! from core.async?
This function lets you listen on multiple channels and know which channel each value was read from.
For example:
(let [chans [(chan)]]
  (go
    (let [[data ch] (alts! chans)]
      data)))
You can also check which channel the value came from:
...
(let [slow-chan (chan)
      fast-chan (chan)
      [data ch] (alts! [slow-chan fast-chan])]
  (when (= ch slow-chan)
    ...))
From the Docs:
Completes at most one of several channel operations. Must be called
inside a (go ...) block. ports is a vector of channel endpoints,
which can be either a channel to take from or a vector of
[channel-to-put-to val-to-put], in any combination. Takes will be
made as if by <!, and puts will be made as if by >!. Unless
the :priority option is true, if more than one port operation is
ready a non-deterministic choice will be made. If no operation is
ready and a :default value is supplied, [default-val :default] will
be returned, otherwise alts! will park until the first operation to
become ready completes. Returns [val port] of the completed
operation, where val is the value taken for takes, and a
boolean (true unless already closed, as per put!) for puts.
Documentation ref
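For instance, here is a small sketch (channel names are made up) that offers both a take and a put to alts!, with a :default fallback:

(let [in  (chan)
      out (chan 1)]
  (go
    ;; offer a take from in and a put of :some-val onto out;
    ;; if neither is ready right now, fall through to the default
    (let [[val port] (alts! [in [out :some-val]] :default :idle)]
      (cond
        (= port :default) (println "nothing ready, got" val)      ; val is :idle
        (= port out)      (println "put completed, returned" val) ; a boolean, per the docs
        :else             (println "took" val "from in")))))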
(ns example.asyncq
  (:require
    [cljs.core.async :as async]
    goog.async.AnimationDelay)
  (:require-macros
    [cljs.core.async.macros :refer [go go-loop]]))
(defn loop-fx-on-atm-when-pred?-is-true [atm fx pred?]
  (let [c (async/chan)
        f (goog.async.AnimationDelay. #(async/put! c :loop))
        a (atom false)]
    (add-watch
      atm
      :loop-fx-on-atm-when-pred?-is-true
      (fn [_ _ _ _]
        (when-not @a
          (async/put! c :initial-loop))))
    (go-loop []
      (async/<! c)
      (reset! a true)
      (swap! atm #(if (pred? %) (do (.start f) (fx %)) %))
      (reset! a false)
      (recur))))
(def the-final-countdown (atom 4))

(loop-fx-on-atm-when-pred?-is-true
  the-final-countdown
  #(do (js/console.log %) (dec %))
  pos?)

(swap! the-final-countdown inc)
;; prints "5" "4" "3" "2" "1", each on a separate frame (heartbeat)
The purpose of loop-fx-on-... is to register fx to be called on atm (per heartbeat) whenever (pred? @atm) is true.
I would like the following properties:
1. This code should only execute when pred? might return true. Otherwise, the state machine sits idle.
2. fx should not be registered more than once per frame. Modifications of the atom after the AnimationDelay has been started but before the next actual frame are effectively noops. That is, the atom is modified, but it does not cause f to be called again.
3. From the point of view of the person using the atom, all they need to do is modify the atom. They do not need to concern themselves with registering or unregistering the atom for AnimationFrame processing; it just works. The atom will continue to be processed every frame until pred? returns false.
My biggest gripe with the above code is the (reset! a true) and (reset! a false) around the swap!. That is really ugly, and in a non-single-threaded environment would probably be a bug. Is there any way to improve this code so that I don't end up using a?
My second concern is that property 2 is not being met. Currently, if you modify the same atom twice within a frame, it will actually result in 2 calls to f, i.e. 2 callbacks on the next frame. I could use another atom to record whether I have actually registered for this frame or not, but this is already getting messy. Anything better?
Something like this might work for you:
(defn my-fn [a f pred]
  (let [pending? (atom false)
        ;; wrap the callback: clear the pending flag, then call the
        ;; original f (still in scope here) with the current value
        f (fn []
            (reset! pending? false)
            (f @a))]
    (add-watch a (gensym)
      (fn [_k _a _old-val new-val]
        ;; schedule at most one callback per animation frame
        (when (and (not @pending?) (pred new-val))
          (reset! pending? true)
          (if (exists? js/requestAnimationFrame)
            (js/requestAnimationFrame f)
            (js/setTimeout f 16)))))))
What's the best way in Clojure to implement something like an actor or agent (an asynchronously updated, uncoordinated reference) that does the following?
gets sent messages/data
executes some function on that data to obtain new state; something like (fn [state new-msgs] ...)
continues to receive messages/data during that update
once done with that update, runs the same update function against all messages that have been sent in the interim
An agent doesn't seem quite right here. One must simultaneously send function and data to agents, which doesn't leave room for a function which operates on all data that has come in during the last update. The goal implicitly requires a decoupling of function and data.
The actor model seems generally better suited in that there is a decoupling of function and data. However, all actor frameworks I'm aware of seem to assume each message sent will be processed separately. It's not clear how one would turn this on its head without adding extra machinery. I know Pulsar's actors accept a :lifecycle-handle function which can be used to make actors do "special tricks", but there isn't a lot of documentation around this, so it's unclear whether the functionality would be helpful.
I do have a solution to this problem using agents, core.async channels, and watch functions, but it's a bit messy, and I'm hoping there is a better solution. I'll post it as a solution in case others find it helpful, but I'd like to see what others come up with.
Here's the solution I came up with using agents, core.async channels, and watch functions. Again, it's a bit messy, but it does what I need it to for now. Here it is, in broad strokes:
(require '[clojure.core.async :as async :refer [>!! <!! >! <! chan go]])
; We'll call this thing a queued-agent
(defprotocol IQueuedAgent
  (enqueue [this message])
  (ping [this]))

(defrecord QueuedAgent [agent queue]
  IQueuedAgent
  (enqueue [_ message]
    (go (>! queue message)))
  (ping [_]
    (send agent identity)))
; Need a function for draining a core async channel of all messages
(defn drain! [c]
  (let [cc (chan 1)]
    (go (>! cc ::queue-empty))
    (letfn
      ; This fn does all the hard work, but closes over cc to avoid reconstruction
      [(drainer! [c]
         (let [[v _] (<!! (go (async/alts! [c cc] :priority true)))]
           (if (= v ::queue-empty)
             (lazy-seq [])
             (lazy-seq (cons v (drainer! c))))))]
      (drainer! c))))
; Constructor function
(defn queued-agent [& {:keys [buffer update-fn init-fn error-handler-builder] :or {buffer 100}}]
  (let [q (chan buffer)
        a (agent (if init-fn (init-fn) {}))
        error-handler-fn (error-handler-builder q a)]
    ; Set up the queue, and a watcher which runs the update function when there is new data
    (add-watch
      a
      :update-conv
      (fn [k r o n]
        (let [queued (drain! q)]
          (when-not (empty? queued)
            (send a update-fn queued error-handler-fn)))))
    (QueuedAgent. a q)))
; Now we can use these like this
(def a (queued-agent
         :init-fn (fn [] {:some "initial value"})
         :update-fn (fn [a queued-data error-handler-fn]
                      (println "Receiving data" queued-data)
                      ; Simulate some work/load on data
                      (Thread/sleep 2000)
                      (println "Done with work; ready to queue more up!"))
         ; This is a little warty at the moment, but closing over the queue and agent
         ; lets you requeue work on failure so you can try again.
         :error-handler-builder
         (fn [q a] (println "do something with errors"))))
(defn -main []
  (doseq [i (range 10)]
    (enqueue a (str "data" i))
    (Thread/sleep 500) ; simulate things happening
    ; This part stinks... have to manually let the queued agent know that we've queued some things up for it
    (ping a)))
As you'll notice, having to ping the queued-agent here every time new data is added is pretty warty. It definitely feels like things are being twisted out of typical usage.
Agents are the inverse of what you want here: they are a value that gets sent updating functions. This is easiest with a queue and a Thread. For convenience I am using future to construct the thread.
user> (def q (java.util.concurrent.LinkedBlockingDeque.))
#'user/q
user> (defn accumulate
        [summary input]
        (let [{vowels true consonents false}
              (group-by #(contains? (set "aeiouAEIOU") %) input)]
          (-> summary
              (update-in [:vowels] + (count vowels))
              (update-in [:consonents] + (count consonents)))))
#'user/accumulate
user> (def worker
        (future (loop [summary {:vowels 0 :consonents 0} in-string (.take q)]
                  (if (not in-string)
                    summary
                    (recur (accumulate summary in-string)
                           (.take q))))))
#'user/worker
user> (.add q "hello")
true
user> (.add q "goodbye")
true
user> (.add q false)
true
user> @worker
{:vowels 5, :consonents 7}
I came up with something closer to an actor, inspired by Tim Baldridge's cast on actors (Episode 16). I think this addresses the problem much more cleanly.
(defmacro take-all! [c]
  `(loop [acc# []]
     (let [[v# ~c] (alts! [~c] :default nil)]
       (if (not= ~c :default)
         (recur (conj acc# v#))
         acc#))))

(defn eager-actor [f]
  (let [msgbox (chan 1024)]
    (go (loop [f f]
          (let [first-msg (<! msgbox) ; do this so we park efficiently, and only
                                      ; run when there are actually messages
                msgs (take-all! msgbox)
                msgs (concat [first-msg] msgs)]
            (recur (f msgs)))))
    msgbox))
(let [a (eager-actor (fn f [ms]
                       (Thread/sleep 1000) ; simulate work
                       (println "doing something with" ms)
                       f))]
  (doseq [i (range 20)]
    (Thread/sleep 300)
    (put! a i)))
;; =>
;; doing something with (0)
;; doing something with (1 2 3)
;; doing something with (4 5 6)
;; doing something with (7 8 9 10)
;; doing something with (11 12 13)
I'd like to use memoize for a function that uses core.async and <!, e.g.:
(defn foo [x]
  (go
    (<! (timeout 2000))
    (* 2 x)))
(In real life, this could be useful for caching the results of server calls.)
I was able to achieve that by writing a core.async version of memoize (almost the same code as memoize):
(defn memoize-async [f]
  (let [mem (atom {})]
    (fn [& args]
      (go
        (if-let [e (find @mem args)]
          (val e)
          (let [ret (<! (apply f args))] ; this line differs from memoize: [ret (apply f args)]
            (swap! mem assoc args ret)
            ret))))))
Example of usage:
(def foo-memo (memoize-async foo))
(go (println (<! (foo-memo 3)))); delay because of (<! (timeout 2000))
(go (println (<! (foo-memo 3)))); subsequent calls are memoized => no delay
I am wondering if there are simpler ways to achieve the same result.
Remark: I need a solution that works with <!. For <!!, see this question: How to memoize a function that uses core.async and blocking channel read?
You can use the built-in memoize function for this. Start by defining a function that reads from a channel and returns the value:
(defn wait-for [ch]
  (<!! ch))
Note that we'll use <!! and not <!, because we want this function to block until there is data on the channel in all cases. <! only exhibits this behavior when used in a form inside of a go block.
You can then construct your memoized function by composing this function with foo, like so:
(def foo-memo (memoize (comp wait-for foo)))
foo returns a channel, so wait-for will block until that channel has a value (i.e. until the operation inside foo finishes).
foo-memo can be used similar to your example above, except you do not need the call to <! because wait-for will block for you:
(go (println (foo-memo 3)))
You can also call this outside of a go block, and it will behave like you expect (i.e. block the calling thread until foo returns).
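For example, a plain call (no go block), assuming the foo-memo defined just above:

(time (println (foo-memo 3))) ; first call blocks the calling thread for about 2 seconds
(time (println (foo-memo 3))) ; cached: returns almost immediately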
This was a little trickier than I expected. Your solution isn't correct, because if you call your memoized function again with the same arguments before the first call has finished running its go block, you will trigger the underlying function again and get a cache miss. This is often the case when you process lists with core.async.
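A quick way to see the problem (a sketch, assuming foo and the memoize-async from the question):

(def foo-memo (memoize-async foo))
;; Both calls read the still-empty cache before either go block has written
;; to it, so both take the miss branch and foo runs twice.
(go (println "first:"  (<! (foo-memo 3))))
(go (println "second:" (<! (foo-memo 3)))) ; also waits ~2s instead of hitting the cache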
The one below uses core.async's pub/sub to solve this (tested in CLJS only):
(def lookup-sentinel #?(:clj ::not-found :cljs (js-obj)))
(def pending-sentinel #?(:clj ::pending :cljs (js-obj)))
(defn memoize-async
  [f]
  (let [>in (chan)
        pending (pub >in :args)
        mem (atom {})]
    (letfn
      [(memoized [& args]
         (go
           (let [v (get @mem args lookup-sentinel)]
             (condp identical? v
               lookup-sentinel
               (do
                 (swap! mem assoc args pending-sentinel)
                 (go
                   (let [ret (<! (apply f args))]
                     (swap! mem assoc args ret)
                     (put! >in {:args args :ret ret})))
                 (<! (apply memoized args)))
               pending-sentinel
               (let [<out (chan 1)]
                 (sub pending args <out)
                 (:ret (<! <out)))
               v))))]
      memoized)))
NOTE: it probably leaks memory, as subscriptions and <out channels are never closed.
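Usage mirrors the question (a sketch, assuming foo from above); a second call made while the first is still in flight sees the pending-sentinel and waits for the published result instead of re-running foo:

(def foo-memo (memoize-async foo))
(go (println (<! (foo-memo 3)))) ; pays the 2 s delay and publishes the result
(go (println (<! (foo-memo 3)))) ; subscribes and receives the same result; foo runs only once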
I have used this function in one of my projects to cache HTTP calls. The function caches results for a given amount of time and uses a barrier to prevent executing the function multiple times when the cache is "cold" (due to the context switch inside the go block).
(defn memoize-af-until
  [af ms clock]
  (let [barrier (async/chan 1)
        last-return (volatile! nil)
        last-return-ms (volatile! nil)]
    (fn [& args]
      (async/go
        (>! barrier :token)
        (let [now-ms (.now clock)]
          (when (or (not @last-return-ms) (< @last-return-ms (- now-ms ms)))
            (vreset! last-return (<! (apply af args)))
            (vreset! last-return-ms now-ms))
          (<! barrier)
          @last-return)))))
You can test that it works properly by setting the cache time to 0 and observing that the two function calls take approximately 10 seconds in total. Without the barrier, the two calls would finish at the same time:
(def memo (memoize-af-until #(async/timeout 5000) 0 js/Date))
(async/take! (memo) #(println "[:a] Finished"))
(async/take! (memo) #(println "[:b] Finished"))
How do I evaluate foo when the mouse is down and moving, using core.async?
Whilst attempting to learn the concepts behind core.async I have worked through the ClojureScript 101 tutorial (but I suspect this question applies to Clojure too).
I create a channel where mouse movement events are placed using the following:
;; helper to get a channel where a dom event type will be put
(defn listen [el type]
  (let [out (chan)]
    (events/listen el type
      (fn [e] (put! out e)))
    out))
;; create a channel for mouse moves, take the values
;; and pass them to the console
(let [moves (listen (dom/getElement "canvas") "mousemove")]
  (go (while true
        (foo (<! moves)))))
This works: foo is evaluated when the mouse moves. But how can this be done only when the mouse is down?
My first guess would be to use an atom and two new channels for mousedown and mouseup, then update the atom with the mouse state and test against it in the go block. But I suspect this is wrong due to the use of an atom; hence the question.
Answering my own question, here is the closest I have got. Appears to work.
;; Util for creating DOM channels
(defn listen
  "Takes a DOM element and an event type. Returns a channel for the event"
  [el type]
  ;; out is a new channel
  (let [out (chan (sliding-buffer 1))]
    ;; attach an event listener
    (events/listen el type
      ;; the handler/callback of the listener takes the
      ;; event and put!s it on the channel. We are using
      ;; put! because we are not in a go block
      (fn [e] (put! out e)))
    ;; return the channel
    out))
(def canvas-el (dom/getElement "canvas"))
(def mouse-up (listen canvas-el "mouseup"))
(def mouse-down (listen canvas-el "mousedown"))
(def mouse-move (listen canvas-el "mousemove"))
(go (while true
      (<! mouse-down)
      (loop []
        (let [[v ch] (alts! [mouse-move mouse-up])]
          (when (= ch mouse-move)
            (do
              (.log js/console "move" (.-clientX v) (.-clientY v))
              (recur)))))))