I am learning Clojure and trying out its concurrency and efficiency via a producer/consumer example.
I did that, and it felt pretty awkward having to use ref and deref and also add-watch and remove-watch.
I looked at other code snippets, but is there a better way of refactoring this other than using Java's Condition await() and signal() methods along with a Java Lock? I did not want to use anything from Java.
Here is the code; I guess I have made many mistakes in my usage...
; a simple producer class
(ns my.clojure.producer
  (:use my.clojure.consumer)
  (:gen-class))
(def tasklist (ref (list))) ; this is declared as a global variable; to make this
                            ; mutable we need to use the fn ref

(defn gettasklist []
  (deref tasklist)) ; we need to use deref fn to return the task list

(def testagent (agent 0)) ; create an agent
(defn emptytasklist [akey aref old-val new-val]
  (doseq [item (gettasklist)]
    (println (str "item is") item)
    (send testagent consume item)
    (send testagent increment item))
  (Thread/sleep 1000)
  (dosync ; adding a transaction for this is needed to reset
    (remove-watch tasklist "key123") ; removing the watch on the tasklist so that it does not
                                     ; go to a recursive call
    (ref-set tasklist (list)) ; we need to make it as a ref to reassign
    (println (str "The number of tasks now remaining is=") (count (gettasklist))))
  (add-watch tasklist "key123" emptytasklist))
(add-watch tasklist "key123" emptytasklist)
(defn addtask [task]
  (dosync ; adding a transaction for this is needed to refset
    ;(println (str "The number of tasks before") (count (gettasklist)))
    (println (str "Adding a task") task)
    (ref-set tasklist (conj (gettasklist) task)) ; we need to make it as a ref to reassign
    ;(println (str "The number of tasks after") (count (gettasklist)))
    ))
Here is the consumer code:
(ns my.clojure.consumer)

(defn consume [c item]
  (println "In the consume method: item is" c item)
  item)

(defn increment [c n]
  (println "parameters are" c n)
  (+ c n))
And here is the test code (I have used Maven to run the Clojure code and NetBeans to edit, as this is more familiar to me coming from Java; folder structure and pom are at https://github.com/alexcpn/clojure-evolve):
(ns my.clojure.Testproducer
  (:use my.clojure.producer)
  (:use clojure.test)
  (:gen-class))

(deftest test-addandcheck
  (addtask 1)
  (addtask 2)
  (is (= 0 (count (gettasklist))))
  (println (str "The number of tasks are") (count (gettasklist))))
If anybody can refactor this lightly, so that I can still read and understand the code, that would be great; else I guess I will have to learn more.
Edit 1
I guess using a global task list, making it available to other functions by dereferencing it (deref), and making it mutable again via ref is not the Clojure way;
so I changed the addtask function to send the incoming tasks directly to an agent:
(defn addtask [task]
  (dosync ; the transaction is left over from the ref-set version; it is not really needed here
    (println (str "Adding a task") task)
    ;(ref-set tasklist (conj (gettasklist) task)) ; no longer reassigning the ref
    (def testagent (agent 0)) ; create an agent (note: this re-defs a fresh agent on every call)
    (send testagent consume task)
    (send testagent increment task)))
However, when I tested it:
(deftest test-addandcheck
  (loop [task 0]
    (when (< task 100)
      (addtask task)
      (recur (inc task))))
  (is (= 0 (count (gettasklist))))
  (println (str "The number of tasks are") (count (gettasklist))))
after some time I am getting a Java RejectedExecutionException. This would be fine with plain Java threads, because there you take full control; but from Clojure it looks odd, especially since you are not selecting the thread-pool strategy yourself:
Adding a task 85
Exception in thread "pool-1-thread-4" java.util.concurrent.RejectedExecutionException
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1759)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
    at clojure.lang.Agent$Action.execute(Agent.java:56)
    at clojure.lang.Agent$Action.doRun(Agent.java:95)
    at clojure.lang.Agent$Action.run(Agent.java:106)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
Adding a task 86
Adding a task 87
I think the easiest (and most efficient) way to model producer/consumer in Clojure is with lamina channels.
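For illustration, here is a minimal sketch of the idea (assuming lamina's channel, enqueue, receive-all and close functions; check the lamina docs for the exact API):
(require '[lamina.core :as l])
;; the channel is the shared queue between producer and consumer
(def tasks (l/channel))
;; consumer: the callback runs for every message enqueued on the channel
(l/receive-all tasks
  (fn [task]
    (println "consuming" task)))
;; producer: enqueue work items
(doseq [n (range 5)]
  (l/enqueue tasks n))
;; close the channel once no more work will be produced
(l/close tasks)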
I have made a similar Clojure program with a producer/consumer pattern in Computing Folder Sizes Asynchronously.
I don't think that you really need to use refs.
You have a single tasklist, i.e. mutable state that you want to alter synchronously. The changes are relatively quick and do not depend on any other external state, only on the value that comes in to the consumer.
As for atoms, there is the swap! function, which can make the changes the way I understood you need.
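For example, a minimal sketch of what the task list from the question could look like with an atom and swap! (my illustration, not the poster's snippet):
(def tasklist (atom []))
(defn addtask [task]
  ;; swap! applies conj to the current value atomically; no dosync needed
  (swap! tasklist conj task))
(defn gettasklist []
  ;; reading an atom is just a deref
  @tasklist)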
You can look at my snippet Computing folder size; I think it can at least show you proper use of atoms & agents. I played with it a lot, so it should be correct.
Hope it helps!
Regards,
Jan
I was looking at the Clojure examples for Twitter Storm. The author just used a LinkedBlockingQueue. It's easy to use, concurrent, and it performs well. Sure, it lacks the sex appeal of an immutable Clojure solution, but it will work well.
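A minimal sketch of that approach from Clojure (illustrative only; the queue capacity and loop structure are my own choices):
(import 'java.util.concurrent.LinkedBlockingQueue)
;; a bounded queue: put blocks when the queue is full, take blocks when it is empty
(def queue (LinkedBlockingQueue. 10))
;; producer thread
(future
  (doseq [n (range 100)]
    (.put queue n)))
;; consumer thread
(future
  (loop []
    (println "consumed" (.take queue))
    (recur)))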
I've come across several use cases where I need the ability to:
strictly control the number of worker threads on both the producer and consumer side
control the maximum size of the "work queue" in order to limit memory consumption
detect when all work has been completed so that I can shut down the workers
I've found that Clojure's built-in concurrency features (while amazingly simple and useful in their own right) make the first two bullet points difficult. lamina looks great, but I didn't see a way that it would address my particular use cases without the same sort of extra plumbing that I'd need to do around an implementation based on BlockingQueue.
So I ended up hacking together a simple Clojure library to try to solve my problems. It is basically just a wrapper around BlockingQueue that attempts to conceal some of the Java constructs and provide a higher-level producer-consumer API. I'm not entirely satisfied with the API yet; it'll likely evolve a bit further... but it's operational:
https://github.com/cprice-puppet/freemarket
Example usage:
(def myproducer (producer producer-work-fn num-workers max-work))
(def myconsumer (consumer myproducer consumer-work-fn num-workers max-results))
(doseq [result (work-queue->seq (:result-queue myconsumer))]
  (println result))
Feedback / suggestions / contributions would be welcomed!
Related
I have a number of threads that need to access a collection of values. Some of these values also need to be persisted to a database when changes are made to them, so that I don't lose state in case of a server reboot, etc.
Currently I'm using an atom to store these values, and I have a set of functions which I call when something in the atom needs to change.
Inside these functions I'm also persisting data to the database before calling swap!. I chose this approach because I need to read the values inside the atom frequently, and it doesn't seem performant to open and close DB connections every time I'm interested in one of the values.
The question:
Is this approach viable? I'm interested to know if anyone has had success implementing a similar solution, or are there pitfalls I should be aware of?
Atoms are fine.
An alternative approach would be to use https://github.com/clojure/core.memoize or core.cache directly, as suggested by Stefan Kamphausen.
Approach:
Cache query results at the function level. This way you can be sure that what you get back is exactly what the database would return, not what you think it would serialize/deserialize to.
Invalidate key/args after you have inserted/changed something in the database.
One benefit of this approach is that you can tweak the caching behavior: TTL, LRU, FIFO, etc.
Demo:
(require '[clojure.core.memoize :as memo])

;; suppose this is a real DB
(def db (atom {}))

(defn my-get [k]
  ;; expensive database call
  (Thread/sleep 5000)
  (get @db k))

(def my-get-cached
  (memo/memo my-get))

(defn my-put
  [k val]
  (swap! db assoc k val)
  (memo/memo-clear! my-get-cached [k]))

(comment
  (my-put :foo "the value")
  (my-get-cached :foo) ;; wait 5 seconds, "the value"
  (my-get-cached :foo) ;; "the value", instantly
  (my-put :bar "other-value")
  (my-get-cached :foo) ;; "the value", still instantly
  (my-get-cached :bar) ;; wait 5 seconds, "other value"
  (my-put :foo "changed")
  (my-get-cached :foo) ;; wait 5 seconds, "changed"
  )
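The demo above uses memo/memo (no eviction); to get TTL or LRU behavior you could instead wrap my-get like this (the threshold values are arbitrary):
;; cache entries expire 30 seconds after they are written
(def my-get-ttl (memo/ttl my-get :ttl/threshold 30000))
;; keep only the 256 most recently used results
(def my-get-lru (memo/lru my-get :lru/threshold 256))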
I am creating an agent for writing changes back to a database (as discussed in the agent-based write-behind log in Clojure Programming).
This is working fine, but I am struggling to create the agent late. I don't want to create it as a def, as I don't want it created when my tests are running (I see the pool starting up when the tests load the forms even though I use with-redefs to set a test value).
The code I started with is (using c3p0 pooling):
(def dba (agent (pool/make-datasource-spec (u/load-config "db.edn"))))
I tried making the agent nil and investigated how I could set it in the main of my application, when it's really needed. But there doesn't seem to be an equivalent of the reset! function that atoms have. And the following code failed, saying the agent wasn't in error and so didn't need restarting:
(when (not @dba)
  (restart-agent dba (create-db-pool)))
So at the moment, I have an atom containing the agent, where I then do:
(def dba (atom nil))
;; ...
(defn init-db! []
  (when (not @dba)
    (log/info "Creating agent for pooled connection")
    (reset! dba (agent (create-db-pool)))))
But the very fact I'm having to do @@dba to reference the contents of the agent (i.e. the pool) makes me think this is insane.
Is there a more obvious way of creating the pool agent lazily?
delay is useful for cases like this. It causes the item to be created the first time it is read, so if your tests don't read it, it will not be created.
user=> (def my-agent (delay (do (println "im making the agent now") (agent 0))))
#'user/my-agent
user=>
user=> @my-agent
im making the agent now
#object[clojure.lang.Agent 0x2cd73ca1 {:status :ready, :val 0}]
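To actually use the agent behind the delay you deref twice: once for the delay, once for the agent. A small sketch of how that might look with the pooling example from the question (log-write! and current-pool are hypothetical names of mine):
;; the pool agent is only created on first use
(def dba (delay (agent (pool/make-datasource-spec (u/load-config "db.edn")))))
;; sending work: @dba forces the delay and returns the agent
(defn log-write! [write-fn]
  (send-off @dba write-fn))
;; reading the current pool value: deref the delay, then the agent
(defn current-pool []
  @@dba)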
I am working on a Clojure/Jetty web service. I have a special URL that I want to be serviced only one request at a time. If the URL is requested, and it is requested again before the first request returns, I want to return immediately. So in my core.clj, where I define my routes, I have something like this:
(def work-in-progress (ref false))
Then, sometime later:
(compojure.core/GET "/myapp/internal/do-work" []
  (if @work-in-progress
    "Work in Progress please try again later"
    (do
      (dosync
        (ref-set work-in-progress true))
      (do-the-work)
      (dosync
        (ref-set work-in-progress false))
      "Job completed Successfully")))
I have tried this on a local Jetty server, but I still seem to be able to hit the URL twice and double the work. What is a good pattern/way to implement this in Clojure in a threaded web-server environment?
Imagine the following race condition for the solution proposed in the question:
Thread A starts to execute the handler's body. @work-in-progress is false, so it enters the do expression. However, before it manages to set the value of work-in-progress to true...
Thread B starts to execute the handler's body. @work-in-progress is still false, so it also enters the do expression.
Now two threads are executing (do-the-work) concurrently. That's not what we want.
To prevent this problem, check and set the value of the ref in a single dosync transaction:
(compojure.core/GET "/myapp/internal/do-work" []
  (if (dosync
        (when-not @work-in-progress
          (ref-set work-in-progress true)))
    (try
      (do-the-work)
      "Job completed Successfully"
      (finally
        (dosync
          (ref-set work-in-progress false))))
    "Work in Progress please try again later"))
Another abstraction which you might find useful in this scenario is an atom and compare-and-set!.
(def work-in-progress (atom false))
(compojure.core/GET "/myapp/internal/do-work" []
  (if (compare-and-set! work-in-progress false true)
    (try
      (do-the-work)
      "Job completed Successfully"
      (finally
        (reset! work-in-progress false)))
    "Work in Progress please try again later"))
Actually this is the natural use case for a lock; in particular, a java.util.concurrent.locks.ReentrantLock.
The same pattern came up in my answer to an earlier SO question, Canonical Way to Ensure Only One Instance of a Service Is Running / Starting / Stopping in Clojure?; I'll repeat the relevant piece of code here:
(import java.util.concurrent.locks.ReentrantLock)
(def lock (ReentrantLock.))
(defn start []
  (if (.tryLock lock)
    (try
      (do-stuff)
      (finally (.unlock lock)))
    (do-other-stuff)))
The tryLock method attempts to acquire the lock, returning true if it succeeds in doing so and false otherwise, not blocking in either case.
Consider queueing access to the resource as well: in addition to giving you functionality equivalent to locks/flags, queues let you observe the resource contention, among other advantages.
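One way to sketch that (my own illustration): funnel all access to the resource through a single agent, so requests are queued and executed one at a time, and the backlog is observable:
(def submitted (atom 0))
;; a single agent serializes access: jobs sent to it run one at a time, in order
(def worker (agent 0)) ; value = number of completed jobs
(defn request-work! []
  (swap! submitted inc)
  (send-off worker
            (fn [completed]
              (do-the-work) ; the guarded resource access
              (inc completed))))
;; contention shows up as the number of jobs still waiting in the queue
(defn pending-work []
  (- @submitted @worker))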
How can I create a constantly running background process in Clojure? Is using "future" with a loop that never ends the right way?
You could just start a Thread with a function that runs forever.
(defn forever []
  ;; do stuff in a loop forever
  )

(.start (Thread. forever))
If you don't want the background thread to block process exit, make sure to make it a daemon thread:
(doto (Thread. forever)
  (.setDaemon true)
  (.start))
If you want some more finesse, you can use the java.util.concurrent.Executors factory to create an ExecutorService. This makes it easy to create pools of threads, use custom thread factories, custom incoming queues, etc.
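For example, a small sketch of handing the background work to a fixed pool (pool size and shutdown handling are my own choices here):
(import 'java.util.concurrent.Executors)
;; a pool of two background worker threads
(def pool (Executors/newFixedThreadPool 2))
;; submit the forever function from above to the pool
(.execute pool forever)
;; when you want to stop, interrupt the workers and reject new work:
;; (.shutdownNow pool)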
The claypoole lib wraps some of the work-execution stuff up into a more Clojure-friendly API if that's what you're angling towards.
My simple higher-order infinite loop function (using futures):
(def counter (atom 1))

(defn infinite-loop [function]
  (function)
  (future (infinite-loop function))
  nil)
;; note the nil above is necessary to avoid overflowing the stack with futures...
(infinite-loop
  #(do
     (Thread/sleep 1000)
     (swap! counter inc)))
;; wait half a minute....
@counter
=> 31
I strongly recommend using an atom or one of Clojure's other reference types to store results (as per the counter in the example above).
With a bit of tweaking you could also use this approach to start/stop/pause the process in a thread-safe manner (e.g. test a flag to see if (function) should be executed in each iteration of the loop).
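A possible sketch of that tweak (my own variation on the function above), using an atom as the run flag:
(def state (atom :running)) ; :running, :paused or :stopped
(defn controllable-loop [function]
  (case @state
    :running (function)          ; normal iteration
    :paused  (Thread/sleep 100)  ; skip the work but keep looping
    :stopped nil)
  (when-not (= :stopped @state)
    (future (controllable-loop function)))
  nil)
;; (reset! state :paused)  ; pause
;; (reset! state :running) ; resume
;; (reset! state :stopped) ; stop for good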
Maybe, or perhaps Lein-daemon? https://github.com/arohner/lein-daemon
I'm working with a messaging toolkit (it happens to be Spread but I don't know that the details matter). Receiving messages from this toolkit requires some boilerplate:
Create a connection to the daemon.
Join a group.
Receive one or more messages.
Leave the group.
Disconnect from the daemon.
Following some idioms that I've seen used elsewhere, I was able to cook up some working functions using Spread's Java API and Clojure's interop forms:
(defn connect-to-daemon
  "Open a connection"
  [daemon-spec]
  (let [connection (SpreadConnection.)
        {:keys [host port user]} daemon-spec]
    (doto connection
      (.connect (InetAddress/getByName host) port user false false))))

(defn join-group
  "Join a group on a connection"
  [cxn group-name]
  (doto (SpreadGroup.)
    (.join cxn group-name)))

(defn with-daemon*
  "Execute a function with a connection to the specified daemon"
  [daemon-spec func]
  (let [daemon (merge *spread-daemon* daemon-spec)
        cxn    (connect-to-daemon daemon-spec)]
    (try
      (binding [*spread-daemon* (assoc daemon :connection cxn)]
        (func))
      (finally
        (.disconnect cxn)))))

(defn with-group*
  "Execute a function while joined to a group"
  [group-name func]
  (let [cxn (:connection *spread-daemon*)
        grp (join-group cxn group-name)]
    (try
      (binding [*spread-group* grp]
        (func))
      (finally
        (.leave grp)))))

(defn receive-message
  "Receive a single message. If none are available, this will block indefinitely."
  []
  (let [cxn (:connection *spread-daemon*)]
    (.receive cxn)))
(Basically the same idiom as with-open, just that the SpreadConnection class uses disconnect instead of close. Grr. Also, I left out some macros that aren't relevant to the structural question here.)
This works well enough. I can call receive-message from inside of a structure like:
(with-daemon {:host "localhost" :port 4803}
  (with-group "aGroup"
    (... looping ...
      (let [msg (receive-message)]
        ...))))
It occurs to me that receive-message would be cleaner to use if it were an infinite lazy sequence that produces messages. So, if I wanted to join a group and get messages, the calling code should look something like:
(def message-seq (messages-from {:host "localhost" :port 4803} "aGroup"))
(take 5 message-seq)
I've seen plenty of examples of lazy sequences without cleanup; that's not too hard. The catch is steps #4 and #5 from above: leaving the group and disconnecting from the daemon. How can I bind the state of the connection and group into the sequence, and run the necessary cleanup code when the sequence is no longer needed?
This article describes how to do exactly that using clojure-contrib's fill-queue. Regarding cleanup: the neat thing about fill-queue is that you can supply a blocking function that cleans itself up if there is an error or some condition is reached. You can also hold a reference to the resource to control it externally; the sequence will just terminate. So, depending on your semantic requirements, you'll have to choose the strategy that fits.
Try this:
(ns your-namespace
  (:use clojure.contrib.seq-utils))

(defn messages-from [daemon-spec group-name]
  (let [cnx   (connect-to-daemon daemon-spec)
        group (join-group cnx group-name)]
    (fill-queue (fn [fill]
                  (if done?
                    (do
                      (.leave group)
                      (.disconnect cnx)
                      (throw (RuntimeException. "Finished messages")))
                    (fill (.receive cnx)))))))
Set done? to true when you want to terminate the list. Also, any exceptions thrown in (.receive cnx) will also terminate the list.
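For instance (my own sketch, not part of the answer above), done? could be an atom that the fill function checks with @done?, and the resulting lazy sequence is consumed like any other:
(def done? (atom false)) ; inside messages-from the check would then be (if @done? ...)
(def message-seq (messages-from {:host "localhost" :port 4803} "aGroup"))
;; consume lazily; the connection stays open while messages are being pulled
(doseq [msg (take 5 message-seq)]
  (println msg))
;; signal the producer side to clean up and end the sequence
(reset! done? true)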