I'm checking whether YeSQL might help in my Clojure project, but I cannot find any example of YeSQL using a connection pool.
Does this mean that YeSQL creates a new connection for every statement?
I also found an example of using transactions via clojure.java.jdbc/with-db-transaction, but I suspect it is outdated (I haven't tried it yet).
Does this mean that YeSQL depends on clojure.java.jdbc for commit/rollback control? In that case, shouldn't I just use clojure.java.jdbc alone, since YeSQL doesn't offer much more (other than naming my queries and externalizing them)?
Thanks in advance.
YeSQL doesn't handle connections or connection pooling. You need to handle it externally and provide a connection instance to the query function.
As you can see from the YeSQL example in the README:
; Define a database connection spec. (This is standard clojure.java.jdbc.)
(def db-spec {:classname "org.postgresql.Driver"
              :subprotocol "postgresql"
              :subname "//localhost:5432/demo"
              :user "me"})

; Use it standalone. Note that the first argument is the db-spec.
(users-by-country db-spec "GB")
;=> ({:count 58})

; Use it in a clojure.java.jdbc transaction.
(require '[clojure.java.jdbc :as jdbc])
(jdbc/with-db-transaction [connection db-spec]
  {:limeys (users-by-country connection "GB")
   :yanks (users-by-country connection "US")})
If you're asking how to add connection pool handling, you might check the example in the Clojure Cookbook.
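For reference, here is a minimal sketch of pooling with c3p0 along the lines of that recipe (the c3p0 class and the pooled-spec helper are my own illustration, not something YeSQL provides); a map with a :datasource key works anywhere a plain db-spec does:

(import 'com.mchange.v2.c3p0.ComboPooledDataSource)

;; Wrap a plain db-spec in a c3p0 connection pool.
(defn pooled-spec [{:keys [classname subprotocol subname user]}]
  {:datasource (doto (ComboPooledDataSource.)
                 (.setDriverClass classname)
                 (.setJdbcUrl (str "jdbc:" subprotocol ":" subname))
                 (.setUser user))})

(def pooled-db (pooled-spec db-spec))

;; YeSQL query functions accept the pooled spec just like a plain db-spec:
(users-by-country pooled-db "GB")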
As for transaction handling, the YeSQL documentation doesn't cover it, but the source is quite easy to understand:
(defn- emit-query-fn
  "Emit function to run a query.
   - If the query name ends in `!` it will call `clojure.java.jdbc/execute!`,
   - If the query name ends in `<!` it will call `clojure.java.jdbc/insert!`,
   - otherwise `clojure.java.jdbc/query` will be used."
  [{:keys [name docstring statement]}]
  (let [split-query (split-at-parameters statement)
        {:keys [query-args display-args function-args]} (split-query->args split-query)
        jdbc-fn (cond
                  (= [\< \!] (take-last 2 name)) `insert-handler
                  (= \! (last name)) `execute-handler
                  :else `jdbc/query)]
    `(def ~(fn-symbol (symbol name) docstring statement display-args)
       (fn [db# ~@function-args]
         (~jdbc-fn db#
                   (reassemble-query '~split-query
                                     ~query-args))))))
So it just generates a function that calls clojure.java.jdbc/execute!, clojure.java.jdbc/insert!, or clojure.java.jdbc/query with the generated query. You might want to refer to those functions' documentation for further details.
When doing transactions with YesQL, I wrap the YesQL calls in a clojure.java.jdbc/with-db-transaction invocation and pass the generated connection details to the YesQL function calls, which use them instead of the standard non-transaction connection when supplied. For example:
;; supply a name for the transaction connection, t-con, based on the existing db connection
(jdbc/with-db-transaction [t-con db-spec]
  ;; this is the yesql call with a second map specifying the transaction connection
  (create-client-order<! {...} {:connection t-con}))
All wrapped YesQL calls using the {:connection t-con} map will be part of the same transaction.
(defn my-func [opts]
(assoc opts :something :else))
What I want to be able to do is serialize a reference to the function (maybe via #'my-func?) to a string in such a way that, upon deserializing it, I can invoke it with args.
How does this work?
Edit: Why This Is Not a Duplicate
The other question asked how to serialize a function body, i.e. the entire function code. I am not asking how to do that. I am asking how to serialize a reference.
Imagine a cluster of servers all running the same jar, attached to an MQ. The MQ publishes fn-references and fn-args for functions in the jar, and a server in the cluster runs the function and acks it. That's what I'm trying to do; not pass function bodies around.
In some ways, this is like building a "serverless" engine in clojure.
Weirdly, a commit for serializing var identity was just added to Clojure yesterday: https://github.com/clojure/clojure/commit/a26dfc1390c53ca10dba750b8d5e6b93e846c067
So as of the latest master snapshot version, you can serialize a Var (like #'clojure.core/conj) and deserialize it on another JVM with access to the same loaded code, and invoke it.
(import [java.io File FileOutputStream FileInputStream ObjectOutputStream ObjectInputStream])

(defn write-obj [o f]
  (let [oos (ObjectOutputStream. (FileOutputStream. (File. f)))]
    (.writeObject oos o)
    (.close oos)))

(defn read-obj [f]
  (let [ois (ObjectInputStream. (FileInputStream. (File. f)))
        o (.readObject ois)]
    (.close ois)
    o))
;; in one JVM
(write-obj #'clojure.core/conj "var.ser")
;; in another JVM
(read-obj "var.ser")
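Since Vars are invokable (they delegate to their current value), the deserialized Var can be called directly, assuming the same code is loaded in the reading JVM. A quick usage sketch:

;; in the second JVM
(let [f (read-obj "var.ser")]
  (f [1 2] 3))
;=> [1 2 3]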
As suggested in the comments, if you can just serialize a keyword label for the function and store/retrieve that, you're done.
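Here is a minimal sketch of that keyword-label approach using the my-func from the question (the registry map and the run-serialized name are illustrative, not part of any library):

;; Map each serializable label to the function it stands for.
(def fn-registry
  {:my-func my-func})

;; fn-key is the keyword you stored or received; args is a seq of arguments.
(defn run-serialized [fn-key args]
  (apply (get fn-registry fn-key) args))

(run-serialized :my-func [{:foo 1}])
;=> {:foo 1, :something :else}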
If you need to transmit the function from one place to another, you essentially need to send the function source code as a string and then have it compiled via eval on the other end. This is what Datomic does when a Database Function is stored in the DB and automatically run by Datomic for any new additions/changes to the DB (these can perform automatic data validation, for example). See:
http://docs.datomic.com/database-functions.html
http://docs.datomic.com/clojure/index.html#datomic.api/function
A similar technique is used in the book Clojure in Action (1st edition) for the distributed compute engine example using RabbitMQ.
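A minimal sketch of the source-as-string idea (this shows the general technique only, not Datomic's actual API):

;; The function travels as a string...
(def fn-source "(fn [order] (assoc order :validated true))")

;; ...and is compiled on the receiving end via read-string + eval.
(def compiled-fn (eval (read-string fn-source)))

(compiled-fn {:id 1})
;=> {:id 1, :validated true}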
I have a RabbitMQ connection that seems to be opened at compile time (when I type lein compile), which then blocks the building of my project. Here are more details on the problem. Let us say this is the Clojure file bla_test.clj:
(import (com.rabbitmq.client ConnectionFactory Connection Channel QueueingConsumer))

;; And then we have to translate the equivalent java hello world program using
;; Clojure's excellent interop.
;; It feels very strange writing this sort of ceremony-oriented imperative code
;; in Clojure:

;; Make a connection factory on the local host
(def connection-factory
  (doto (ConnectionFactory.)
    (.setHost "localhost")))

;; and get it to make you a connection
(def connection (.newConnection connection-factory))

;; get that to make you a channel
(def channel (. connection createChannel))

;; HERE I WOULD LIKE TO USE THE SAME CONNECTION AND THE SAME CHANNEL INSTANCE AS OFTEN AS
;; I LIKE
(dotimes [i 10]
  (. channel basicPublish "" "hello" nil (. (format "Hello World! (%d)" i) getBytes)))
The Clojure file above is part of a bigger Clojure program that I build using lein. My problem is that when I compile with "lein compile", a connection is opened because of the line (def connection (.newConnection connection-factory)), and the compilation then blocks! How can I avoid this? Is there a way to compile without opening the connection? And how can I manage to use the same instance of channel across several calls coming from external components?
Any help would be appreciated.
Regards,
Horace
The Clojure compiler must evaluate all top-level forms, because it can be required to run arbitrary code when expanding calls to macros.
The usual solution to issues like the one you describe is to define a top-level Var holding an object of a dereferenceable type, for example an atom or a promise, and have an initialization function provide the value at runtime. (You could also use a delay and specify the value inline; this is less flexible, since it makes it more difficult to use a different value for testing etc.)
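Applied to the RabbitMQ code from the question, a minimal sketch of the delay variant might look like this (publish-hello is an illustrative name, not part of the original code); nothing connects at compile time, only on the first deref at runtime:

(def connection-factory
  (delay (doto (ConnectionFactory.)
           (.setHost "localhost"))))

(def connection (delay (.newConnection @connection-factory)))

(def channel (delay (.createChannel @connection)))

;; The same channel instance is reused across calls.
(defn publish-hello [n]
  (dotimes [i n]
    (.basicPublish @channel "" "hello" nil
                   (.getBytes (format "Hello World! (%d)" i)))))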
I have a closure in which a future takes a do block. Each function inside the do block is provided by the arguments of the closure:
(defn accept-order
  [persist record track notify log]
  (fn [sponsor order]
    (let [datetime (to-timestamp (local-now))
          order (merge order {:network_reviewed_at datetime
                              :workflow_state "unconfirmed"
                              :sponsor_id (:id sponsor)})]
      (future
        (do
          (persist order
                   (select-keys order [:network_reviewed_at
                                       :workflow_state
                                       :sponsor_id]))
          (record sponsor order true)
          (track)
          (notify sponsor order)
          (log sponsor order)))
      order)))
No function in the do block is fired. If I deref the future, it works. If I remove the future it works. If I run from a REPL, it works. But if I run lein test, it won't work.
Any ideas? Thank you!
Adding a (Thread/sleep 2000) to a test invoking your function causes the future to run, so I'd venture a guess that Leiningen is killing the VM before your future gets to run (or at least before it manages to cause its side effects). Leiningen does kill the VM immediately after running tests.
As a side note, you don't need the do. future takes a body, not a single expression.
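For example, the future in the question's code could be written directly as:

(future
  (persist order (select-keys order [:network_reviewed_at
                                     :workflow_state
                                     :sponsor_id]))
  (record sponsor order true)
  (track)
  (notify sponsor order)
  (log sponsor order))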
I'm confused by how calls with carmine should be done. I found the wcar macro described in carmine's docs:
(defmacro wcar [& body] `(car/with-conn pool spec-server1 ~@body))
Do I really have to call wcar every time I want to talk to Redis, in addition to the Redis command itself? Or can I just call it once at the beginning? If so, how?
This is what some code with tavisrudd's redis library looked like (from my toy URL shortener project's test suite):
(deftest test_shorten_doesnt_exist_create_new_next
  (redis/with-server test-server
    (redis/set "url_counter" 51)
    (shorten test-url)
    (is (= "1g" (redis/get (str "urls|" test-url))))
    (is (= test-url (redis/get "shorts|1g")))))
And now I can only get it working with carmine by writing it like this:
(deftest test_shorten_doesnt_exist_create_new_next
  (wcar (car/set "url_counter" 51))
  (shorten test-url)
  (is (= "1g" (wcar (car/get (str "urls|" test-url)))))
  (is (= test-url (wcar (car/get "shorts|1g")))))
So what's the right way of using it and what underlying concept am I not getting?
Dan's explanation is correct.
Carmine uses response pipelining by default, whereas redis-clojure requires you to ask for pipelining when you want it (using the pipeline macro).
The main reason you'd want pipelining is for performance. Redis is so fast that the bottleneck in using it is often the time it takes for the request+response to travel over the network.
Clojure destructuring provides a convenient way of dealing with the pipelined response, but it does require writing your code differently than with redis-clojure. The way I'd write your example is something like this (I'm assuming your shorten fn has side effects and needs to be called before the GETs):
(deftest test_shorten_doesnt_exist_create_new_next
  (wcar (car/set "url_counter" 51))
  (shorten test-url)
  (let [[response1 response2] (wcar (car/get (str "urls|" test-url))
                                    (car/get "shorts|1g"))]
    (is (= "1g" response1))
    (is (= test-url response2))))
So we're sending the first (SET) request to Redis and waiting for the reply (I'm not certain if that's actually necessary here). We then send the next two (GET) requests at once, allow Redis to queue the responses, then receive them all back at once as a vector that we'll destructure.
At first this may seem like unnecessary extra effort because it requires you to be explicit about when to receive queued responses, but it brings a lot of benefits including performance, clarity, and composable commands.
I'd check out Touchstone on GitHub if you're looking for an example of what I'd consider idiomatic Carmine use (just search for the wcar calls). (Sorry, SO is preventing me from including another link).
Otherwise just pop me an email (or file a GitHub issue) if you have any other questions.
Don't worry, you're using it the correct way already.
The Redis request functions (such as the get and set that you're using above) are all routed through another function send-request! that relies on a dynamically bound *context* to provide the connection. Attempting to call any of these Redis commands without that context will fail with a "no context" error. The with-conn macro (used in wcar) sets that context and provides the connection.
The wcar macro is then just a thin wrapper around with-conn making the assumption that you will be using the same connection details for all Redis requests.
So far this is all very similar to how Tavis Rudd's redis-clojure works.
So, the question now is: why does Carmine need multiple wcar calls when redis-clojure only required a single with-server?
And the answer is, it doesn't. Apart from sometimes, when it does. Carmine's with-conn uses Redis's "Pipelining" to send multiple requests with the same connection and then package the responses together in a vector. The example from the README shows this in action.
(wcar (car/ping)
      (car/set "foo" "bar")
      (car/get "foo"))
=> ["PONG" "OK" "bar"]
Here you can see that ping, set, and get are only concerned with sending requests, leaving the receiving of responses up to wcar. This precludes asserts (or any result access) inside wcar and leads to the separation of requests and multiple wcar calls that you have.
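So, with Carmine, the requests go inside wcar and any assertions on the results go outside, once wcar has returned the queued replies as a vector. For example:

(let [[_ping _set foo] (wcar (car/ping)
                             (car/set "foo" "bar")
                             (car/get "foo"))]
  (is (= "bar" foo)))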
I'm working with a messaging toolkit (it happens to be Spread but I don't know that the details matter). Receiving messages from this toolkit requires some boilerplate:
1. Create a connection to the daemon.
2. Join a group.
3. Receive one or more messages.
4. Leave the group.
5. Disconnect from the daemon.
Following some idioms that I've seen used elsewhere, I was able to cook up some working functions using Spread's Java API and Clojure's interop forms:
(defn connect-to-daemon
  "Open a connection"
  [daemon-spec]
  (let [connection (SpreadConnection.)
        {:keys [host port user]} daemon-spec]
    (doto connection
      (.connect (InetAddress/getByName host) port user false false))))

(defn join-group
  "Join a group on a connection"
  [cxn group-name]
  (doto (SpreadGroup.)
    (.join cxn group-name)))

(defn with-daemon*
  "Execute a function with a connection to the specified daemon"
  [daemon-spec func]
  (let [daemon (merge *spread-daemon* daemon-spec)
        cxn (connect-to-daemon daemon-spec)]
    (try
      (binding [*spread-daemon* (assoc daemon :connection cxn)]
        (func))
      (finally
        (.disconnect cxn)))))

(defn with-group*
  "Execute a function while joined to a group"
  [group-name func]
  (let [cxn (:connection *spread-daemon*)
        grp (join-group cxn group-name)]
    (try
      (binding [*spread-group* grp]
        (func))
      (finally
        (.leave grp)))))

(defn receive-message
  "Receive a single message. If none are available, this will block indefinitely."
  []
  (let [cxn (:connection *spread-daemon*)]
    (.receive cxn)))
(Basically the same idiom as with-open, just that the SpreadConnection class uses disconnect instead of close. Grr. Also, I left out some macros that aren't relevant to the structural question here.)
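For completeness, the omitted macros presumably follow the usual with-* pattern, something like the sketch below (a guess on my part; the actual macros may differ):

(defmacro with-daemon [daemon-spec & body]
  `(with-daemon* ~daemon-spec (fn [] ~@body)))

(defmacro with-group [group-name & body]
  `(with-group* ~group-name (fn [] ~@body)))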
This works well enough. I can call receive-message from inside of a structure like:
(with-daemon {:host "localhost" :port 4803}
  (with-group "aGroup"
    (... looping ...
      (let [msg (receive-message)]
        ...))))
It occurs to me that receive-message would be cleaner to use if it were an infinite lazy sequence that produces messages. So, if I wanted to join a group and get messages, the calling code should look something like:
(def message-seq (messages-from {:host "localhost" :port 4803} "aGroup"))
(take 5 message-seq)
I've seen plenty of examples of lazy sequences without cleanup; that's not too hard. The catch is steps #4 and #5 from above: leaving the group and disconnecting from the daemon. How can I bind the state of the connection and group into the sequence and run the necessary cleanup code when the sequence is no longer needed?
This article describes how to do exactly that using clojure-contrib's fill-queue. Regarding cleanup: the neat thing about fill-queue is that you can supply a blocking function that cleans up after itself if there is an error or some condition is reached. You can also hold a reference to the resource to control it externally. The sequence will just terminate. So, depending on your semantic requirements, you'll have to choose the strategy that fits.
Try this:
(ns your-namespace
  (:use clojure.contrib.seq-utils))

(defn messages-from [daemon-spec group-name]
  (let [cnx (connect-to-daemon daemon-spec)
        group (join-group cnx group-name)]
    (fill-queue (fn [fill]
                  (if done?
                    (do
                      (.leave group)
                      (.disconnect cnx)
                      (throw (RuntimeException. "Finished messages")))
                    (fill (.receive cnx)))))))
Set done? to true when you want to terminate the list; any exception thrown in (.receive cnx) will terminate it as well.