I'm writing an application server in Clojure that will use ClojureScript on the client.
I'd like to find an efficient, idiomatic way to push data from the server to the client as realtime events, ideally using some combination of:
http-kit
core.async
Ring
(But I'm open to other possibilities)
Can anyone provide a good example / approach to doing this?
I prefer to use aleph; here is the wiki. You can simply use the wrap-ring-handler function to wrap your existing handlers.
For the 'push' part, the most useful piece is aleph's async handler. It is built on top of Netty rather than a one-connection-per-thread model, so the server does not need to worry about the TCP connection count.
Some implementation details:
The server side uses the async handler and holds all the client connections (channels).
If there is no new data within 60 seconds (for example), send an empty response.
If the server does have new data, send it immediately.
The client side simply sends a normal HTTP request to the server.
When the client gets the response, it handles the response body and then sends a new HTTP request.
Make sure the client and all intermediate proxy servers are configured with an appropriate timeout value.
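A rough sketch of the server side of this flow, written against current aleph (which uses manifold deferreds rather than the older wrap-ring-handler / lamina API mentioned above); all names below are illustrative, not from my actual code:

(require '[aleph.http :as http]
         '[manifold.deferred :as d]
         '[manifold.time :as mt])

;; Deferreds of clients currently parked on a long poll.
(def waiting-clients (atom #{}))

(defn poll-handler
  "Returns an unrealized deferred; aleph sends the response once it is realized."
  [req]
  (let [response (d/deferred)]
    (swap! waiting-clients conj response)
    ;; If nothing arrives within 60 seconds, complete with an empty body so the
    ;; client (and any proxies) see a response and simply re-poll.
    (mt/in 60000 #(when (d/success! response {:status 200 :body ""})
                    (swap! waiting-clients disj response)))
    response))

(defn push-to-all!
  "Realizes every parked request with the new data."
  [data]
  (doseq [r @waiting-clients]
    (when (d/success! r {:status 200 :body data})
      (swap! waiting-clients disj r))))

;; (http/start-server poll-handler {:port 8080})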
There are more approaches listed here: http://en.wikipedia.org/wiki/Push_technology
I've been trying out the library Chord recently, and I really like it.
It provides a small core.async wrapper around the Websocket support in http-kit.
From the github page:
On the Server
(:require [chord.http-kit :refer [with-channel]]
          [clojure.core.async :refer [<! >! put! close! go]])

(defn your-handler [req]
  (with-channel req ws-ch
    (go
      (let [{:keys [message]} (<! ws-ch)]
        (println "Message received:" message)
        (>! ws-ch "Hello client from server!")
        (close! ws-ch)))))
On the Client
(:require [chord.client :refer [ws-ch]]
          [cljs.core.async :refer [<! >! put! close!]])
(:require-macros [cljs.core.async.macros :refer [go]])

(go
  (let [ws (<! (ws-ch "ws://localhost:3000/ws"))]
    (>! ws "Hello server from client!")))
I think it's still in early stages though - it doesn't handle disconnections yet.
I'm developing a project right now where I had the exact same requirement. I used pedestal-service in combination with core.async to implement SSE, and it's working really well.
Unfortunately, I can't open-source this work right now, but basically I did something like the snippets below, only more complicated because of authentication. (It's not particularly easy to do authentication over SSE from the browser, because you can't pass any custom headers in your new EventSource(SOME_URI); call.)
So the snippets:
(ns chat-service.service
  (:require [clojure.set :as set]
            [clojure.core.async :as async :refer [<!! >!! <! >!]]
            [cheshire.core :as json]
            [io.pedestal.service.http :as bootstrap]
            [io.pedestal.service.log :as log]
            [io.pedestal.service.http.route :as route]
            [io.pedestal.service.http.sse :as sse]
            [io.pedestal.service.http.route.definition :refer [defroutes]]))

(def ^{:private true :doc "Formatting opts"} json-opts
  {:date-format "MMM dd, yyyy HH:mm:ss Z"})

(def ^{:private true :doc "Users to notification channels"} subscribers->notifications
  (atom {}))

;; private helper functions

(def ^:private generate-id #(.toString (java.util.UUID/randomUUID)))

(defn- sse-msg [event msg-data]
  {:event event :msg msg-data})
;; service functions
(defn- remove-subscriber
  "Removes the transport channel from the subscribers->notifications atom and
  tears down the SSE connection."
  [transport-channel context]
  (let [subscriber (get (set/map-invert @subscribers->notifications) transport-channel)]
    (log/info :msg (str "Removing SSE connection for subscriber with ID: " subscriber))
    (swap! subscribers->notifications dissoc subscriber)
    (sse/end-event-stream context)))

(defn send-event
  "Sends updates via the SSE connection; also takes the transport channel so it
  can be closed in case of an exception."
  [transport-channel context {:keys [event msg]}]
  (try
    (log/info :msg "calling event sending fn")
    (sse/send-event context event (json/generate-string msg json-opts))
    (catch java.io.IOException ioe
      (async/close! transport-channel))))
(defn create-transport-channel
  "Creates a transport channel whose receiving end pushes updates to the SSE
  connection. Associates this transport channel in the subscribers->notifications
  atom under a randomly generated UUID."
  [context]
  (let [temporary-id (generate-id)
        channel (async/chan)]
    (swap! subscribers->notifications assoc temporary-id channel)
    (async/go-loop []
      (if-let [payload (<! channel)]
        (do (send-event channel context payload)
            (recur))
        ;; Channel closed: tear down the SSE connection.
        (remove-subscriber channel context)))
    (async/put! channel (sse-msg "eventsourceVerification"
                                 {:handshakeToken temporary-id}))))
(defn subscribe
  "Subscribes anonymous user to SSE connection. Transport channel with timeout set up
  will be created for pushing any new data to this connection."
  [context]
  (create-transport-channel context))

(defroutes routes
  [[["/notifications/chat"
     {:get [::subscribe (sse/start-event-stream subscribe)]}]]])

(def service {:env :prod
              ::bootstrap/routes routes
              ::bootstrap/resource-path "/public"
              ::bootstrap/type :jetty
              ::bootstrap/port 8081})
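For reference, a minimal ClojureScript client for this stream could look something like the sketch below. This is illustrative only and not from my actual project; only the eventsourceVerification event name comes from the server code above, everything else is assumed:

(ns chat-client.sse)

(defn connect! []
  (let [source (js/EventSource. "/notifications/chat")]
    ;; The handshake event emitted by create-transport-channel above.
    (.addEventListener source "eventsourceVerification"
                       (fn [e] (js/console.log "handshake token:" (.-data e))))
    ;; Events sent without an explicit type arrive here.
    (set! (.-onmessage source)
          (fn [e] (js/console.log "received:" (.-data e))))
    source))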
One "problem" i encountered is the default way how pedestal handles dropped SSE connections.
Because of the scheduled heartbeat job, it logs exception whenever connection is dropped and you didn't call end-event-stream context.
I wish there was a way to disable/tweak this behavior, or at least provide my own tear-down function which will be called whenever heartbeat job fails with EofException.
I have a re-frame-based UI and am trying to communicate with my server using re-frame-http-fx. Sending and responding seem to work. However, I can't figure out how to parse the JSON body into a Clojure map on the server.
Here is my handler.clj as minimal as I could get it:
(ns my.handler
  (:require [compojure.core :refer [GET POST defroutes]]
            [compojure.route :refer [resources]]
            [ring.util.response :refer [resource-response]]
            [ring.middleware.json :refer [wrap-json-response wrap-json-body]]))

(defn json-post [request]
  (let [body (:body request)]
    (prn body)
    body))

(defroutes routes
  (GET "/" [] (resource-response "index.html" {:root "public"}))
  (POST "/post" request json-post)
  (resources "/"))

(def handler (wrap-json-response (wrap-json-body routes {:keywords? true})))
As far as I understand, the wrap-json-body middleware should replace the request body by a parsed version (a map?).
However, the output I get from (prn body) in my json-post handler is something like this:
#object[org.httpkit.BytesInputStream 0xda8b162 "BytesInputStream[len=41]"]
If I try something like (prn (:title body)) I get nil (even though :title is present in the original map that was turned into the JSON request, and in both the request and response bodies).
The request and response contain the correct json. The request Content-Type is correctly set to application/json (sent by re-frame-http-fx). The length of the buffer (41) is also the correct body length as per request.
I am running out of things to try. Any ideas?
While investigating the issue further, I found the mistake that led to this behavior. It concerns the dev-handler from the re-frame template, which I conveniently omitted from my minimal example in the question.
I did not realize it was a problem, because the application seems to start fine even if you delete the entire definition of dev-handler from handler.clj; I assume this is because the server is initialized with handler in server.clj anyway (and the client does not fail fatally).
However, in project.clj of the re-frame template, the following is configured for figwheel:
:figwheel {:css-dirs ["resources/public/css"]
:ring-handler my.handler/dev-handler}
This leads to the middleware configured for handler not being applied to my requests, so the JSON body is never unwrapped. Changing either the definition of dev-handler (to be the same as handler in the question) or the figwheel configuration in project.clj (to point to handler instead of dev-handler) solves the problem.
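For example, dev-handler can simply be given the same JSON middleware as handler (a sketch, assuming routes and the ring.middleware.json wrappers from the question are in scope):

(def dev-handler
  (-> routes
      ;; Same wrapping as handler, so figwheel's :ring-handler also parses JSON bodies.
      (wrap-json-body {:keywords? true})
      wrap-json-response))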
If anybody knows the reasoning of different handlers in project.clj and server.clj, feel free to let me know.
Usually, Ring middleware is associated with use on the server side.
In this post I'll discuss how the concept of Ring middleware can be applied to HTTP clients.
A very typical server side example might look like this:
(def server
  (-> server-handler
      wrap-transit-params
      wrap-transit-response))
Desugared:
(def server (wrap-transit-response (wrap-transit-params server-handler)))
server is now a function that accepts a request hash-map. Middleware can operate on this data before it is passed to the handler. It can also operate on the response hash-map that the handler returns, or on both. It can even manipulate the execution of the handler itself.
Server
The above middleware could look like this in a very simplified way:
(1.) This one operates on the data before it reaches the actual handler (the request, i.e. incoming data), parsing the body and providing the result as the value of the :params key. It's called a pre-wrap.
(defn wrap-transit-params [handler]
  (fn [req]
    (handler (assoc req :params (from-transit-str (req :body))))))
(2.) This one manipulates the outgoing data, the response of the server. It's a post-wrap.
(defn wrap-transit-response [handler]
  (fn [req]
    (let [resp (handler req)]
      (update resp :body to-transit-str))))
With this, a server can receive and respond with data as transit.
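For completeness, the to-transit-str / from-transit-str helpers assumed above could be implemented roughly like this with cognitect/transit-clj (a sketch, not part of the original snippets):

(require '[cognitect.transit :as transit])
(import '[java.io ByteArrayInputStream ByteArrayOutputStream])

(defn to-transit-str
  "Serializes a Clojure value to a transit+json string."
  [x]
  (let [out (ByteArrayOutputStream.)]
    (transit/write (transit/writer out :json) x)
    (.toString out "UTF-8")))

(defn from-transit-str
  "Parses a transit+json string back into a Clojure value."
  [s]
  (transit/read (transit/reader (ByteArrayInputStream. (.getBytes s "UTF-8")) :json)))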
Client
The same behavior could be desirable for an http-client.
(def client
  (-> client-handler
      wrap-transit-params
      wrap-transit-response))
It turns out that the above middleware cannot easily be reused as client middleware, even though there is some symmetry.
For the client-side they should be implemented like this:
(defn wrap-transit-params [handler]
  (fn [req]
    (handler (assoc req :body (to-transit-str (req :params))))))

(defn wrap-transit-response [handler]
  (fn [req]
    (let [resp (handler req)]
      (update resp :body from-transit-str))))
Now it could be used like this:
(client {:url "http://..."
         :params {:one #{1 2 3}}})
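For concreteness, client-handler itself could be a thin wrapper over any HTTP client that speaks Ring-shaped maps. Here is a hypothetical sketch on top of clj-http (the library choice and the :post default are my assumptions, not from the post):

(require '[clj-http.client :as http-client])

;; A hypothetical client-handler: forwards the Ring-shaped request map to
;; clj-http and returns its Ring-shaped response map.
(defn client-handler [req]
  (http-client/request (merge {:method :post} req)))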
Since in reality there would be many more things involved, I think having reusable middleware for both the server and client side is utopian.
It remains open to discussion whether the concept generally makes sense for clients; I could not find any client-side middleware on the net.
The server-side middleware usually lives under the namespace ring.middleware... My concrete question here is whether a library providing client-side middleware should use this namespace or not.
Unless you believe that code that uses your new library could be written to be completely portable between client and server, I feel that using a different namespace will lead to less confusion. Find a fun name! :)
During my quest to learn Clojure I am currently facing problems with setting up websocket communication. After trying many different approaches, I ended up using aleph.
What I managed to achieve:
handling of a new client connecting
handling a client disconnection
talking from the server to clients at will
What I lack is a means to trigger a handler function whenever one of the connected clients sends something over the websocket.
My code so far:
(ns wonders7.core.handler
  (:require [compojure.core :refer :all]
            [compojure.route :as route]
            [ring.middleware.defaults :refer [wrap-defaults site-defaults]]
            [aleph.http :as http]
            [manifold.stream :as stream]
            [clojure.tools.logging :refer [info]]))
(defn uuid [] (str (java.util.UUID/randomUUID)))
(def clients (atom {}))
(defn ws-create-handler [req]
  (let [ws @(http/websocket-connection req)]
    (info "ws-create-handler")
    (stream/on-closed ws #(swap! clients dissoc ws))
    (swap! clients assoc ws (uuid))))
(defroutes app-routes
  (GET "/ws" [] ws-create-handler)
  (route/not-found "Not Found"))

(def app
  (wrap-defaults app-routes site-defaults))
(defn msg-to-client [[client-stream uuid]]
  (stream/put! client-stream "The server side says hello!"))

(defn msg-broadcast []
  (map #(msg-to-client %) @clients))

;(stream/take! (first (first @clients)))
;(http/start-server app {:port 8080})
I start the Netty server with the commented-out aleph http/start-server call. I also managed to fetch messages from the client via a manual stream/take! call (also commented out). What I need to figure out is how to trigger this take automatically whenever something comes in.
Thanks in advance for any help!
The function you're looking for is (manifold.stream/consume callback stream), which will invoke the callback for each message that comes off the stream.
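Applied to the code in the question, that could look roughly like the following sketch (handle-message is a hypothetical callback; the clients atom, uuid, info, http, and stream come from the question's namespace):

(defn handle-message
  "Hypothetical callback invoked for every message a client sends."
  [ws msg]
  (info "client" (get @clients ws) "sent:" msg))

(defn ws-create-handler [req]
  (let [ws @(http/websocket-connection req)]
    (stream/on-closed ws #(swap! clients dissoc ws))
    (swap! clients assoc ws (uuid))
    ;; Run handle-message for each incoming message on this connection.
    (stream/consume (partial handle-message ws) ws)))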
In this example the author uses receive-all and siphon from aleph (lamina) to accomplish a very similar task, which I'll roughly paraphrase as:
(let [chat (named-channel room (receive-all ch #(println "message: " %)))]
  (siphon chat ch))
The following program, when run from an überjar, exits at the end only when using the in-memory Datomic database; when connecting to the Datomic server, it hangs indefinitely rather than exiting the JVM:
(ns myns.example
  (:use [datomic.api :only [db q] :as d])
  (:gen-class))

;; WORKS: (def uri "datomic:mem://testdb")
(def uri "datomic:free://localhost:4334/testdb2")

(defn -main []
  (println 1)
  (when (d/create-database uri)
    (d/connect uri))
  (shutdown-agents)
  (println 2))
Run as:
lein uberjar && java -cp target/myns-0.1.0-SNAPSHOT-standalone.jar myns.example
Outputs:
1
2
and hangs. It only hangs if the DB doesn't exist when the program starts.
Anyone know why, or how to fix? This is with both datomic-free-0.8.4020.26 and datomic-free-0.8.3941.
UPDATE -- the above program does actually terminate, but it takes a very long time (> 1 minute). I'd like to know why.
shutdown-agents takes up to one minute to complete (assuming no agents are running an action).
This is due to the way java.util.concurrent cached thread pools work.
Use datomic.api/shutdown instead. From its docstring:

Usage: (shutdown shutdown-clojure)

Shut down all peer resources. This method should be called as part of clean shutdown of a JVM process. Will release all Connections, and, if shutdown-clojure is true, will release Clojure resources. Programs written in Clojure can set shutdown-clojure to false if they manage Clojure resources (e.g. agents) outside of Datomic; programs written in other JVM languages should typically set shutdown-clojure to true.

Added in Datomic Clojure version 0.8.3861
(ns myns.example
  (:require [datomic.api :as d])
  (:gen-class))

(def uri "datomic:free://localhost:4334/testdb2")

(defn -main []
  (d/create-database uri)
  (let [conn (d/connect uri)]
    (try
      ;; do something
      (finally (d/shutdown true)))))
I put together a simple socket server (see below). Currently, it is not capable of handling multiple/concurrent requests. How can I make the socket server more efficient -- i.e. capable of handling concurrent requests? Are there any Clojure constructs I can leverage? Thus far, I've thought of using either Java's NIO (instead of blocking IO) or Netty (as pointed out here).
(ns server.policy
  (:import (java.net ServerSocket SocketException)
           java.io.PrintWriter))

(defn create-socket
  "Creates a socket on given port."
  [port]
  (ServerSocket. port))

(defn get-writer
  "Create a socket file writer."
  [client]
  (PrintWriter. (.getOutputStream client)))

(defn listen-and-respond
  "Accepts connection and responds."
  [server-socket service]
  (let [client (.accept server-socket)
        socket-writer (get-writer client)]
    (service socket-writer)))

(defn policy-provider
  "Returns domain policy content."
  [socket-writer]
  (.print socket-writer "<content>This is a test</content>")
  (.flush socket-writer)
  (.close socket-writer))

(defn run-server
  [port]
  (let [server-socket (create-socket port)]
    (while (not (.isClosed server-socket))
      (listen-and-respond server-socket policy-provider))))
I have had success using Netty directly. However, if you want something that feels a little more like idiomatic Clojure code, take a look at the aleph library. It uses Netty internally, but results in much simpler code:
(use 'lamina.core 'aleph.tcp)

(defn echo-handler [channel client-info]
  (siphon channel channel))

(start-tcp-server echo-handler {:port 1234})
Also, keep in mind that sometimes you need to reference the lamina documentation in addition to the aleph documentation.
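As a rough sketch (not from the post above), the policy server from the question could be ported to this model like so; enqueue and close come from lamina.core, and I'm assuming a string can be enqueued directly as the outgoing frame:

(use 'lamina.core 'aleph.tcp)

(defn policy-handler [channel client-info]
  ;; Send the policy payload and close the connection, mirroring policy-provider.
  (enqueue channel "<content>This is a test</content>")
  (close channel))

(start-tcp-server policy-handler {:port 8080})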