Ring Middleware for the Client Side?

Ring middleware is usually associated with server-side use.
In this post I'll discuss how the concept of Ring middleware can be applied to HTTP clients.
A typical server-side example might look like this:
(def server
  (-> server-handler
      wrap-transit-params
      wrap-transit-response))
Desugared:
(def server (wrap-transit-response (wrap-transit-params server-handler)))
server is now a function that accepts a request hash-map. Middleware can operate on this data before it is sent to the handler. It can also operate on the response hash-map that the handler returns, or on both. It can even manipulate the execution of the handler itself.
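To make the last point concrete, here is an illustrative sketch (not from the original post; wrap-require-body is a made-up name) of middleware that controls execution itself by short-circuiting before the handler ever runs:
;; Hypothetical middleware, for illustration only: skip the handler entirely
;; when the request has no body.
(defn wrap-require-body [handler]
  (fn [req]
    (if (:body req)
      (handler req)
      {:status 400 :body "missing request body"})))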
Server
In a very simplified form, the above middleware could look like this:
(1.) This one operates on the data before it reaches the actual handler (the request, i.e. incoming data), parsing the body and providing the result as the value of the :params key. It's called a pre-wrap.
(defn wrap-transit-params [handler]
  (fn [req]
    (handler (assoc req :params (from-transit-str (req :body))))))
(2.) This one manipulates the outgoing data, i.e. the server's response. It's a post-wrap.
(defn wrap-transit-response [handler]
  (fn [req]
    (let [resp (handler req)]
      (update resp :body to-transit-str))))
With this, a server can receive and respond with data as transit.
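The helpers to-transit-str and from-transit-str are not shown in the post; a minimal sketch, assuming com.cognitect/transit-clj is on the classpath, might be:
;; Sketch of the transit helpers assumed above (uses cognitect.transit).
(require '[cognitect.transit :as transit])
(import '(java.io ByteArrayInputStream ByteArrayOutputStream))

(defn to-transit-str [x]
  (let [out (ByteArrayOutputStream.)]
    (transit/write (transit/writer out :json) x)
    (.toString out "UTF-8")))

(defn from-transit-str [s]
  (transit/read (transit/reader (ByteArrayInputStream. (.getBytes s "UTF-8")) :json)))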
Client
The same behavior could be desirable for an HTTP client.
(def client
  (-> client-handler
      wrap-transit-params
      wrap-transit-response))
It turns out that the above middleware cannot easily be reused as client middleware, even though there is some symmetry.
For the client side, they would have to be implemented like this:
(defn wrap-transit-params [handler]
  (fn [req]
    (handler (assoc req :body (to-transit-str (req :params))))))

(defn wrap-transit-response [handler]
  (fn [req]
    (let [resp (handler req)]
      (update resp :body from-transit-str))))
Now it could be used like this:
(client {:url "http://..."
         :params {:one #{1 2 3}}})
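Here client-handler stands in for whatever function actually performs the HTTP call; as a rough sketch (assuming clj-http, which the post does not mention) it could be:
;; Sketch only: a client-handler delegating to clj-http; any HTTP client that
;; accepts and returns Ring-style maps would work.
(require '[clj-http.client :as http])

(defn client-handler [req]
  (http/request (merge {:method :post} req)))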
Since in reality there would be many more things involved, I think having reusable middleware for both the server and the client side is utopian.
It remains up for discussion, though, whether the concept generally makes sense for clients. I could not find any client-side middleware on the net.
Server-side middleware usually lives under the namespace ring.middleware... My concrete question here is whether a library providing client-side middleware should use this namespace or not.

Unless you believe that code that uses your new library could be written to be completely portable between client and server, I feel that using a different namespace will lead to less confusion. Find a fun name! :)

Related

Parse JSON body from HTTP request (with ring and re-frame-http-fx)

I have a re-frame-based UI and am trying to communicate with my server using re-frame-http-fx. Sending and responding seem to work. However, I can't figure out how to parse the JSON body into a Clojure map on the server.
Here is my handler.clj as minimal as I could get it:
(ns my.handler
  (:require [compojure.core :refer [GET POST defroutes]]
            [compojure.route :refer [resources]]
            [ring.util.response :refer [resource-response]]
            [ring.middleware.json :refer [wrap-json-response wrap-json-body]]))

(defn json-post [request]
  (let [body (:body request)]
    (prn body)
    body))

(defroutes routes
  (GET "/" [] (resource-response "index.html" {:root "public"}))
  (POST "/post" request json-post)
  (resources "/"))

(def handler (wrap-json-response (wrap-json-body routes {:keywords? true})))
As far as I understand, the wrap-json-body middleware should replace the request body by a parsed version (a map?).
However, the output I get from (prn body) in my json-post handler is something like this:
#object[org.httpkit.BytesInputStream 0xda8b162 "BytesInputStream[len=41]"]
If I try something like (prn (:title body)) I get nil (although the original map-turned-json-request contains :title, as well as both the request and response body).
The request and response contain the correct json. The request Content-Type is correctly set to application/json (sent by re-frame-http-fx). The length of the buffer (41) is also the correct body length as per request.
I am running out of things to try. Any ideas?
While investigating the issue further, I found my error that led to this effect. It concerns the dev-handler from the re-frame template that I conveniently omitted from my minimal example in the question.
I did not realize it was a problem, because the application seems to start fine even if you delete the entire definition of dev-handler from handler.clj, I assume because the server is initialized with handler in server.clj anyway (and the client does not fail fatally).
However, in project.clj of the re-frame template, the following is configured for figwheel:
:figwheel {:css-dirs ["resources/public/css"]
           :ring-handler my.handler/dev-handler}
This leads to the middleware configured for handler not being applied to my requests, so the JSON body is never unwrapped. Changing either the definition of dev-handler (to be the same as handler in the question) or the figwheel configuration in project.clj (to point to handler instead of dev-handler) solves the problem.
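In other words, something like this for dev-handler (a sketch; the extra dev-only routes from the re-frame template are omitted here):
;; Sketch: give dev-handler the same JSON middleware stack as handler.
(def dev-handler (wrap-json-response (wrap-json-body routes {:keywords? true})))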
If anybody knows the reasoning of different handlers in project.clj and server.clj, feel free to let me know.

Server push of data from Clojure to ClojureScript

I'm writing an application server in Clojure that will use ClojureScript on the client.
I'd like to find an efficient, idiomatic way to push data from the server to the client as realtime events, ideally using some combination of:
http-kit
core.async
Ring
(But I'm open to other possibilities)
Can anyone provide a good example / approach to doing this?
I prefer to use aleph; here is the wiki. You can simply use the wrap-ring-handler function to wrap existing handlers.
For the 'push' functionality, the most useful part is aleph's async handler. It builds on top of Netty and is not a one-connection-per-thread model, so the server side does not need to worry about the TCP connection count.
Some implementation details (a rough sketch follows below):
The server side uses the async handler and holds all the client connections (channels).
If there is no new data within 60 seconds (for example), send an empty response.
If the server side has a response, send it.
The client side can simply send a normal HTTP request to the server.
When the client gets the response, it handles the response body and then re-sends an HTTP request.
Please check the client and all the proxy servers to set the correct timeout value.
There are more ways here: http://en.wikipedia.org/wiki/Push_technology
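A rough sketch of the server side of this long-polling loop, assuming the older aleph async-handler API (start-http-server hands the handler a response channel, and lamina's enqueue sends the response map), might look like:
;; Rough long-polling sketch; assumes aleph 0.3.x-style async handlers.
(use 'lamina.core 'aleph.http)

(def waiting-clients (atom #{}))   ; response channels of parked requests

(defn poll-handler
  "Park the response channel instead of responding right away."
  [response-ch request]
  (swap! waiting-clients conj response-ch))

(defn push!
  "Call whenever new data arrives: respond to every parked client."
  [data]
  (let [clients @waiting-clients]
    (reset! waiting-clients #{})
    (doseq [ch clients]
      (enqueue ch {:status 200 :body data}))))

(start-http-server poll-handler {:port 8080})
The 60-second empty-response timeout and the client's re-polling loop are left out of this sketch.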
I've been trying out the library Chord recently, and I really like it.
It provides a small core.async wrapper around the Websocket support in http-kit.
From the github page:
On the Server
(:require [chord.http-kit :refer [with-channel]]
          [clojure.core.async :refer [<! >! put! close! go]])

(defn your-handler [req]
  (with-channel req ws-ch
    (go
      (let [{:keys [message]} (<! ws-ch)]
        (println "Message received:" message)
        (>! ws-ch "Hello client from server!")
        (close! ws-ch)))))
On the Client
(:require [chord.client :refer [ws-ch]]
          [cljs.core.async :refer [<! >! put! close!]])
(:require-macros [cljs.core.async.macros :refer [go]])

(go
  (let [ws (<! (ws-ch "ws://localhost:3000/ws"))]
    (>! ws "Hello server from client!")))
I think it's still in early stages though - it doesn't handle disconnections yet.
I'm developing a project right now where I had the exact same requirement. I used the pedestal service in combination with core.async to implement SSE, and it's working really well.
Unfortunately, I can't open-source this work now, but basically I did something like the snippets below, only more complicated because of authentication. (It's not particularly easy to do authentication in SSE from the browser, because you can't pass any custom headers in your new EventSource(SOME_URI); call.)
So the snippets:
(ns chat-service.service
  (:require [clojure.set :as set]
            [clojure.core.async :as async :refer [<!! >!! <! >!]]
            [cheshire.core :as json]
            [io.pedestal.service.http :as bootstrap]
            [io.pedestal.service.log :as log]
            [io.pedestal.service.http.route :as route]
            [io.pedestal.service.http.sse :as sse]
            [io.pedestal.service.http.route.definition :refer [defroutes]]))

(def ^{:private true :doc "Formatting opts"} json-opts {:date-format "MMM dd, yyyy HH:mm:ss Z"})

(def ^{:private true :doc "Users to notification channels"} subscribers->notifications (atom {}))

;; private helper functions
(def ^:private generate-id #(.toString (java.util.UUID/randomUUID)))

(defn- sse-msg [event msg-data]
  {:event event :msg msg-data})
;; service functions
(defn- remove-subscriber
  "Removes transport channel from atom subscribers->notifications and tears down
  SSE connection."
  [transport-channel context]
  (let [subscriber (get (set/map-invert @subscribers->notifications) transport-channel)]
    (log/info :msg (str "Removing SSE connection for subscriber with ID : " subscriber))
    (swap! subscribers->notifications dissoc subscriber)
    (sse/end-event-stream context)))
(defn send-event
  "Sends updates via SSE connection, takes also transport channel to close it
  in case of the exception."
  [transport-channel context {:keys [event msg]}]
  (try
    (log/info :msg "calling event sending fn")
    (sse/send-event context event (json/generate-string msg json-opts))
    (catch java.io.IOException ioe
      (async/close! transport-channel))))
(defn create-transport-channel
  "Creates transport channel with receiving end pushing updates to SSE connection.
  Associates this transport channel in atom subscribers->notifications under random
  generated UUID."
  [context]
  (let [temporary-id (generate-id)
        channel (async/chan)]
    (swap! subscribers->notifications assoc temporary-id channel)
    (async/go-loop []
      (if-let [payload (<! channel)]
        (do
          (send-event channel context payload)
          (recur))
        (remove-subscriber channel context)))
    (async/put! channel (sse-msg "eventsourceVerification"
                                 {:handshakeToken temporary-id}))))
(defn subscribe
  "Subscribes anonymous user to SSE connection. Transport channel with timeout set up
  will be created for pushing any new data to this connection."
  [context]
  (create-transport-channel context))

(defroutes routes
  [[["/notifications/chat"
     {:get [::subscribe (sse/start-event-stream subscribe)]}]]])

(def service {:env :prod
              ::bootstrap/routes routes
              ::bootstrap/resource-path "/public"
              ::bootstrap/type :jetty
              ::bootstrap/port 8081})
One "problem" i encountered is the default way how pedestal handles dropped SSE connections.
Because of the scheduled heartbeat job, it logs exception whenever connection is dropped and you didn't call end-event-stream context.
I wish there was a way to disable/tweak this behavior, or at least provide my own tear-down function which will be called whenever heartbeat job fails with EofException.

Reading Ring request body when already read

My question is, how can I idiomatically read the body of a Ring request if it has already been read?
Here's the background. I'm writing an error handler for a Ring app. When an error occurs, I want to log the error, including all relevant information that I might need to reproduce and fix the error. One important piece of information is the body of the request. However, the statefulness of the :body value (because it is a type of java.io.InputStream object) causes problems.
Specifically, what happens is that some middleware (the ring.middleware.json/wrap-json-body middleware in my case) does a slurp on the body InputStream object, which changes the internal state of the object such that future calls to slurp return an empty string. Thus, the [content of the] body is effectively lost from the request map.
The only solution I can think of is to preemptively copy the body InputStream object before the body can be read, just in case I might need it later. I don't like this approach because it seems clumsy to do some work on every request just in case there might be an error later. Is there a better approach?
I have a lib that sucks up the body, replaces it with a stream with identical contents, and stores the original so that it can be deflated later.
groundhog
This is not adequate for indefinitely open streams, and is a bad idea if the body is the upload of some large object. But it helps for testing, and recreating error conditions as a part of the debugging process.
If all you need is a duplicate of the stream, you can use the tee-stream function from groundhog as the basis for your own middleware.
I adopted @noisesmith's basic approach with a few modifications, as shown below. Each of these functions can be used as Ring middleware.
(defn with-request-copy
  "Transparently store a copy of the request in the given atom.
  Blocks until the entire body is read from the request. The request
  stored in the atom (which is also the request passed to the handler)
  will have a body that is a fresh (and resettable) ByteArrayInputStream
  object."
  [handler atom]
  (fn [{orig-body :body :as request}]
    (let [{body :stream} (groundhog/tee-stream orig-body)
          request-copy (assoc request :body body)]
      (reset! atom request-copy)
      (handler request-copy))))
(defn wrap-error-page
  "In the event of an exception, do something with the exception
  (e.g. report it using an exception handling service) before
  returning a blank 500 response. The `handle-exception` function
  takes two arguments: the exception and the request (which has a
  ready-to-slurp body)."
  [handler handle-exception]
  ;; Note that, as a result of this top-level approach to
  ;; error-handling, the request map sent to Rollbar will lack any
  ;; information added to it by one of the middleware layers.
  (let [request-copy (atom nil)
        handler (with-request-copy handler request-copy)]
    (fn [request]
      (try
        (handler request)
        (catch Throwable e
          (.reset (:body @request-copy))
          ;; You may also want to wrap this line in a try/catch block.
          (handle-exception e @request-copy)
          {:status 500})))))
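Usage might then look roughly like this (the route and reporting function names are placeholders):
;; Illustrative usage; `routes` and the reporting fn are placeholders.
(def app
  (wrap-error-page routes
                   (fn [e request]
                     (println "request failed:" (:uri request) (.getMessage e)))))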
I think you're stuck with some sort of "keep a copy around just in case" strategy. Unfortunately, it looks like :body on the request must be an InputStream and nothing else (on the response it can be a String or other things, which is why I mention it).
Sketch: In a very early middleware, wrap the :body InputStream in an InputStream that resets itself on close (example). Not all InputStreams can be reset, so you may need to do some copying here. Once wrapped, the stream can be re-read on close, and you're good. There's a memory risk here if you have giant requests.
Update: here's a half-baked attempt, inspired in part by tee-stream in groundhog.
(require '[clojure.java.io :refer [copy]])

(defn wrap-resettable-body
  [handler]
  (fn [request]
    (let [orig-body (:body request)
          baos (java.io.ByteArrayOutputStream.)
          _ (copy orig-body baos)
          ba (.toByteArray baos)
          bais (java.io.ByteArrayInputStream. ba)
          ;; bais doesn't need to be closed, and supports resetting, so wrap it
          ;; in a delegating proxy that calls its reset when closed.
          resettable (proxy [java.io.InputStream] []
                       (available [] (.available bais))
                       (close [] (.reset bais))
                       (mark [read-limit] (.mark bais read-limit))
                       (markSupported [] (.markSupported bais))
                       ;; exercise to reader: proxy with overloaded methods...
                       ;; (read [] (.read bais))
                       (read [b off len] (.read bais b off len))
                       (reset [] (.reset bais))
                       (skip [n] (.skip bais n)))
          updated-req (assoc request :body resettable)]
      (handler updated-req))))
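When using it, apply wrap-resettable-body as the outermost middleware so it replaces the body before anything else reads it; roughly (names illustrative):
;; Apply wrap-resettable-body outermost so it wraps the original body first.
(def app
  (-> routes
      (wrap-json-body {:keywords? true})
      wrap-resettable-body))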

Lamina Batched Queue

I'm trying to write a web service that takes requests, puts them into a queue, and then processes them in batches of 2. The response can be sent straight away, and I'm trying to use Lamina as follows (though I'm not sure it's the right choice)...
(def ch (channel))

(def loop-forever
  (comp doall repeatedly))

(defn consumer []
  (loop-forever
    (fn []
      (process-batch
        #(read-channel ch)
        #(read-channel ch)))))

(defn handler [req]
  (enqueue ch req)
  {:status 200
   :body "ok"})
But this doesn't work... :( I've been through all the Lamina docs but can't get my head around how to use these channels. Can anyone confirm if Lamina supports this kind of behaviour and advise on a possible solution?
The point of lamina is that you don't want to loop forever: you want lamina's scheduler to use a thread from its pool to do work for you whenever you have enough data to do work on. So instead of using the very, very low-level read-channel function, use receive to register a callback once, or (more often) receive-all to register a callback for every time a channel receives data. For example:
(def ch (lamina/channel))

(lamina/receive-all (lamina/partition* 2 ch)
                    (partial apply process-batch))

(defn handler [req]
  (lamina/enqueue ch req)
  {:status 200
   :body "ok"})

How to Manage Multiple Connections?

I put together a simple socket server (see below). Currently, it is not capable of handling multiple/concurrent requests. How can I make the socket server more efficient -- i.e. capable of handling concurrent requests? Are there any clojure constructs I can leverage? Thus far, I've thought of using either java's NIO (instead of IO) or netty (as pointed out here).
(ns server.policy
  (:import
    (java.net ServerSocket SocketException)
    java.io.PrintWriter))

(defn create-socket
  "Creates a socket on given port."
  [port]
  (ServerSocket. port))

(defn get-writer
  "Create a socket file writer."
  [client]
  (PrintWriter. (.getOutputStream client)))

(defn listen-and-respond
  "Accepts connection and responds."
  [server-socket service]
  (let [client (.accept server-socket)
        socket-writer (get-writer client)]
    (service socket-writer)))

(defn policy-provider
  "Returns domain policy content."
  [socket-writer]
  (.print socket-writer "<content>This is a test</content>")
  (.flush socket-writer)
  (.close socket-writer))

(defn run-server
  [port]
  (let [server-socket (create-socket port)]
    (while (not (.isClosed server-socket))
      (listen-and-respond server-socket policy-provider))))
I have had success using Netty directly. However, if you want something that feels a little more like idiomatic Clojure code, take a look at the aleph library. It uses Netty internally, but results in much simpler code:
(use 'lamina.core 'aleph.tcp)

(defn echo-handler [channel client-info]
  (siphon channel channel))

(start-tcp-server echo-handler {:port 1234})
Also, keep in mind that sometimes you need to reference the lamina documentation in addition to the aleph documentation.