XA context for immutant listener - clojure

I am trying to use immutant to manage transactions across HornetQ and mysql. As I understand the docs, to do this I must use XA transactions, because I am running a standalone app and not inside an app server.
However, when I try to set :xa? on the context of my application, I get exceptions when trying to set up a listener.
(ns example
  (:require [immutant.messaging :as msg]))

(def capture (atom nil))

(let [ctx (msg/context :host "localhost" :xa? true)
      queue (msg/queue "example" :context ctx)]
  (reset! capture nil)
  (msg/listen queue (fn [m] (reset! capture m)))
  (msg/publish queue {:my :msg}))
This throws "java.lang.IllegalStateException: You can't create a child context from an XA context." from the (msg/listen) invocation. What am I doing wrong?

I think you've discovered a bug, but in your case, I think there's a workaround: you only need that :xa? true option if your queue is remote. You can still create an XA transaction binding your HornetQ actions to MySQL in your listener handler using the immutant.transactions/transaction macro. See the docs for an example.
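For illustration only (this sketch is not from the original answer): with a plain, non-XA context, a HornetQ publish and a MySQL write can be wrapped together in immutant.transactions/transaction inside the listener handler. The db-spec, the :messages table, and the "example.results" queue are made-up names, and a real setup needs an XA-capable MySQL datasource as described in the Immutant transactions guide.

(ns example.worker
  (:require [immutant.messaging :as msg]
            [immutant.transactions :refer [transaction]]
            [clojure.java.jdbc :as jdbc]))

;; Placeholder spec: a real setup needs an XA-capable MySQL datasource.
(def db-spec {:connection-uri "jdbc:mysql://localhost:3306/app?user=app&password=secret"})

(let [ctx     (msg/context :host "localhost") ; plain context, no :xa?
      queue   (msg/queue "example" :context ctx)
      results (msg/queue "example.results" :context ctx)]
  (msg/listen queue
              (fn [m]
                ;; the DB write and the publish commit or roll back together
                (transaction
                  (jdbc/insert! db-spec :messages {:body (pr-str m)})
                  (msg/publish results m)))))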

Related

How to listen to Slack messages using Clojure?

I am using clj-slack's start method to get the WebSocket URL and then passing it to gniazdo's connect method to listen on the WebSocket.
(require '[clj-slack.rtm :as slack])
(require '[gniazdo.core :as ws])

(def connection {:api-url "https://slack.com/api" :token "your token"})

(defn reset-conn [options]
  (def socket
    (ws/connect
      (:url (slack/start options))
      :on-receive #(prn 'received %))))

(reset-conn connection)
This code works, but it only reports presence information (who is active or away in Slack):
received"{\"type\":\"presence_change\",\"presence\":\"active\",\"user\":\"U7BCQHWHY\"}"
But I also want to listen to the messages.
I don't know why message events are not printed, even though the Slack API documentation states that RTM (Real Time Messaging) is for messaging sessions.
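As an illustration only (there is no answer in the original thread): one way to distinguish event types would be to parse each payload and react only to events whose "type" is "message". The cheshire dependency, the handle-event name, and the exact payload shape are assumptions; this builds on the code above.

(require '[cheshire.core :as json])

(defn handle-event [raw]
  (let [event (json/parse-string raw true)]
    ;; RTM delivers many event types (hello, presence_change, message, ...);
    ;; only print chat messages here
    (when (= "message" (:type event))
      (prn 'message (:user event) (:text event)))))

(defn reset-conn [options]
  (def socket
    (ws/connect
      (:url (slack/start options))
      :on-receive handle-event)))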

How to test http requests with cljs-http using lein doo phantom

While running my tests with lein doo phantom, I receive a -1 status response and an empty string as the body. However, when I run the test in the REPL, I am able to retrieve the request data with a 200 status response and the appropriate data in the body. Is this because a ManyToManyChannel is being returned first, as mentioned below, thus giving me the inappropriate response? If so, how could I account for this?
https://github.com/r0man/cljs-http#async-response-handling
I also thought maybe I need to use a timeout to wait for the request to complete. If so, how would I apply that appropriately with my existing code? It looks like cljs-http has :timeout as a parameter, but I haven't been able to get it to work appropriately (assuming this is the cause of the issue).
(deftest test-async
  (async done
    (go (let [response (<! (http/get "http://localhost:3000/api/user/1"
                                     {:with-credentials? false
                                      :query-params {"id" 1}}))]
          (is (= {:status 200}
                 (select-keys response [:status]))))
        (done))))
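On the timeout idea mentioned above (a sketch only, not the accepted fix below): rather than relying on a :timeout option, a common core.async pattern is to race the response channel against a timeout channel. This assumes alts! and timeout are referred from cljs.core.async and go from cljs.core.async.macros.

(deftest test-async-with-timeout
  (async done
    (go (let [response-ch (http/get "http://localhost:3000/api/user/1"
                                    {:with-credentials? false
                                     :query-params {"id" 1}})
              ;; race the response against a 5 second timeout
              [response port] (alts! [response-ch (timeout 5000)])]
          (if (= port response-ch)
            (is (= 200 (:status response)))
            (is false "request timed out")))
        (done))))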
Since you are running your tests under PhantomJS: PhantomJS disables cross-domain XHR access by default, and because your test JS runs on localhost, all external AJAX calls are denied.
You can set --web-security=false to allow your tests to make cross-domain AJAX calls.
In your project.clj, add this:
:doo {:paths {:phantom "phantomjs --web-security=false"}}
More info about PhantomJS command-line options:
http://phantomjs.org/api/command-line.html

Clojure establishing multiple database connections

We have a Clojure web application that is used by multiple projects (>20), each with multiple users logged in simultaneously. Every project has its own MySQL database. We have been trying to figure out a way to use one application instance to serve each user's requests from their own project's database.
The following script shows the principle behind our multiple connections and should be executable in the REPL (given a correct database setup).
(ns testmultiple.core
  (:require
    [clojure.java.jdbc :as jdbc]
    [compojure.core :refer [defroutes GET ANY routes context]]
    [conman.core :as conman]
    [mount.core :refer [defstate]]))

(def database-urls {:DB1 "jdbc:mysql://localhost:3306/DB1?user=DB1_user&password=DB1_password"
                    :DB2 "jdbc:mysql://localhost:3306/DB2?user=DB2_user&password=DB2_password"})

;; Connects to all databases in pool-specs
(defn connect!
  [pool-specs]
  (reduce merge (map (fn [pool-spec]
                       {(keyword (key pool-spec)) (conman/connect! {:jdbc-url (val pool-spec)})})
                     pool-specs)))

;; Disconnect from all databases in db-connections
(defn disconnect!
  [db-connections]
  (map (fn [db] (conman/disconnect! (val db))) db-connections))

;; Establish connections to all databases
;; and store connections in *dbs*
(defstate ^:dynamic *dbs*
  :start (connect! database-urls)
  :stop (disconnect! *dbs*))

;; Bind queries to the *db* dynamic variable, which is bound
;; to each client's database before executing queries.
;; The queries file defines the query get-user, which
;; returns a user by user id.
(def ^:dynamic *db* nil)
(conman/bind-connection *db* "sql/queries.sql")

(mount.core/start)

;; Define a function that executes in the current *db* binding
(defn getuser [id] (get-user {:id id}))

;; Works, the user with id 670 is returned from DB1
(with-bindings {#'*db* (:DB1 *dbs*)} (getuser 670))

;; Works, the user with id 670 is returned from DB2
(with-bindings {#'*db* (:DB2 *dbs*)} (getuser 670))
More specifically, the project is inferred from the request URL in the router. The following code shows the principle for the router. Accessing www.example.com/DB1/page1 and www.example.com/DB2/page2 will show page1 with data from DB1 and page2 with data from DB2, respectively.
(defn serve-page1 [] (str "page1" (getuser 670)))
(defn serve-page2 [] (str "page2" (getuser 670)))

(def home-routes
  (context "/:project" [project]
    (if (contains? *dbs* (keyword project))
      (routes
        (GET "/page1" []
          (with-bindings {#'*db* ((keyword project) *dbs*)}
            (serve-page1)))
        (GET "/page2" []
          (with-bindings {#'*db* ((keyword project) *dbs*)}
            (serve-page2))))
      (ANY "*" [] (str "Project not found")))))
This will be an application with considerable traffic. Notably, we are still in the development phase and have thus not been able to test this solution with more than a couple of databases running on localhost. Our questions are:
Is establishing multiple connections like this reasonable, stable and scalable?
Are there other better methods for the routing and dynamic binding of the project's database?
Is establishing multiple connections like this reasonable, stable and scalable?
Yes, this is a very reasonable approach. Very few database systems are limited by the number of outgoing connections. Both JDBC and Korma will handle this just fine in Clojure. You do need to be aware of which requests depend on which DB when building the monitoring and ops-related components, of course, so you can tell which DB is causing problems.
Are there other better methods for the routing and dynamic binding of the project's database?
My only suggestion would be to explicitly pass the DB to each function rather than using a binding, though this is a personal style opinion and your approach will clearly work.
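For illustration only (not part of the original answer), a minimal sketch of the explicit-passing style. It assumes get-user accepts a connection as its first argument, as plain HugSQL-generated functions do; with conman you may need to check which arities bind-connection generates.

;; Thread the connection through explicitly instead of binding *db*
(defn getuser [db id] (get-user db {:id id}))

(defn serve-page1 [db] (str "page1" (getuser db 670)))
(defn serve-page2 [db] (str "page2" (getuser db 670)))

(def home-routes
  (context "/:project" [project]
    (if-let [db (get *dbs* (keyword project))]
      (routes
        (GET "/page1" [] (serve-page1 db))
        (GET "/page2" [] (serve-page2 db)))
      (ANY "*" [] "Project not found"))))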

Fail to connect to queue with Immutant messaging

I have an instance of ActiveMQ running that I am attempting to connect to using Immutant. Currently the code for this connection looks like so:
(defn make-ctx
  []
  (log/debug "making context")
  (let [ctx (m/context :host (:host immutant-host) :port (:port immutant-host))]
    (log/debug "context created")
    ctx))

(defn make-listener
  [ctx]
  (let [listener (m/listen topic #(log/debug %) :context ctx)]
    (log/debug "listener created")
    listener))

(defn immutant-test
  []
  (log/debug "testing immutant messaging with ActiveMQ")
  (let [ctx (make-ctx)
        listener (make-listener ctx)]
    (Thread/sleep 15000)
    (.close listener)))
However, my code does not make it past the make-ctx function. When it attempts to create the context I get the error:
Exception in thread "main" java.lang.RuntimeException: javax.jms.JMSException: Failed to create session factory
at org.projectodd.wunderboss.messaging.jms.DestinationUtil.mightThrow(DestinationUtil.java:47)
at org.projectodd.wunderboss.messaging.jms.JMSMessagingSkeleton.createContext(JMSMessagingSkeleton.java:64)
at org.projectodd.wunderboss.messaging.jms.JMSMessagingSkeleton.createContext(JMSMessagingSkeleton.java:181)
at immutant.messaging$context.doInvoke(messaging.clj:84)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at jms_test.core$make_ctx.invoke(core.clj:24)
at jms_test.core$immutant_test.invoke(core.clj:37)
at jms_test.core$_main.invoke(core.clj:158)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at jms_test.core.main(Unknown Source)
Caused by: javax.jms.JMSException: Failed to create session factory
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:673)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:112)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:107)
at org.projectodd.wunderboss.messaging.jms.JMSMessagingSkeleton$1.call(JMSMessagingSkeleton.java:73)
at org.projectodd.wunderboss.messaging.jms.DestinationUtil.mightThrow(DestinationUtil.java:45)
... 10 more
Caused by: HornetQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=HQ119013: Timed out waiting to receive cluster topology. Group:null]
at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:946)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:669)
... 14 more
The immutant-host is defined as
(def immutant-host {:host "127.0.0.1" :port 61616})
I've been able to connect to my broker with the clamq library, and am able to send and receive messages with it. However, because the rest of the application is built with Immutant messaging, I'd like to stick with that library if possible to avoid having to support several messaging libraries.
Immutant is built on top of HornetQ, so it can only connect to HornetQ servers by default. This is because the JMS spec doesn't provide a wire protocol, so each implementation has its own. However, if the remote ActiveMQ is actually Artemis, you can use wunderboss-artemis to enable using it from Immutant (note that the article states you have to use an incremental build of Immutant - that is no longer true, you can use Immutant 2.1.0).
If it's not Artemis, it wouldn't be too difficult to implement a wunderboss-activemq adapter using the Artemis version as a guide.

How can I stop a specific agent in Clojure? When are their states garbage-collected?

If an agent is working through its queue in the background in Clojure, how can I stop it without stopping all agents?
When I am finished with an agent and I let it fall out of scope AND it finishes working on its queue, is it garbage collected along with its final state?
Manage agents as data, not threads
An agent is a data structure that is associated with a pool of threads and a queue of events. When events are available for agents, the threads in that pool take turns doing work on the agents until the thread pool gets full or the event (work) queue becomes empty.
An agent is garbage collected when the last reference to it goes out of scope.
If you bind a top-level var to it, it will stick around forever:
(def foo (agent {}))
If you bind it to a name in a function, it will be GCed after that function returns (and once it has finished its queued work):
(defn foo []
  (let [foo (agent {})]
    (send foo do-stuff)))   ; do-stuff is the update function applied to the agent's state
I don't see a direct way to cancel the work queue of an agent, though you may be able to hack this by setting a validator on the agent that always returns false. This would cause the agent to stop working and wait for its error to be cleared.
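A rough sketch of that validator hack (not from the original answer): because set-validator! immediately checks the agent's current state, a validator that literally always returns false would throw right away, so this version freezes on the current value to reject any further change.

(def a (agent 0))

;; Reject any state other than the current one; subsequent sends fail
;; validation and, with the default :fail error mode, put the agent
;; into a failed state until it is restarted.
(let [frozen @a]
  (set-validator! a (fn [new-state] (= new-state frozen))))

(send a inc)                              ; this action's result fails validation
(Thread/sleep 100)                        ; give the action a moment to run
(agent-error a)                           ; => the validation exception
(restart-agent a 0 :clear-actions true)   ; drop queued actions and resume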
If you want to kill an agent from code outside of the lexical scope where it was created, you will need to store the agent in some mutable structure, like an atom, so you can remove the reference to it and allow it to be GCed.
(def my-agent (atom nil))      ; a persistent name for a transient agent
(reset! my-agent (agent {}))   ; create the agent
(send @my-agent do-stuff)      ; use the agent
(reset! my-agent nil)          ; drop the reference so the agent can be GCed