How to configure Immutant to implement a Quartz cluster? - clojure

I want to start several web servers, each with its own Quartz instance, so that jobs are not interrupted when a server restarts.
I found that Immutant can configure singleton jobs, but when I run the server I found that the scheduler uses a non-clustered configuration, and I do not know how to change that.

Immutant has built-in support for singleton jobs, but it requires running your application in a WildFly cluster, and it does not use Quartz's clustering functionality.
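For reference, a minimal sketch of a singleton job, assuming Immutant 2's immutant.scheduling API; the job body and id are placeholders, and outside a WildFly cluster the :singleton option has no effect:

    (ns app.jobs
      (:require [immutant.scheduling :as sched]))

    (defn my-job []
      (println "doing periodic work"))

    ;; :singleton is only honored when the app is deployed to a WildFly
    ;; cluster; standalone, the job simply runs on every node.
    (sched/schedule my-job
                    (-> (sched/every 5 :minutes)
                        (assoc :singleton true
                               :id :my-job)))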
Quartz clustering requires a JDBC JobStore, and Immutant does not currently expose a way to set a JobStore for the scheduler instance. Quartz's clustering works by using the database to lock the job; it would not be difficult to implement something similar yourself by scheduling the same job on every node in the cluster and using an external store as a synchronization mechanism, so that the job runs on only one node at a time.
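A sketch of that do-it-yourself approach, assuming a shared PostgreSQL database and clojure.java.jdbc; the job body, lock id, and connection details are placeholders. Every node schedules the same job, but only the node that wins a session-level advisory lock actually does the work:

    (ns app.singleton-job
      (:require [immutant.scheduling :as sched]
                [clojure.java.jdbc :as jdbc]))

    (def db {:connection-uri "jdbc:postgresql://db-host/jobs"})

    (defn run-exclusively
      "Runs f only if this node wins the cluster-wide advisory lock."
      [lock-id f]
      (jdbc/with-db-connection [conn db]
        (let [[{locked :pg_try_advisory_lock}]
              (jdbc/query conn ["select pg_try_advisory_lock(?)" lock-id])]
          (when locked
            (try
              (f)
              (finally
                (jdbc/query conn ["select pg_advisory_unlock(?)" lock-id])))))))

    ;; Schedule the same job on every node; the lock decides who runs it.
    (sched/schedule #(run-exclusively 42 (fn [] (println "polling...")))
                    (sched/every 5 :minutes))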
If you truly need the clustering implementation in Quartz, or need more control over scheduler creation than Immutant provides, please file an issue against Immutant to have those options exposed. In the interim, you could take a look at Quartzite; I believe it exposes more options for scheduler creation.

Related

akka-cluster: Which remote actor is executing the job

As I understand it, with akka-cluster a frontend node receives the request and sends the job to one of the backend nodes for execution. For debugging purposes, how can I find out which backend node is executing the job?
Does Akka also provide some UI where one can watch the current job executions happening on the different backends?
There is nothing in Akka Cluster that is specifically about work scheduling or frontend and backend nodes; that is just one application out of many that you could possibly build on top of Akka Cluster. As such, if you want a UI of some kind, you would build that for your application as well.

Jetty - Un-deploy specific application

I have a single instance of Jetty that runs 5 web-applications.
I want to un-deploy one of these applications. My first thought was to delete the context from $JETTY_HOME/contexts; it works, but it does not clear the whole application (I saw that some scheduled tasks are still running).
So I need another way to un-deploy the application, or some kind of clean-up after removing the context.
Thanks in advance.
Deleting the war file or the associated context XML file is generally enough to clear the context. You can also call .stop() via JMX if you have that module enabled in your Jetty instance.
In regards to Quartz, Jetty itself does not interact with scheduled jobs; that has to be handled by the webapp itself signaling Quartz that it is exiting. Alternatively, you can implement a ServletContextListener to manage the Quartz jobs, which will handle the removal of the context gracefully.
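If the webapp owns its Quartz scheduler, a listener along these lines will stop Quartz when the context is undeployed. This is a Clojure sketch assuming the default scheduler; the class name is a placeholder, the namespace must be AOT-compiled, and the class registered as a <listener> in web.xml:

    (ns app.quartz-listener
      (:import [org.quartz.impl StdSchedulerFactory])
      (:gen-class
       :name app.QuartzShutdownListener
       :implements [javax.servlet.ServletContextListener]))

    (defn -contextInitialized [this event]
      ;; Jobs are scheduled elsewhere; nothing to do on startup.
      )

    (defn -contextDestroyed [this event]
      ;; Wait for running jobs to finish, then stop the scheduler so no
      ;; Quartz threads outlive the undeployed webapp.
      (.shutdown (StdSchedulerFactory/getDefaultScheduler) true))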
Related: Can you undeploy applications from JETTY?

WSO2 - Clustering AS on Custom Polling Applications

We have developed a custom JAX-WS application that essentially achieves two things.
Exposes a few web service methods to perform some functionality.
Utilizes org.quartz.Scheduler to schedule and execute some polling tasks that monitor and process data in a few database tables. (The logic here is slightly complex, hence a custom application was chosen over the use of WSO2 DSS.)
This application is uploaded on WSO2 AS 5.2.1 and runs quite seamlessly. However, I'm unsure what will happen if we have to cluster the AS application server. Logically, I would think that each node will have its own instance of the custom application running within it, and hence its own scheduler. Would this not increase the risk of processing the same record across both instances? Is my interpretation of the above scenario correct, from a clustering perspective?
Yes, you are correct. In a cluster of App Server nodes, each node will have its own instance of the application, so in your case each node will have a separate scheduler. You may consider using tasks from ESB 4.9.0, where WSO2 has added coordination support for working in a clustered environment.
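If you keep the scheduler inside the custom application, another option is Quartz's own clustering: point every node's scheduler at a shared JDBC JobStore so that each trigger fires on exactly one node. A minimal quartz.properties sketch; the data source name, driver, URL, and credentials are placeholders, and the QRTZ_ tables must first be created with the DDL scripts shipped with Quartz:

    org.quartz.scheduler.instanceName = ClusteredScheduler
    org.quartz.scheduler.instanceId = AUTO
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
    org.quartz.jobStore.isClustered = true
    org.quartz.jobStore.clusterCheckinInterval = 20000
    org.quartz.jobStore.dataSource = quartzDS
    org.quartz.dataSource.quartzDS.driver = org.postgresql.Driver
    org.quartz.dataSource.quartzDS.URL = jdbc:postgresql://db-host/quartz
    org.quartz.dataSource.quartzDS.user = quartz
    org.quartz.dataSource.quartzDS.password = secret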

GridEngine Or Akka

I am building an application that relies on some processing performed by a third party product (TPP). The distributors of this TPP recommend deploying it on GridEngine (for parallelisation etc...)
The interface to this TPP will be a REST API built on Scala & Akka.
Assuming the processing is something akin to handing off work to a database or similar TPP, would I be able to achieve this parallelisation using Akka and its load balancing, routing, cluster, and remote actor features instead of GridEngine altogether?
My understanding of GridEngine is that it provides cluster management tools: it manages load among slaves, and you hand it a job to complete and it allocates the job to an available slave. Is this all achievable using just Akka? Would there be any specific reason to go with GridEngine?
Thanks

How to run distributed tasks on worker nodes in a Clojure app?

On the Python/Django stack, we were used to using Celery along with RabbitMQ, and everything was easily done.
However, when we tried doing the same thing in Clojure land, the closest we could find was Langohr.
In our current naive implementation we have a worker system which has three core parts.
Publisher module
Subscriber module
Task module
We can start the system on any node in either publisher or subscriber mode.
They are connected to a RabbitMQ server.
They share one worker_queue.
We create tasks in the Task module; then, when we want to run a task on a subscriber, we send an expression calling the task function, in EDN format, to the Subscriber, which decodes it and runs the actual task using eval.
Now, is using eval safe? We are not running expressions generated by users or any third-party system. Initially we were planning to use JSON for the payload message, but EDN gave us a lot more flexibility, and it works like a charm as of now.
Also, is there a better way to do this?
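As for eval: clojure.edn/read-string never evaluates code, so one common alternative is to send a data description of the task and dispatch against a whitelist instead of eval'ing a form. A minimal sketch, where the task functions are hypothetical stand-ins for the Task module:

    (ns worker.dispatch
      (:require [clojure.edn :as edn]))

    ;; Hypothetical tasks standing in for the Task module.
    (defn send-email [to subject]
      (println "sending" subject "to" to))

    (defn resize-image [path width]
      (println "resizing" path "to" width "px"))

    ;; Whitelist of runnable tasks, keyed by plain keywords instead of
    ;; code forms, so no arbitrary expression can ever be executed.
    (def tasks
      {:send-email   send-email
       :resize-image resize-image})

    (defn handle-message
      "Decodes an EDN payload like {:task :send-email :args [...]} and
      runs it only if the task is registered in the whitelist."
      [payload]
      (let [{:keys [task args]} (edn/read-string payload)]
        (if-let [f (tasks task)]
          (apply f args)
          (throw (ex-info "unknown task" {:task task})))))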
Depending on your needs (and your team), I highly suggest the Storm project. You will get distributed, fault-tolerant, realtime computation, and it is really easy to use.
Another nice thing about Storm is that it supports a plethora of options as the data source for topologies, for example Apache Kafka, RabbitMQ, Kestrel, or MongoDB. If you aren't satisfied with those, you can write your own spout.
It also has a web interface to see what is happening in your topology.