First of all, I'll explain the scenario.
I have two nodes of WSO2 ESB (4.9.0) and two nodes of Apache ActiveMQ, running on four different machines. Currently, each ESB node points to one ActiveMQ node:
ESB1 -> AMQ1
ESB2 -> AMQ2
In this case, if one AMQ node goes down, only one node of ESB will be able to produce and consume messages.
Thinking about a way to solve this and implement high availability, I found a section in the ActiveMQ documentation that describes a URL syntax to configure failover.
So, in each ESB, I've configured a Message Processor attached to a Message Store, with the provider URL configured as the docs describe:
failover:(tcp://soamsg01:61616,tcp://soamsg02:61616)
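For reference, the relevant part of my Message Store definition looks roughly like this (the store, queue and connection-factory names here are just illustrative, not my exact ones):

<messageStore class="org.apache.synapse.message.store.impl.jms.JmsStore" name="AmqStore" xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">failover:(tcp://soamsg01:61616,tcp://soamsg02:61616)</parameter>
    <parameter name="store.jms.connection.factory">QueueConnectionFactory</parameter>
    <parameter name="store.jms.destination">AmqQueue</parameter>
    <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
</messageStore>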
With that, the message storing side is "partially" solved: each ESB always inserts messages into the same AMQ node, but if that node goes down, insertion fails over to the other one.
The main problem is with consumption. When the Message Processor starts, it randomly chooses one of the AMQ nodes to connect to, so the two ESB nodes may end up connected to the same AMQ node, leaving one of the queues never read.
Is there a way to solve this?
A solution is to configure ActiveMQ in master/slave mode. The goal is to have the different ActiveMQ nodes use the same persistent store: you can put KahaDB on a shared filesystem, or you can use a shared database; see http://activemq.apache.org/masterslave.html
All your ESB nodes will connect to the same ActiveMQ instance, and in case of failure a passive node will become active and all your ESB nodes will connect to it. The new ActiveMQ node will continue to serve the same queues and topics with the same content, because that data is stored in a single shared store.
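For example, a shared-file-system master/slave setup only needs every broker's activemq.xml to point KahaDB at the same shared directory (the path below is just a placeholder):

<persistenceAdapter>
    <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>

Whichever broker grabs the lock on that directory becomes the master; the others stay passive until the lock is released.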
As I understand it, with akka-cluster the frontend node receives the request and sends the job to one of the backend nodes to execute it. For debugging purposes, how can I find out which backend node is executing the job?
Does Akka also provide a UI where one can look at the current job executions happening on the different backends?
There is nothing in Akka Cluster that is specifically about work scheduling or frontend and backend nodes; that would just be one application out of many that you could build on top of Akka Cluster. As such, if you want a UI of some kind, you would build that for your application as well.
I am using PyMQI, which is fine with IBM MQ single-instance queue managers, but does anyone know if I can pass two IP addresses and ports in the connection string, and whether the MQ client under the hood handles IBM MQ multi-instance queue manager failover?
PyMQI sits on top of the underlying MQ client libraries. If you are using it with MQ v7.0 or higher, you can specify multiple connection names separated by commas. It will then try each one in order and loop back to the first if it cannot connect to any of them. Some settings related to how long and how often it will retry can be set in mqclient.ini.
The IBM Knowledge Center page "Automatic client reconnection" has good general information on the reconnect options. Everything there that relates to the C/C++ clients applies to PyMQI.
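As a rough sketch of what that looks like from PyMQI (the queue manager, channel, hosts and queue name below are placeholders):

import pymqi

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
# With an MQ v7.0+ client, connection names separated by commas are tried in order.
conn_info = 'mqhost1(1414),mqhost2(1414)'

qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, 'TEST.QUEUE')
queue.put('hello')
queue.close()
qmgr.disconnect()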
I am working on an integration project where we want to use JIRA tickets for business follow-up operations. The JIRA instance (externally hosted) is not always available, hence I want to use a guaranteed-delivery pattern. So the question: is it possible in WSO2 ESB to use existing connectors (JIRA) with a message processor?
Message processors and connectors are independent. This is what you have to do (you are on the right track at the moment, too).
Put your message into a message store. This can be the in-memory message store (which loses messages upon a server restart) or a persistent message store such as an ActiveMQ queue.
Then, configure a message processor to consume messages from this store. There are two types of message processors, namely forwarding and sampling processors. Here you need a sampling processor: https://docs.wso2.com/display/ESB490/Message+Processors
These consumed messages can be handed over to a sequence, and that sequence can use the JIRA connector to create the JIRA issue.
The problem I see with this approach is that sampling processors do not support guaranteed delivery (but the forwarding processor does). And, AFAIK, we cannot use connectors with forwarding processors, because we need to provide an endpoint in the forwarding processor's configuration.
You will understand the difference and the pros and cons of the two types when you go through the docs. As a workaround, I can suggest the following.
Create a proxy service which uses the JIRA connector to create the JIRA issue.
Then use the forwarding processor to send the consumed message to that proxy service.
I think that, with the above approach, you will be able to achieve guaranteed delivery.
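A minimal sketch of that workaround, assuming a persistent store named JiraStore and hypothetical proxy/endpoint names (the JIRA connector operations themselves go inside the proxy's inSequence):

<proxy name="JiraCreateProxy" transports="http https" xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <!-- jira.init / jira.createIssue connector operations go here -->
        </inSequence>
    </target>
</proxy>

<endpoint name="JiraProxyEndpoint" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="http://localhost:8280/services/JiraCreateProxy"/>
</endpoint>

<messageProcessor class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
        name="JiraForwarder" messageStore="JiraStore" targetEndpoint="JiraProxyEndpoint"
        xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>

Because the forwarding processor now has a concrete endpoint (the proxy), it keeps its guaranteed-delivery behaviour, while the proxy is free to invoke the connector.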
We have developed a custom JAX-WS application that essentially achieves two things.
Exposes a few web service methods to perform some functionality.
Utilizes org.quartz.Scheduler to schedule and execute some polling tasks that monitors and processes data on a few database tables. (The logic here is slightly complex, hence a custom application was chosen over the use of WSO2 DSS)
This application is uploaded to WSO2 AS 5.2.1 and runs quite seamlessly. However, I'm unsure what will happen if we have to cluster the AS application server. Logically, I would think that each node will have its own instance of the custom application running within it, and hence its own scheduler. Would this not increase the risk of processing the same record across both instances? Is my interpretation of the above scenario correct, from a clustering perspective?
Yes, you are correct. In a cluster of App Server nodes, each node will have its own instance of the application. In your case, each node will have a separate scheduler. You may consider using tasks from ESB 4.9.0, where WSO2 has added coordination support for working in a clustered environment.
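If you keep the custom org.quartz.Scheduler instead of moving to ESB tasks, note that Quartz itself can also coordinate across nodes through a JDBC-backed clustered job store, so a given trigger fires on only one node at a time. A minimal quartz.properties sketch (the data source name is an assumption; all nodes must share the same database with the Quartz tables created):

org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000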
I am new to WSO2 ESB clustering; actually, I'm still learning about it and I still don't understand the concept here.
In my case, I installed WSO2 ESB on 2 servers. My questions are:
1. Do the two WSO2 ESB nodes work as one application or as two separate applications?
2. If I configure one WSO2 ESB, will the other ESB have the same configuration?
3. If I configure a VFS proxy service in the ESB to poll a file from a specific directory, will it create a conflict when I poll a file? I think both of the ESBs will poll the same file.
Please enlighten me :) Thanks...
Check my answers inline.
Do the two WSO2 ESB nodes work as one application or as two separate applications?
In any cluster, applications or servers work together to provide high availability to the end users. It works as one single server (application).
If I configure one WSO2 ESB, will the other ESB have the same configuration?
Yes. You can achieve this with deployment synchronization. It will make sure all your changes are distributed evenly among the other nodes of the cluster.
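As a rough idea, SVN-based deployment synchronization is switched on in carbon.xml; the repository URL and credentials below are placeholders, and on worker nodes AutoCommit is typically set to false:

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2/depsync</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>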
If I configure a VFS proxy service in the ESB to poll a file from a specific directory, will it create a conflict when I poll a file? I think both of the ESBs will poll the same file.
No. Since only one server is active at a time, this should not be a problem.
You can learn more from the following link:
http://docs.wso2.org/display/CLUSTER420/Clustering+WSO2+Products
Do the two WSO2 ESB nodes work as one application or as two separate applications?
No, both ESBs are separate applications. The clustering is done mainly to ensure availability and scalability, so even if a member of the cluster fails, the others continue to operate.
If I configure one WSO2 ESB, will the other ESB have the same configuration?
Each ESB can get the same cluster configuration, but each will be separately identified by the load balancer (LB) fronting the cluster. Therefore, each member will get a different IP address, and they can even use different member ports to form the cluster.
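For illustration, this is roughly how the two members could be described in each node's axis2.xml using well-known-address (WKA) membership; the host names and ports are placeholders:

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="localMemberHost">esb1.example.com</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>esb1.example.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>esb2.example.com</hostName>
            <port>4100</port>
        </member>
    </members>
</clustering>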
If I configure a VFS proxy service in the ESB to poll a file from a specific directory, will it create a conflict when I poll a file? I think both of the ESBs will poll the same file.
Each request is handled independently by a separate ESB node, depending on the load-balancing algorithm of the fronting LB. When two threads poll the same file, there can be conflicts. Since the VFS transport deals with file operations, these can sometimes fail due to the unavailability of some resources, and the VFS transport has its own fault-handling mechanisms for such cases.
If you point both nodes at the same directory in a clustered environment, both proxy services will try to poll files and cause issues. Therefore, if you want to poll files in a clustered environment, the best practice is to use inbound endpoints [1]. But if it is necessary to use proxy services, you can apply the following property in your proxy so that the proxy service operates on only one server; thus there will not be any conflicts between the two proxy services. Please refer to [2] for further clarification.
<parameter name="transport.vfs.ClusterAware">true</parameter>
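For context, a minimal sketch of where that parameter would sit in a VFS proxy (the proxy name, file URI, content type and polling interval are placeholders):

<proxy name="FilePollingProxy" transports="vfs" xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="transport.vfs.FileURI">file:///data/in</parameter>
    <parameter name="transport.vfs.ContentType">text/plain</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <parameter name="transport.vfs.ActionAfterProcess">DELETE</parameter>
    <!-- Only one node in the cluster should pick up files from this directory -->
    <parameter name="transport.vfs.ClusterAware">true</parameter>
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
</proxy>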
[1] https://docs.wso2.com/display/EI620/File+Inbound+Protocol
[2] https://docs.wso2.com/display/ESB500/VFS+Transport