I am looking into clustering of WSO2 Identity Server (IS) here.
Does it really serve the failover case?
How many nodes are needed for a minimal load-balancing scenario?
As per the documentation, clustering is configured between versions 5.1 and 5.2. Isn't it possible to achieve this with the same version, or with much older versions?
The documentation explains how to create a two-node cluster in the active state (both nodes are active), fronted by a load balancer.
If you need a failover scenario, you can have one node in the active state (assuming one node can serve all of your requests) and one node in the passive state. If the active node fails, the load balancer has to direct the traffic to the passive node, so this will serve the failover scenario. You don't need to make any configuration changes at the IS level for this; it can be configured at the load balancer level.
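For illustration, here is a minimal sketch of such an active-passive setup, assuming Nginx is the load balancer; all hostnames, ports, and certificate paths are placeholders:

```nginx
# Hypothetical active-passive upstream for two IS nodes. The "backup"
# server only receives traffic when the primary is unreachable.
upstream wso2is {
    server is-node1.example.com:9443;         # active node
    server is-node2.example.com:9443 backup;  # passive node
}

server {
    listen 443 ssl;
    server_name is.example.com;
    ssl_certificate     /etc/nginx/ssl/is.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/is.key;

    location / {
        proxy_pass https://wso2is;
    }
}
```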
The documentation explains clustering for the IS 5.1.0 and 5.2.0 versions, but not between these two versions. A cluster can be created only with server nodes of a single version, not with nodes of different versions.
I have a doubt about how to correctly set up a WSO2 MB cluster that meets the requirements of high availability. I am following this official guide: https://docs.wso2.com/display/EI650/Clustering+the+Message+Broker+Profile#ClusteringtheMessageBrokerProfile-Testingthecluster
So I will have a two-node WSO2 MB profile cluster. Now my doubt relates to the high-availability concept (basically: if a single node is not working, the cluster should still work).
In my two-node cluster, each node runs on its own server with its own IP address, something like this:
NODE 1 with the IP: XXX.XXX.XXX.1
NODE 2 with the IP: XXX.XXX.XXX.2
So let's suppose that I want to publish a message to a queue defined on this cluster. I suppose that I can send the message to either of these two nodes (correct me if I am making a wrong assertion).
If this is the situation, how can I guarantee the high-availability requirement? Can I simply put both my nodes under a load balancer, so that if one node doesn't work, the request is directed to the other?
Is that a correct way to handle this situation?
Yes. If both EI instances run the MB profile with clustering, and the two servers have cluster coordination configured at the JDBC or Hazelcast level, then with the above-mentioned approach you will guarantee high availability of the service.
You can additionally take the following precautions to make sure both servers do not go down for the same reason at once:
- Run the two servers on two separate instances, rather than running both with a port offset on the same instance.
- Configure the load balancer to work in Active-Passive or Active-Active mode. If you are deployed on a cloud provider like AWS, you can add configuration to redeploy an instance if its health check fails for a given amount of time (a hedged sketch follows this list).
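As a sketch of the AWS-specific point (every name, ARN, and value below is hypothetical), the health checks and automatic replacement could be wired up roughly like this:

```sh
# Hypothetical AWS CLI sketch: tune the target group's health checks,
# then tell the Auto Scaling group to replace instances that fail them.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/mb-nodes/abc123 \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --unhealthy-threshold-count 3

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name mb-cluster-asg \
  --health-check-type ELB \
  --health-check-grace-period 300
```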
Does anyone have any advice on how to minimize cross-AZ traffic for inter-pod communication when running Kubernetes in AWS? I want to keep my microservices pinned to the same availability zone, so that microservice-a, which resides in az-a, transmits its payload to microservice-b, also in az-a.
I know you can pin pods to a label and keep the traffic in the same AZ, but in addition to minimizing the cross-AZ traffic I also want to maintain HA by deploying to multiple AZs.
In case you're willing to use alpha features, you could use inter-pod affinity or node affinity rules to implement such a behaviour without losing high availability (see the sketch below).
You'll find the details in the official documentation.
Without those features, you could just have one deployment pinned to one node and a second deployment pinned to another node, plus one service which selects pods from both deployments; example code can be found here.
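As a concrete sketch of the affinity approach mentioned above (all names, labels, and the image are invented for illustration), a deployment for microservice-b could prefer, but not require, landing in the same zone as microservice-a, so its replicas can still spread across AZs:

```yaml
# Hypothetical Deployment: soft pod affinity keeps microservice-b pods
# near microservice-a pods in the same zone, without hard-pinning them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: microservice-b
  template:
    metadata:
      labels:
        app: microservice-b
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - microservice-a
              # older clusters used failure-domain.beta.kubernetes.io/zone
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: microservice-b
        image: example/microservice-b:latest  # placeholder image
```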
How is high availability achieved with WSO2 ESB clustering?
Suppose there are 2 clustered nodes and a load balancer. What happens when a node that is handling a few HTTP requests goes down? What will happen to those requests? Will they be lost, or, because of the clustering, will the pending requests be moved to the other node in the cluster?
What needs to be done to achieve this? Can you please suggest?
Thanks
HA is handled by the load balancer, not by the ESB. Basically, if an ESB node fails, the load balancer or the invoking client has to handle that situation. If neither the LB nor the client is implemented to handle such a failure scenario, there will be message loss: pending requests on the failed node are not moved to the other node. The LB has to route new requests to the other available node.
WSO2 recommends using Nginx as the default load balancer (a hedged sketch follows). ESB clustering documentation can be found in the clustering docs.
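A minimal sketch of what that could look like with Nginx, assuming both ESB nodes are active (hostnames and ports are placeholders):

```nginx
# Hypothetical active-active upstream: after 2 failed attempts a node is
# taken out of rotation for 30s and new requests go to the other node.
# In-flight requests on a crashed node are still lost.
upstream wso2esb {
    server esb-node1.example.com:8280 max_fails=2 fail_timeout=30s;
    server esb-node2.example.com:8280 max_fails=2 fail_timeout=30s;
}

server {
    listen 8280;
    location / {
        proxy_pass http://wso2esb;
        # retry on connection errors/timeouts so a request that never
        # reached a node can be re-sent to the healthy one
        proxy_next_upstream error timeout;
    }
}
```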
I’m attempting to configure WSO2 ESB/GREG in a High Availability configuration, as follows:
Two GREG-ESB pairs, installed/configured on two separate Solaris servers.
Each server has an instance of GREG 4.5.3 (port offset 0) and ESB 4.7.0 (port offset 1) installed in separate installation directories.
GREG installations are configured to use Oracle via a JDBC DataSource, both connecting to the same database/schema, so anything added to one GREG is visible in the other.
ESB installations are configured with remote GREGs from above (each on the same server) and pointing to the same ORACLE database/schema for configuration/governance registry artifacts.
Tribes synchronization is enabled on all 4 installations.
We plan to use our own load balancer to round-robin traffic between the two ESBs, the idea being that if one of the Solaris servers is down, we still have full functionality on the other.
I couldn’t find an example of such HA configuration in WSO2 documentation.
The questions are:
- Did anyone attempt such a configuration (and did it work)?
- Is it even possible?
You can refer to this documentation for information on deploying a cluster of Governance Registry instances to achieve load balancing and high availability.
To avoid the risk of system downtime due to failure of the ELB, you can implement a fail-proof ELB deployment. This documentation explains how to implement failover using two identical ELB setups running in active and passive modes respectively.
I want to make a cluster of Data Services Servers (DSS) and use an Enterprise Service Bus (ESB) as the load balancer. In this deployment, what is the purpose of having a manager DSS in the cluster, and if there is a manager, is it a single point of failure?
These are the references which I used for load balancing and DSS clustering:
Dynamic load balancing between 3 nodes
How to install WSO2 Carbon cluster management feature?
The dynamic load balancing mechanism in WSO2 ESB discovers the DSS members in an application group using a group communication framework and shares the load at runtime.
The load balancer is not bound or coupled to any cluster manager; it simply distributes the load among the nodes in the applicationDomain.
So, at runtime, the cluster manager doesn't create any single point of failure.
If you want, you can set up a DSS cluster even without a cluster manager and distribute the load among the nodes via the ESB.
The cluster manager is a component installed only to manage your cluster.
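To make that concrete, here is a hedged sketch of a Synapse dynamic load-balance endpoint in the ESB, along the lines of the linked dynamic load balancing guide (the application domain value is a placeholder):

```xml
<!-- Hypothetical send mediator in the ESB: DSS members that join the
     "wso2.dss.domain" group are discovered at runtime, so no node list
     is hard-coded and no cluster manager sits in the request path. -->
<send>
  <endpoint>
    <dynamicLoadbalance failover="true"
                        algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
      <membershipHandler class="org.apache.synapse.core.axis2.Axis2LoadBalanceMembershipHandler">
        <property name="applicationDomain" value="wso2.dss.domain"/>
      </membershipHandler>
    </dynamicLoadbalance>
  </endpoint>
</send>
```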
This is an extension to Prabath's answer.
DSS can be configured to work in a cluster, so that all DSS nodes act as members of a single cluster. This facilitates sharing sessions among the nodes.
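A hedged sketch of what that involves: in each DSS node's repository/conf/axis2.xml, the clustering section is enabled so all nodes join the same membership domain (the values shown are illustrative):

```xml
<!-- Illustrative clustering block from axis2.xml: Tribes-based
     membership makes the DSS nodes form one cluster. -->
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">multicast</parameter>
    <parameter name="domain">wso2.dss.domain</parameter>
    <parameter name="mcastAddress">228.0.0.4</parameter>
    <parameter name="mcastPort">45564</parameter>
</clustering>
```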
Alternatively, you can have all DSS nodes running in isolation (using the same configuration), fronted by a load balancer (LB). Unlike the previous approach, this method does not support sharing sessions between DSS nodes, and thus it only supports stateless services.
WSO2 ESB can act as an LB, but having a single instance of the LB will make it a SPoF; the LB can be configured to run in a cluster as well.
I don't know what's behind the decision to use an ESB instead of an ELB for load balancing, but it's up to you which one to use.
The manager is not a single point of failure; it's just a way to manage the entire cluster from a single management console (with limitations), and it can be configured to be a worker at the same time.
Regarding the LB layer, you can use keepalived to avoid having a SPoF in the ESB acting as an LB, the same way it's done for WSO2 ELBs.
Take a look at Failover for ELB with keepalived.
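For reference, a minimal keepalived sketch for such an LB pair (interface, addresses, and password are placeholders): both LB hosts run keepalived and share a virtual IP, and the backup claims the IP when the master stops sending VRRP advertisements.

```
# Hypothetical /etc/keepalived/keepalived.conf on the active LB node.
# The peer uses state BACKUP and a lower priority; clients always hit
# the virtual IP, so an LB failover is transparent to them.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass wso2lb
    }
    virtual_ipaddress {
        192.0.2.100
    }
}
```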