I am wondering whether we need to set up a separate ESB Analytics instance for each worker node. For example, we have one manager node and four worker nodes in a high availability cluster. In this case, do we need to set up five separate ESB Analytics nodes for all five ESB instances, or can a single analytics instance be used for all of them? Setting up five analytics instances would be difficult to manage. What is the best approach to setting up a high availability cluster using ESB 5.0? The WSO2 documentation does not provide any information about setting up a cluster for the ESB 5.0 runtime and analytics.
You do not need a separate analytics node for each worker node in the ESB runtime cluster. Since every node in the cluster can publish its statistics to the same receiver, a single analytics instance is enough for the whole cluster.
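As a rough sketch of how this looks in practice: each ESB node points its statistics publishers (the event publisher files under `repository/deployment/server/eventpublishers`) at the same analytics receiver. The hostname, ports, credentials, and stream name below are placeholders, not values from the original question:

```xml
<!-- Sketch: every ESB node publishes statistics to one shared analytics node.
     Host, ports, and credentials below are placeholders. -->
<eventPublisher name="MessageFlowStatisticsPublisher"
                statistics="disable" trace="disable"
                xmlns="http://wso2.org/carbon/eventpublisher">
  <from streamName="org.wso2.esb.analytics.stream.FlowEntry" version="1.0.0"/>
  <mapping customMapping="disable" type="wso2event"/>
  <to eventAdapterType="wso2event">
    <!-- All five ESB nodes use the same receiver URL -->
    <property name="receiverURL">tcp://analytics.example.com:7611</property>
    <property name="authenticatorURL">ssl://analytics.example.com:7711</property>
    <property name="protocol">thrift</property>
    <property name="publishingMode">non-blocking</property>
    <property name="username">admin</property>
    <property name="password">admin</property>
  </to>
</eventPublisher>
```

The same publisher configuration would be repeated on the manager and all four workers, so that statistics from the entire cluster end up in the single analytics instance.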
As per my current understanding and the documentation, the single node instances of API Manager v2.6.0 (All-in-one API Manager) can be set up to run in an active-active pattern. Is it not possible to run them in an active-passive pattern?
You can. It's completely up to the load balancer to decide.
I have to create and configure a two-node WSO2 EI cluster. In particular, I have to cluster the ESB profile and the MB profile.
I have some architectural doubts about this:
CLUSTERING ESB PROFILE DOUBTS:
I based my assumptions on this documentation: https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile
I found this section:
Note that some production environments do not support multicast.
However, if your environment supports multicast, there are no issues
in using this as your membership scheme
What could be the reason for an environment not supporting multicast (so that I can ask about possible issues with it)? Looking at the table (inside the previous link), it seems to me that the possible problems could be related to the following points:
All nodes should be in the same subnet
All nodes should be in the same multicast domain
Multicasting should not be blocked
Is obtaining this information from the system/network engineers enough to decide whether to proceed with the multicast option?
If I use multicast instead of WKA, do I need to perform the same configuration steps listed in the first deployment scenario (the WKA-based one) related to mounting the registry and creating/connecting to databases (as shown in the first documentation link)?
Does using multicast instead of WKA allow me to avoid stopping the service when I add a new node to the cluster?
CLUSTERING MB PROFILE:
From what I understand, the MB profile cluster can use only WKA as its membership scheme.
Does using WKA mean that I have to stop the service when I add a new node to the cluster?
So, in the end, can we consider the ESB cluster and the MB cluster to be two different clusters? Can the ESB cluster (if it is configured using multicast) accept a new node without stopping the service, while the MB cluster must be stopped to add a new one?
Many virtual private cloud networks, including Google Cloud Platform, Microsoft Azure, and Amazon Web Services, as well as the public Internet, do not support multicast. If you configure WSO2 products with multicast as the membership scheme on such a platform, the cluster will not work as expected. That is the main reason for the warning in the official documentation.
Depending on the platform's capabilities, you can choose any of the following membership schemes when configuring Hazelcast clustering in WSO2 products:
WKA
Multicast
AWS
Kubernetes
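For illustration, a multicast-based membership configuration goes in the `<clustering>` section of `<PRODUCT_HOME>/repository/conf/axis2/axis2.xml`. The domain name and group address below are placeholder values, not ones prescribed by the documentation:

```xml
<!-- Sketch of a multicast membership scheme in axis2.xml;
     domain and multicast group values are placeholders. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
  <parameter name="membershipScheme">multicast</parameter>
  <!-- All nodes that share this domain join the same cluster -->
  <parameter name="domain">wso2.esb.domain</parameter>
  <parameter name="mcastAddress">228.0.0.4</parameter>
  <parameter name="mcastPort">45564</parameter>
  <parameter name="mcastTTL">100</parameter>
  <parameter name="mcastTimeout">60</parameter>
</clustering>
```

Because membership is discovered over the multicast group, no member list needs to be maintained in the configuration, which is why new nodes can join without touching the existing nodes' files.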
With every option other than WKA, the membership scheme does not require you to list all the members' IPs in the configuration, so newly introduced nodes can join the cluster with ease.
Even with the WKA membership scheme, as long as at least one well-known member is active, you can join a new member to the cluster, then apply the configuration change and restart the other servers one by one without any service interruption.
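By contrast, a WKA configuration in `axis2.xml` does carry an explicit member list, but only the well-known members need to appear in it. The hostnames and ports below are placeholders:

```xml
<!-- Sketch of a WKA membership scheme in axis2.xml;
     hostnames and ports are placeholders. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
  <parameter name="membershipScheme">wka</parameter>
  <parameter name="domain">wso2.esb.domain</parameter>
  <parameter name="localMemberHost">10.0.0.11</parameter>
  <parameter name="localMemberPort">4000</parameter>
  <!-- Only the well-known members are listed here; additional nodes
       can join the cluster through them without being listed -->
  <members>
    <member>
      <hostName>10.0.0.11</hostName>
      <port>4000</port>
    </member>
    <member>
      <hostName>10.0.0.12</hostName>
      <port>4000</port>
    </member>
  </members>
</clustering>
```

A new node only needs its own `localMemberHost`/`localMemberPort` plus this same member list; it can join as long as one of the listed well-known members is running.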
Please note that with all of the above membership schemes, the rest of the product-specific configuration is still needed to successfully complete the cluster setup.
Regarding your concern about Clustering the MB Profile,
You can use any of the above-mentioned membership schemes that matches your deployment environment.
Regarding adding new members under WKA: you can maintain service availability and apply the changes to the servers one by one. You only need at least one WKA member running to introduce a new member to the cluster.
The WSO2 MB profile introduces cluster coordination through an RDBMS. With this feature, cluster coordination is no longer handled by the Hazelcast engine by default; if RDBMS coordination is disabled, the Hazelcast engine manages cluster coordination instead.
Please note that when RDBMS coordination is used, no server restarts are required.
I hope this was helpful.
I am trying to set up WSO2 ESB on two nodes that share the same DB, with a load balancer distributing the load across the two nodes.
I'm wondering if we really need to do clustering based on the WKA scheme across these two nodes.
In the ESB, synapse configurations are not stored in the DB; they are stored in the file system. So yes, Hazelcast-based clustering is required, since the artifacts are synced between the nodes using an SVN-based deployment synchronizer. When the manager node receives a new artifact (say, an API or a proxy service), it broadcasts a syncing message to all the worker nodes in the cluster, and the worker nodes then check out any new artifacts from SVN. You can read more about this here
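To make that concrete, the SVN-based deployment synchronizer is enabled in `<ESB_HOME>/repository/conf/carbon.xml`. On the manager node, `AutoCommit` is turned on so that new artifacts are pushed to SVN; the repository URL and credentials below are placeholders:

```xml
<!-- Sketch of the manager node's DepSync settings in carbon.xml;
     SVN URL and credentials are placeholders. -->
<DeploymentSynchronizer>
  <Enabled>true</Enabled>
  <!-- The manager node commits new artifacts to the SVN repository -->
  <AutoCommit>true</AutoCommit>
  <AutoCheckout>true</AutoCheckout>
  <RepositoryType>svn</RepositoryType>
  <SvnUrl>https://svn.example.com/wso2/esb-artifacts</SvnUrl>
  <SvnUser>svnuser</SvnUser>
  <SvnPassword>svnpass</SvnPassword>
  <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```

Worker nodes use the same block with `AutoCommit` set to `false`, so they only check out what the manager has committed.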
If I have WSO2 API Manager running and want to add an extra node (say I have 2 and want to add a third), it seems that API xml files are not propagated to the synapse-configs directory.
Is there any way to synchronize the apis to a new node?
Similarly, if I have WSO2 API Manager running on a shared database and delete the instance but keep the DB, is there a way to restore the APIs from the DB?
Thanks.
If I have WSO2 API Manager running and want to add an extra node (say
I have 2 and want to add a third), it seems that API xml files are not
propagated to the synapse-configs directory.
Is there any way to synchronize the apis to a new node?
The Deployment Synchronizer provides the capability to synchronize deployment artifacts across the nodes of a product cluster. For your cluster to work correctly, all nodes must have the same configuration.
All Carbon-based products, including WSO2 API Manager use Deployment Synchronizer (DepSync) to ensure the same status is maintained across all nodes in the cluster. It maintains a central repository of the <APIM_HOME>/repository/deployment/server folder, which is where deployment configurations are stored for all Carbon-based products, and uses that repository to synchronize the nodes.
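As a sketch of how the new node would pick up the API artifacts, DepSync is configured in `<APIM_HOME>/repository/conf/carbon.xml`; on a worker node the key difference is that `AutoCommit` is disabled so the node only checks out from the central repository. The SVN URL and credentials here are placeholders:

```xml
<!-- Sketch of a worker node's DepSync settings in carbon.xml;
     SVN URL and credentials are placeholders. -->
<DeploymentSynchronizer>
  <Enabled>true</Enabled>
  <!-- Workers only check out; the manager node is the one that commits -->
  <AutoCommit>false</AutoCommit>
  <AutoCheckout>true</AutoCheckout>
  <RepositoryType>svn</RepositoryType>
  <SvnUrl>https://svn.example.com/wso2/apim-artifacts</SvnUrl>
  <SvnUser>svnuser</SvnUser>
  <SvnPassword>svnpass</SvnPassword>
  <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```

With this in place, the third node checks out the shared `repository/deployment/server` contents, including the API XML files under `synapse-configs`, when it starts up and on subsequent sync events.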
Similarly, if I have WSO2 API Manager running on a shared database and
delete the instance but keep the DB, is there a way to restore the
APIs from the DB?
Since some of the parameters are kept in the file system, you can't restore the APIs from the DB alone.
I created a WSO2 ESB Cluster using WSO2 ELB using the reference
http://docs.wso2.org/pages/viewpage.action?pageId=26839403
Everything is fine, but I had a doubt about load balancing, and tried the same in WSO2 AS.
I deployed a sample JSP file with a sysout statement through the management console.
Now, when hitting the JSP application, the sysout output is printed only on the management node's terminal console. There is no change in the other two worker node consoles.
Did clustering actually happen? If so, how can I find out which worker node processed the request?
This can happen if you haven't configured a Deployment Synchronizer among the nodes. It is through this mechanism that the manager node shares artifacts with the worker nodes. If it's not enabled, the JSP page you have uploaded will only reside on the node you uploaded it to. You can find more details about the Deployment Synchronizer at http://docs.wso2.org/display/Cluster/Configuring+Deployment+Synchronizer