I have 2 servers. The first server has WSO2 APIM, BAM, BPS and GREG installed. These products use a MySQL database. I would like to install APIM on the second server as well and utilize the same MySQL database. How can I successfully load balance APIM without having a front-end load balancer? Is this a feasible solution?
Any help will be greatly appreciated!
In order to properly use the computing resources of both physical servers, you have to use a load balancer. If you are planning to use a software load balancer, our recommendation is Nginx. Yes, you can utilize the same MySQL database server. When it comes to databases, there are some which can be shared between two API Manager instances, but you have to create a few non-shared databases as well.
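As an illustration, here is a minimal Nginx sketch for fronting two API Manager nodes; the hostnames, certificate paths, and the 9443 port are assumptions and should be adjusted to your environment (the gateway HTTP/HTTPS transports would be balanced the same way):

    # nginx.conf excerpt (sketch) - hostnames, ports and cert paths are placeholders
    upstream apim_nodes {
        server apim-node1.example.com:9443;   # APIM on server 1
        server apim-node2.example.com:9443;   # APIM on server 2
    }

    server {
        listen 443 ssl;
        server_name apim.example.com;
        ssl_certificate     /etc/nginx/ssl/apim.crt;   # assumed cert location
        ssl_certificate_key /etc/nginx/ssl/apim.key;

        location / {
            proxy_pass https://apim_nodes;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }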
For more details about load balancing WSO2 API Manager, I would like to point out the following articles.
WSO2 API Manager Scalable Deployment Patterns - Part 2
Clustering WSO2 ESB 4.9 with NginX
If you need further help regarding this issue please let me know.
Thanks,
Upul
We have a number of web services exposed over VPN to our partners for their consumption. I was wondering what would be the best way to make those web services highly available and scalable for their usage. One option could be an Apache server sitting in front of our web services, acting as a reverse proxy. But that would introduce a single point of failure too. Can we use a physical load balancer? I was not able to find any useful resources for planning out this activity. Any thoughts/ideas?
I have not worked with a physical load balancer, but Apache is a valid solution in most scenarios.
All of our clients (with critical back-end systems) use Apache as a load balancer without problems.
Most application servers also provide custom integration with Apache, like mod_jk for WebLogic or mod_cluster for JBoss.
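For reference, a minimal sketch of Apache acting as a load-balancing reverse proxy with mod_proxy_balancer (assuming Apache 2.4); the backend hostnames, ports, and the /services path are placeholders:

    # httpd.conf excerpt (sketch) - backend hosts, ports and paths are placeholders
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

    <Proxy "balancer://ws-cluster">
        BalancerMember "http://ws-node1.example.com:8080"
        BalancerMember "http://ws-node2.example.com:8080"
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPass        "/services" "balancer://ws-cluster/services"
    ProxyPassReverse "/services" "balancer://ws-cluster/services"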
Is there support for Azure integration to deploy and manage WSO2 products, specifically the Elastic Load Balancer? I am also curious whether jclouds and Apache Stratos support Azure as an IaaS.
Thanks
--Mahesh
jclouds Azure support is in the works and scheduled to be released with version 2.0. You can track progress here: https://issues.apache.org/jira/browse/JCLOUDS-664
Once jclouds starts supporting Azure, this will indeed add Azure support to Apache Stratos, WSO2 Private PaaS, and WSO2 App Factory, since these rely on jclouds for IaaS support.
For other WSO2 products, if you do not need the IaaS support and just want to deploy them on VMs in Azure (without auto-provisioning, autoscaling, etc.), you might be able to do so already. I have not tried Azure ELB specifically, but I have configured WSO2 products with various load balancers (WSO2 ELB, AWS ELB, nginx, etc.) and they worked. So Azure ELB might as well; if not, you can probably run nginx in Azure just fine too.
We are planning to try out WSO2 GREG and wanted to know: if we use an external hardware load balancer, do we still need to configure a manager + worker node setup, or can we start two standalone instances connecting to the same back end (i.e., an Oracle database) and load balance them via a front-end external load balancer?
Thanks
You can use a third-party load balancer for a WSO2 GREG cluster. You do not have to use the manager/worker configuration for Axis2 clustering (the subDomain property) as is done with WSO2 ELB.
You will still have to configure Axis2 clustering, but without the subDomain property. I would recommend using the WKA (well-known address) membership scheme and nominating a few nodes as WKA members in the cluster.
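As a rough sketch, the relevant clustering section of axis2.xml would look something like the following. The Hazelcast-based agent class shown here is the one used in newer Carbon releases (older GREG versions use the Tribes-based agent instead), and the domain name, hosts, and ports are placeholders:

    <!-- repository/conf/axis2/axis2.xml excerpt (sketch) - hosts/ports/domain are placeholders -->
    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.greg.domain</parameter>
        <parameter name="localMemberHost">10.0.0.11</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <members>
            <member>
                <hostName>10.0.0.11</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>10.0.0.12</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>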
I’m attempting to configure WSO2 ESB/GREG in a High Availability configuration, as follows:
Two GREG-ESB pairs, installed/configured on two separate Solaris servers.
Each server has an instance of GREG 4.5.3 (port offset 0) and ESB 4.7.0 (port offset 1) installed in separate installation directories.
GREG installations are configured to use Oracle with a JDBC DataSource, both connecting to the same database/schema, so adding something to one GREG is visible in the other.
ESB installations are configured with the remote GREGs from above (each on the same server) and point to the same Oracle database/schema for configuration/governance registry artifacts.
Tribes synchronization is enabled on all 4 installations.
We plan to use our own load balancer to round-robin traffic to either one ESB or the other, with the idea that if one of the Solaris servers is down, we still have full functionality on the other.
I couldn't find an example of such an HA configuration in the WSO2 documentation.
The questions are:
1. Did anyone attempt such a configuration (and did it work)?
2. Is it even possible?
You can refer to this documentation for information on deploying a cluster of Governance Registry instances to achieve load balancing and high availability.
To avoid the risk of system downtime due to failure of the ELB, you can implement a fail-proof ELB deployment. This documentation explains how to implement failover using two identical ELB setups running in active and passive modes respectively.
I want to make a cluster of Data Services Servers (DSS) and use an Enterprise Service Bus (ESB) as the load balancer. In this deployment, what is the purpose of having a manager DSS in the cluster, and if there is a manager, is it a single point of failure?
These are the references which I used for load balancing and DSS clustering:
Dynamic load balancing between 3 nodes
How to install WSO2 Carbon cluster management feature?
The dynamic load balancing mechanism in WSO2 ESB discovers the DSS members in an application group using a group communication framework and shares the load at runtime.
The load balancer is not bound or coupled to any cluster manager - it simply distributes the load among the nodes in the applicationDomain.
So, at runtime, the cluster manager doesn't create any single point of failure.
If you want, you can set up a DSS cluster even without a cluster manager and distribute the load among the nodes via the ESB.
The cluster manager is simply a component installed to manage your cluster.
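For illustration, this is roughly what a dynamic load-balance endpoint in the ESB configuration looks like (based on the ESB dynamic load balancing sample); the endpoint name and the applicationDomain value are placeholders and must match the clustering domain of the DSS nodes:

    <!-- synapse configuration excerpt (sketch) - endpoint name and domain are placeholders -->
    <endpoint name="dss-dynamic-lb">
        <dynamicLoadbalance failover="true"
                            algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
            <membershipHandler class="org.apache.synapse.core.axis2.Axis2LoadBalanceMembershipHandler">
                <property name="applicationDomain" value="wso2.dss.domain"/>
            </membershipHandler>
        </dynamicLoadbalance>
    </endpoint>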
This is an extension to Prabath's answer.
DSS can be configured to work in a cluster, so that all DSS nodes act as members of a single cluster. This facilitates sharing sessions among the nodes.
Alternatively, you can have all DSS nodes running in isolation (using the same configuration), fronted by a load balancer (LB). Unlike the previous approach, this method does not support sharing sessions between DSS nodes, and thus only supports stateless services.
WSO2 ESB can act as an LB, but having a single instance of the LB will make it a SPoF. The LB can be configured to run in a cluster as well.
I don't know what's behind the decision of using an ESB instead of an ELB for LB, but it's up to you which one to use.
The manager is not a single point of failure; it's just a way to manage the entire cluster from a single management console (with limitations), and it can be configured to be a worker at the same time.
Regarding the LB layer, you can use keepalived to avoid having a SPoF in the ESB acting as an LB, the same way it's done for WSO2 ELBs.
Take a look at Failover for ELB with keepalived.
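As a rough sketch of the keepalived side (the VIP, interface, and virtual_router_id are placeholders; the passive node would use state BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf (sketch) - VIP, interface and router id are placeholders
    vrrp_instance VI_1 {
        state MASTER            # BACKUP on the passive load balancer
        interface eth0
        virtual_router_id 51
        priority 100            # use a lower value on the passive node
        advert_int 1
        virtual_ipaddress {
            192.168.1.100       # floating VIP that clients connect to
        }
    }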