We are planning to try out WSO2 GREG and wanted to know: if we use an external hardware load balancer, do we still need a manager + worker node configuration, or can we start two standalone instances connecting to the same back end (i.e. an Oracle database) and load balance them via a front-end external load balancer?
Thanks
You can use a third-party load balancer with a WSO2 GREG cluster. You do not have to use the manager/worker configuration for Axis2 clustering (the subDomain property) as you would with WSO2 ELB.
You will still have to configure Axis2 clustering, just without the subDomain property. I would recommend using the wka (well-known address) membership scheme and nominating a few nodes as WKA members of the cluster.
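As a rough illustration, the clustering section of repository/conf/axis2.xml on each node would look something like the sketch below; the domain name and the member IPs/ports are placeholders for your environment.

    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
        <!-- Use well-known address (WKA) based membership discovery -->
        <parameter name="membershipScheme">wka</parameter>
        <!-- All GREG nodes join the same clustering domain -->
        <parameter name="domain">wso2.greg.domain</parameter>
        <parameter name="localMemberHost">10.0.0.11</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <!-- Nominate a few stable nodes as WKA members -->
        <members>
            <member>
                <hostName>10.0.0.11</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>10.0.0.12</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>

Note that there is no subDomain parameter here, since the external load balancer (not WSO2 ELB) is distributing the traffic.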
I'm trying to understand WSO2 APIM components and deployment scenarios, but the terminology is confusing/vague to me: clustering vs. distributed deployments, profiles, and port offsets.
Basically I'd like to deploy a minimal 5 node setup where:
Node # (Location): Purpose
Node 1 (DMZ): the GW (worker=True, right?) and KeyManager
Node 2 (DMZ): 2nd GW node (as above) for GW & KeyManager
Node 3 (non-DMZ): the Management Console, MySQL master
Node 4 (non-DMZ): the Publisher UI, TrafficManager, MySQL slave
Node 5 (DMZ): the Store
Questions:
Should I use -DportOffset=0 on all nodes?
What -Dprofile=?? do I need to use on each of the 5 nodes?
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming API traffic. What port is used there, 9443 or 9763?
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Please don't point me to the documentation, that's what is confusing me.
Running the Key Manager nodes and the Store node in the DMZ is not recommended, as they need DB access. If you are using multi-tenancy, you cannot host Gateway worker nodes in the DMZ either, due to DB access. What you can do is host those nodes in the LAN and place a reverse proxy in the DMZ to expose the endpoints of the Gateway and Store. If you do not use multi-tenancy, you can run Gateway worker nodes in the DMZ, since the DBs are not used there.
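As a minimal sketch of that reverse-proxy idea (assuming NGINX in the DMZ; the hostnames, ports and certificate paths are placeholders), the DMZ host simply forwards traffic to the Gateway running in the LAN:

    # nginx.conf on the DMZ reverse proxy (all names/ports are placeholders)
    server {
        listen 8243 ssl;
        ssl_certificate     /etc/nginx/ssl/gw.crt;
        ssl_certificate_key /etc/nginx/ssl/gw.key;
        location / {
            # Forward API traffic to the Gateway worker in the LAN
            proxy_pass https://gw-internal.example.com:8243;
        }
    }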
As you are running multiple WSO2 servers on a single machine, you need to use port offsets to avoid conflicts. The default port offset is 0, so you can run one WSO2 server with the default. For the other server you need to use a port offset of 1 (or any value other than 0). You can start the server by passing -DportOffset=1 at startup, but the cleaner way is to set the offset to 1 in <PRODUCT_HOME>/repository/conf/carbon.xml so that you do not need to provide -DportOffset at startup.
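For example (a sketch; the offset value is up to you), either of these achieves the same result:

    # Option 1: pass the offset at startup
    sh bin/wso2server.sh -DportOffset=1

    # Option 2: set it permanently in <PRODUCT_HOME>/repository/conf/carbon.xml
    #   <Ports>
    #       <Offset>1</Offset>
    #       ...
    #   </Ports>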
-Dprofile denotes the profile with which API Manager should start. If you start with -Dprofile=api-publisher, it starts only the front-end/back-end features relevant to the API Publisher. Running product profiles is generally recommended, as each node then loads only the features relevant to its role. You can use profiles in your deployment, since you are distributing several API Manager components (Gateway, Key Manager, Publisher, Store, Traffic Manager) across your nodes.
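For instance (a sketch; only api-publisher is named above, so verify the other profile names against your APIM version's documentation), a Gateway worker node and a Publisher node could be started as:

    # Start a node with only the Gateway worker features loaded
    sh bin/wso2server.sh -Dprofile=gateway-worker

    # Start a node with only the API Publisher features loaded
    sh bin/wso2server.sh -Dprofile=api-publisher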
I think you are referring to the Gateway worker nodes which serve API traffic. If so, they use the pass-through transport ports, 8280 (HTTP) and 8243 (HTTPS); requests can be served over either. 9443 and 9763 are the servlet ports; they are not used on Gateway worker nodes, only on the Gateway manager node for service calls.
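So the F5 virtual server for API traffic would front 8280/8243 on the two Gateway workers. A quick check against one worker directly might look like this (the hostname, API context and token are hypothetical):

    # Invoke a published API over the HTTPS pass-through port on a Gateway worker
    curl -k -H "Authorization: Bearer <access-token>" \
         https://gw1.example.com:8243/myapi/1.0.0/resource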
My recommendation is to revise this setup, as you are currently placing nodes that need DB access in the DMZ.
Should I use -DportOffset=0 on all nodes?
It depends on how you set up those nodes. If all of these servers run on the same node (machine), you must use different port offsets, because all the API Manager servers use the same default ports and there would be port conflicts otherwise.
What -Dprofile=?? do I need to use on each of the 5 nodes?
(-DportOffset only adjusts the ports used by API Manager so that there won't be any port conflicts between instances running on the same node.) For -Dprofile, use the profile that matches each node's role, e.g. gateway-worker on the Gateway nodes, as described in the previous answer.
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming api-traffic. What port is used there, 9443 or 9763?
For API request/response handling, neither: as noted in the previous answer, the load balancer should front the pass-through ports, 8280 (HTTP) and 8243 (HTTPS). 9763 and 9443 are the servlet ports, used for the management consoles and admin services.
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Yes, it's correct.
Further, you can contact WSO2 support for any issues you encounter.
I have 2 servers. The first server has WSO2 APIM, BAM, BPS and GREG installed. These products use a MySQL database. I would like to install APIM on the second server as well and use the same MySQL database. How can I successfully load balance APIM without a front-end load balancer? Is this a feasible solution?
Any help will be greatly appreciated!
In order to properly use the computing resources of both physical servers, you have to use a load balancer. If you are planning to use a software load balancer, our recommendation is NGINX. Yes, you can use the same MySQL database server. When it comes to the databases, some of them can be shared between the two API Manager instances, but you also have to create a few non-shared databases.
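As a minimal sketch (assuming NGINX; the hostnames, ports and certificate paths are placeholders), load balancing the two API Manager nodes could look like this:

    # nginx upstream for the two API Manager nodes (names/ports are placeholders)
    upstream apim_gateway {
        server apim-node1.example.com:8243;
        server apim-node2.example.com:8243;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/apim.crt;
        ssl_certificate_key /etc/nginx/ssl/apim.key;
        location / {
            proxy_pass https://apim_gateway;
        }
    }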
For more details about load balancing WSO2 API Manager, I would like to point out the following articles:
WSO2 API Manager Scalable Deployment Patterns - Part 2
Clustering WSO2 ESB 4.9 with NginX
If you need further help regarding this issue please let me know.
Thanks,
Upul
How to check if WSO2 ELB is working properly?
I have an ELB and 2 ESB nodes (1 manager and 1 worker) running, and I want to check whether the ELB is doing its work or not.
I want to check it using a SOAP request; should the SOAP endpoint point to the ELB or the ESB?
I have configured ELB according to what is there in WSO2's documentation.
Thanks.
The WSO2 Elastic Load Balancer has been discontinued. You can download NGINX Plus [1], the load balancer from NGINX, for which we provide support.
If you are currently using WSO2 ELB and need guidance, please visit our documentation, especially the Auto-Scaling in Load Balancer page.
To set up the WSO2 Elastic Load Balancer with one manager and one worker, please refer to document [2].
To check whether the WSO2 ELB is working properly, you can test it using the auto-scaling facilities in WSO2 ELB.
Please refer to document [2] for more information on autoscaling.
If you need to send a request to the ESB, you should point it at the ELB; the ELB will then forward it to one of the ESB nodes.
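As a quick sanity check (a sketch; it assumes the ELB exposes its default HTTP pass-through port 8280, and the service name and payload shown are purely illustrative), you can fire a SOAP request at the ELB host and confirm it gets served by the worker:

    # Send a SOAP request to the ELB host (service name and payload are illustrative)
    curl -v -X POST http://elb.example.com:8280/services/echo \
         -H "Content-Type: text/xml; charset=UTF-8" \
         -H "SOAPAction: urn:echoString" \
         -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
               <soapenv:Body>
                 <echoString><in>hello</in></echoString>
               </soapenv:Body>
             </soapenv:Envelope>'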
[1] https://www.nginx.com/resources/admin-guide/
[2] http://blog.afkham.org/2011/09/how-to-setup-wso2-elastic-load-balancer.html
I’m attempting to configure WSO2 ESB/GREG in a High Availability configuration, as follows:
Two GREG-ESB pairs, installed/configured on two separate Solaris servers.
Each server has an instance of GREG 4.5.3 (port offset 0) and ESB 4.7.0 (port offset 1) installed in separate installation directories.
GREG installations are configured to use Oracle via a JDBC DataSource, both connecting to the same database/schema, so anything added to one GREG is visible in the other (see the datasource sketch after this list).
ESB installations are configured with the remote GREGs above (each on the same server) and point to the same Oracle database/schema for configuration/governance registry artifacts.
Tribes synchronization is enabled on all 4 installations.
We plan to use our own Load Balancer to round-robin traffic to either one or the other ESB with the idea that if one of the Solaris servers is down, we still have the full functionality on the other.
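For reference, the shared-registry part of the setup described above essentially comes down to pointing both GREG instances at the same Oracle datasource in repository/conf/datasources/master-datasources.xml, roughly like this (the datasource name and connection details are placeholders):

    <datasource>
        <name>WSO2_REG_DB</name>
        <description>Shared registry database for both GREG nodes</description>
        <jndiConfig>
            <name>jdbc/WSO2RegDB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <!-- Both GREG nodes point to the same Oracle schema -->
                <url>jdbc:oracle:thin:@dbhost.example.com:1521/REGDB</url>
                <username>greg_user</username>
                <password>greg_password</password>
                <driverClassName>oracle.jdbc.OracleDriver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
            </configuration>
        </definition>
    </datasource>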
I couldn’t find an example of such an HA configuration in the WSO2 documentation.
The questions are:
- Did anyone attempt such a configuration (did it work)?
- Is it even possible?
You can refer to this documentation for information on deploying a cluster of Governance Registry instances to achieve load balancing and high availability.
To avoid the risk of system downtime due to failure of the ELB, you can implement a fail-proof ELB deployment. This documentation explains how to implement failover using two identical ELB setups running in active and passive modes respectively.
I want to make a cluster of Data Services Server (DSS) nodes and use an Enterprise Service Bus (ESB) as the load balancer. In this deployment, what is the purpose of having a manager DSS in the cluster, and if there is a manager, is it a single point of failure?
These are the references which I used for load balancing and DSS clustering:
Dynamic load balancing between 3 nodes
How to install WSO2 Carbon cluster management feature?
The dynamic load-balancing mechanism in WSO2 ESB discovers the DSS members in an application group using a group communication framework and shares the load among them at runtime.
The load balancer is not bound or coupled to any cluster manager; it simply distributes the load among the nodes in the applicationDomain (see the sketch below).
So, at runtime, the cluster manager does not create a single point of failure.
If you want, you can set up a DSS cluster even without a cluster manager and distribute the load among the nodes via the ESB.
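A rough sketch of what such a dynamic load-balancing endpoint looks like in the ESB configuration (based on the dynamic load-balancing sample referenced above; the application domain name is a placeholder that must match the clustering domain the DSS nodes join):

    <send>
        <endpoint>
            <!-- Members are discovered at runtime from the clustering domain -->
            <dynamicLoadbalance failover="true"
                    algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                <membershipHandler
                        class="org.apache.synapse.core.axis2.Axis2LoadBalanceMembershipHandler">
                    <!-- Must match the DSS nodes' clustering domain -->
                    <property name="applicationDomain" value="wso2.dss.domain"/>
                </membershipHandler>
            </dynamicLoadbalance>
        </endpoint>
    </send>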
The cluster manager is just a component installed to manage your cluster.
This is an extension to Prabath's answer.
DSS can be configured to work in a cluster, so that all DSS nodes act as members of a single cluster. This facilitates sharing sessions among the nodes.
Alternatively, you can have all DSS nodes running in isolation (using the same configuration), fronted by a load balancer (LB). Unlike the previous approach, this method does not support sharing sessions between DSS nodes and thus only supports stateless services.
WSO2 ESB can act as an LB, but having a single LB instance makes it a single point of failure (SPoF). The LB can be configured to run in a cluster as well.
I don't know what's behind the decision of using an ESB instead of an ELB for LB, but it's up to you which one to use.
The manager is not a single point of failure; it's just a way to manage the entire cluster from a single management console (with limitations), and it can be configured to act as a worker at the same time.
Regarding the LB layer, you can use keepalived to avoid having a SPoF in the ESB acting as the LB, the same way it's done for WSO2 ELBs.
Take a look at Failover for ELB with keepalived.
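For context, keepalived avoids the SPoF by floating a virtual IP between the two LB hosts using VRRP. A minimal sketch for the active node (the interface, router ID and virtual IP are placeholders for your environment):

    # /etc/keepalived/keepalived.conf on the active (MASTER) LB node
    vrrp_instance VI_1 {
        state MASTER            # use BACKUP with a lower priority on the standby node
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.168.1.100       # virtual IP that clients point to
        }
    }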