I am trying to set up a clustered environment for WSO2 API Manager. The environment has no need for an external store. I am looking to start with the smallest number of nodes and JVMs that can still scale as the number of API requests grows.
Having looked at the WSO2 documentation on Clustering WSO2 API Manager, and specifically the "Store and Publisher components in a single server node" model, I have some questions on this deployment model:
Where is the Gateway Manager deployed?
I understand the Publisher and Store are on the same server node. Can they be run in the same JVM? If so, would you use the default profile, which also starts up the KM and Gateway, or something else?
(Apologies, but I can't post the image due to my low reputation. I would have thought the image of the model would have helped.)
Yes - API Store and Publisher will be running in the same JVM. As there is no profile for Store & Publisher (see [1] for available profiles), we need to start API Manager with the default profile. And yes, it will start the KM & Gateway components as well, but you can block (not expose) the gateway ports. Regarding the Gateway Manager, one gateway node can act as both manager and worker in this deployment pattern.
[1] https://docs.wso2.com/display/AM180/Product+Profiles
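As a minimal sketch of that idea (assuming the default gateway passthrough ports 8280/8243 and an iptables-based firewall; adjust to your environment):

# Start the Store/Publisher node with the default profile
# (no -Dprofile option, so the KM and Gateway components also start).
sh $APIM_HOME/bin/wso2server.sh start

# Block the gateway (Synapse passthrough) ports so they are not exposed.
# 8280/8243 are the defaults; adjust if you use a port offset.
iptables -A INPUT -p tcp --dport 8280 -j DROP
iptables -A INPUT -p tcp --dport 8243 -j DROP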
As per the design, the Publisher is a subset of the Store. So if you start with the api-store profile, you get the Publisher as well. In this case you can start the server with the following option.
-Dprofile=api-store
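For example, assuming APIM_HOME points at the installation directory:

# Start API Manager with only the Store (and therefore Publisher) components
sh $APIM_HOME/bin/wso2server.sh -Dprofile=api-store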
Related
I have been trying to set up an active-active deployment for WSO2 API Manager, as elaborated here:
https://docs.wso2.com/display/AM210/Configuring+an+Active-Active+Deployment#Linux-Mac
A few observations:
The setup appears to use two different nodes, with all components deployed on each node.
The setup indicates that the Publisher should be pointed to one of the two nodes on both nodes. If that is the case, let's say node-1 (the publisher node) goes down; how will the second active instance help?
It recommends using NFS for content synchronization. NFS becomes a single point of failure in that case. Why is content synchronization needed, though? Is it only for advanced Siddhi-query-based throttling policies?
Finally, if I do two independent, all-components-in-one setups of API Manager with a shared database and content synchronization using rsync/unison, but no throttling data publishing, what are the downsides?
Is this kind of setup fit for Active-Passive?
Thanks
If you use rsync or any deployment synchronization mechanism that is one-way, it becomes a single point of failure. In most use cases, API publishing happens at development time, and this is actually a limitation.
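To illustrate the one-way nature (hostnames and paths below are placeholders), such a sync is typically a job on the node where APIs are published that pushes artifacts to the other node:

# Hypothetical one-way sync from the publishing node (node-1) to node-2.
# If node-1 goes down, nothing pushes new artifacts to node-2, which is why
# this approach is a single point of failure for publishing.
rsync -avz --delete \
  /opt/wso2am/repository/deployment/server/ \
  wso2user@node-2:/opt/wso2am/repository/deployment/server/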
That's why we use NFS or a file-share mechanism instead. You can point the Publisher at localhost and write the Synapse file to the file system, which is then shared between the two nodes. When you publish an API, a Synapse artifact is created and deployed on the gateway node (in your case, one of the two nodes). You can find a sample file in the APIM_HOME/repository/deployment/server/synapse-configs/default/api location.
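As a rough sketch of the file-share approach (the NFS server name and export path are assumptions), both nodes mount the same deployment directory so a published artifact appears on both:

# On each APIM node: mount a shared export over the deployment directory.
# nfs-server and /exports/wso2am-deployment are placeholders.
mount -t nfs nfs-server:/exports/wso2am-deployment \
  /opt/wso2am/repository/deployment/server

# After publishing an API, the generated Synapse artifact should be visible
# on both nodes:
ls /opt/wso2am/repository/deployment/server/synapse-configs/default/api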
If you disable throttling data publishing, i.e. advanced throttling, your APIs can be accessed without any limitation; there is simply no limit. But burst control and backend throttling will still apply.
Yes, this fits an Active-Passive setup. You can control Active-Active or Active-Passive from the load balancer.
When clustering WSO2 products, you create a database for the registry and other items that WSO2 products use for operations. The combined WSO2 Enterprise Integrator consists of multiple components (ESB, Business Process Manager, Message Broker, Analytics, and MSF4J).
Do you create a different registry database for each sub-product, or do you use only the one that is created for the first?
OPTION #1: WSO2_USER_DB, REGISTRY_DB, REGISTRY_LOCAL1, REGISTRY_LOCAL2
OPTION #2: ESB_WSO2_USER_DB, ESB_REGISTRY_DB, ESB_REGISTRY_LOCAL1, ESB_REGISTRY_LOCAL2, MB_WSO2_USER_DB, MB_REGISTRY_DB, MB_REGISTRY_LOCAL1, MB_REGISTRY_LOCAL2 ... etc.
I understand that the user database can be shared since the authentication manager is similar. But is that the case with the registry database?
I'm new to clustering, so this question might seem a bit basic to advanced users.
WSO2 EI can offer various services, usually run separately: for example, WSO2 EI for integration or WSO2 EI for process automation.
When you install this product in a cluster, you do it under a specific role, not combined.
In essence, you have a local registry for each node and one shared registry for the synchronization of artifacts.
I hope this helps.
Each of the profiles included in EI is a separate runtime. You only need to configure the profiles relevant to your use case.
For example, if you are using the Integrator profile (ESB) and the MB profile, you need to maintain two different sets of registry data sources for the ESB and MB, as defined in your second option.
OPTION #2: ESB_WSO2_USER_DB, ESB_REGISTRY_DB, ESB_REGISTRY_LOCAL1, ESB_REGISTRY_LOCAL2, MB_WSO2_USER_DB, MB_REGISTRY_DB, MB_REGISTRY_LOCAL1, MB_REGISTRY_LOCAL2.
If you want to share users across both applications, you can use one USER_DB instead of two separate USER_DBs (ESB_WSO2_USER_DB and MB_WSO2_USER_DB).
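For example, assuming MySQL and the database names from option #2 (with one shared user DB; user, host, and other details are placeholders to adapt):

# Create separate registry databases per profile, plus one shared user DB.
mysql -u root -p <<'SQL'
CREATE DATABASE WSO2_USER_DB;        -- shared user/permission store
CREATE DATABASE ESB_REGISTRY_DB;     -- shared registry for the Integrator profile
CREATE DATABASE ESB_REGISTRY_LOCAL1; -- local registry, Integrator node 1
CREATE DATABASE ESB_REGISTRY_LOCAL2; -- local registry, Integrator node 2
CREATE DATABASE MB_REGISTRY_DB;      -- shared registry for the MB profile
CREATE DATABASE MB_REGISTRY_LOCAL1;  -- local registry, MB node 1
CREATE DATABASE MB_REGISTRY_LOCAL2;  -- local registry, MB node 2
SQL

Each profile's master-datasources.xml would then point at its own registry databases and at the shared WSO2_USER_DB.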
The EI clustering guide can be found at https://docs.wso2.com/display/EI610/Clustered+Deployment
We have implemented API Manager clustering, and when we hit the API with large amounts of data, some events are not captured by Analytics.
So we want to implement clustering for Analytics. Since there is no document available for API Manager Analytics clustering, we have followed https://docs.wso2.com/display/CLUSTER44x/Clustering+Data+Analytics+Server+3.1.0.
We have implemented the Spark deployment pattern (two-node Spark clustering), but every time we get an out-of-memory error on one of the nodes.
Since the above is not working, we are planning to follow https://docs.wso2.com/display/CLUSTER44x/Fully+Distributed+High+Availability+Deployment+-+DAS+3.1.0, but it doesn't answer a few questions:
How do the components communicate with each other? I mean, somewhere we need to tell the receiver which IP the analyzer is on.
And does the gateway need to point to the receiver?
Can anyone help us?
Thanks
I am using WSO2 APIM 1.10.0 in a single-server deployment and would like to move to a clustered one. Looking at this documentation I could find a lot of information; however, something is bothering me: do I really have to do all of it?
I mean, I don't want to split all my workers into multiple instances. All I want is to configure two full setups (Key Manager + Publisher + Store + Gateway), each on its own host, and put a load balancer in front of them.
The requirements are simple: I would like to share the load across both of them and guarantee better availability in case one of the hosts goes down. Is it a MUST to break the whole installation down on both nodes so that I have to start each component independently with offset ports configured?
I could see that in version 2.0.0 a lot has been simplified; is there any way to achieve the same on 1.10.0?
Regards
Splitting into profiles is not mandatory. It is designed this way to scale API Manager based on TPS. If you have a low TPS count and prefer a 2-node HA setup, you can do the following.
Cluster the two nodes using wka, aws, etc.
Use dep-sync to share API artifacts between the two nodes.
Use one node as the Publisher. Publisher traffic needs to go through a single node to avoid SVN conflicts.
You can serve API requests from both nodes.
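As a quick sanity check that both nodes serve traffic (the load-balancer hostname, API context, and token below are placeholders):

# Send a handful of requests through the load balancer fronting both nodes.
# api.example.com, /pizzashack/1.0.0/menu and $TOKEN are placeholders.
for i in $(seq 1 10); do
  curl -sk -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.example.com:8243/pizzashack/1.0.0/menu"
done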
You do not always have to use the deployment pattern mentioned in the documentation you have pointed to. There are various other deployment patterns you can use according to your scalability needs and requirements.
Please refer to the following documentation: [1] for the different deployment patterns you can use for WSO2 API Manager, and [2] for more information on worker/manager separation and load balancing.
[1] https://docs.wso2.com/display/CLUSTER44x/API+Manager+Deployment+Patterns
[2] https://docs.wso2.com/display/CLUSTER44x/Separating+the+Worker+and+Manager+Nodes
Why would anyone use WSO2 Application Server instead of other application servers?
I have rather encountered only problems with it, mainly due to class-loading issues, so I would appreciate it if someone could point out the advantages, or the use cases where using WSO2 AS really makes a difference.
I can see the benefits of other standalone WSO2 products, but as far as the AS is concerned, I would rather rely on more lightweight servers and just package the libraries I need.
There are a number of advantages to WSO2 Application Server.
1.) It provides built-in support for multi-tenancy. If you have isolated departments within your organization, there is no real need to have a number of server instances; you can simply create a new tenant.
2.) Automatic lazy-loading support for tenants, web applications, and web services. In a production system, a particular tenant/web application/web service can be idle for some time, and it is a waste to allocate hardware resources continuously to such idle applications, especially if you use IaaS. WSO2 Application Server can detect such idle tenants/web applications/web services and release their resources; they will load again when a new request is dispatched to them.
3.) A wide range of deployment options: support for deploying on-premise, on public or private IaaS, or on public or private PaaS such as Apache Stratos. As an example, one can deploy an application to WSO2 App Cloud (http://wso2.com/cloud/app-cloud/) instantly without downloading anything, and later get the same experience on one of the above platforms.
4.) Deployment synchronization. In a clustered environment you may have a very large number of nodes, and rolling out application upgrades and configuration changes across the cluster can be a headache. Using deployment synchronization, you modify only the node labeled as the manager node, and deployment synchronization takes care of synchronizing the changes across the cluster automatically and consistently.
5.) When developing applications on WSO2 Application Server you can leverage Carbon platform-level features such as identity, registry, logging, distributed caching, multi-tenancy, etc. As an example, one can use the identity features provided by the platform to manage users, roles, and permissions, and for authentication and authorization, without writing anything of your own.
6.) Built-in support for security standards such as SSO among other WSO2 products.
7.) Built-in monitoring capability for web services and web applications through WSO2 BAM.
8.) An enhanced and rich dashboard for applications and services, which provides basic statistics, application management, security wizards, code generation, Try-It tools, runtime logging configuration, etc.
9.) An enhanced class-loading mechanism (starting from AS 5.1.0): within one Application Server instance you can have a number of virtual server environments at the application level. As an example, one can specify that an application runs in minimal Tomcat mode or assign it to run in Carbon mode (Tomcat + the Carbon platform); see the sketch below.
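As a rough illustration of that per-application choice (the descriptor file name webapp-classloading.xml and its elements are an assumption based on AS 5.x and should be verified against your version's documentation):

# Hypothetical sketch: select the plain Tomcat classloading environment for one
# web app by adding a descriptor to its META-INF directory before packaging.
# File name and schema are assumptions to verify against your AS 5.x docs.
mkdir -p mywebapp/META-INF
cat > mywebapp/META-INF/webapp-classloading.xml <<'EOF'
<Classloading xmlns="http://wso2.org/projects/as/classloading">
    <!-- Run this application in minimal Tomcat mode instead of Carbon mode -->
    <Environments>Tomcat</Environments>
</Classloading>
EOF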
When it comes to your specific issue, if you can specify your Application Server version and elaborate on your class-loading issue, I can provide a more specific answer.
Having said the above, I want to mention that I'm from WSO2.