WSO2 Enterprise Integrator Clustering

When clustering WSO2 products, you create a database for the registry and other items that WSO2 products use for their operations. The combined WSO2 Enterprise Integrator consists of multiple elements (ESB, Business Process Manager, Message Broker, Analytics, and MSF4J).
Do you create a different registry database for each sub-product, or do you use only the one created for the first?
OPTION #1: WSO2_USER_DB, REGISTRY_DB, REGISTRY_LOCAL1, REGISTRY_LOCAL2
OPTION #2: ESB_WSO2_USER_DB, ESB_REGISTRY_DB, ESB_REGISTRY_LOCAL1, ESB_REGISTRY_LOCAL2, MB_WSO2_USER_DB, MB_REGISTRY_DB, MB_REGISTRY_LOCAL1, MB_REGISTRY_LOCAL2 ... etc.
I understand that the user database can be shared, since the authentication manager is the same. But is that also the case with the registry database?
I'm new to clustering, so this question might seem a little basic to advanced users.

WSO2 EI can offer various services, usually run separately: for example, WSO2 EI for integration or WSO2 EI for process automation.
When you install this product in a cluster, you do it under a specific role rather than combined.
In essence, you have a local registry for each node and one shared registry for synchronizing artifacts.
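As a rough illustration of that local/shared split, the shared registry spaces are typically mounted in <EI_HOME>/conf/registry.xml roughly as below. This is only a sketch: the datasource name, cache ID, hostnames, and target paths are placeholders, not values taken from any official guide.

<!-- Sketch of a shared registry mount in registry.xml; all names and URLs are illustrative. -->
<dbConfig name="sharedregistry">
    <dataSource>jdbc/SharedRegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>sharedregistry</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://dbhost:3306/shared_registry_db</cacheId>
</remoteInstance>

<!-- Config and governance spaces are shared across nodes; the local registry stays node-specific. -->
<mount path="/_system/config" overwrite="true">
    <instanceId>sharedregistry</instanceId>
    <targetPath>/_system/nodeConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>sharedregistry</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>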
I hope it helps you.

Each of the profiles included in EI is a separate runtime. You only need to configure the profiles required by your use case.
For example, if you are using the Integrator profile (ESB) and the MB profile, you need to maintain two different sets of registry data sources, one for ESB and one for MB, as defined in your second option:
OPTION #2: ESB_WSO2_USER_DB, ESB_REGISTRY_DB, ESB_REGISTRY_LOCAL1, ESB_REGISTRY_LOCAL2, MB_WSO2_USER_DB, MB_REGISTRY_DB, MB_REGISTRY_LOCAL1, MB_REGISTRY_LOCAL2.
If you want to share users across both applications, you can use one USER_DB instead of the two separate ESB_WSO2_USER_DB and MB_WSO2_USER_DB databases.
The EI clustering guide can be found at https://docs.wso2.com/display/EI610/Clustered+Deployment
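To make the second option concrete, here is a minimal sketch of how one of the per-profile registry datasources and a shared user datasource might be declared inside the <datasources> section of the Integrator profile's <EI_HOME>/conf/datasources/master-datasources.xml. All names, URLs, and credentials below are placeholders, and the MB profile would declare an equivalent MB_REGISTRY_DB in its own runtime.

<!-- Integrator (ESB) profile: its own registry DB plus a user DB shared with the MB profile. -->
<datasource>
    <name>ESB_REGISTRY_DB</name>
    <description>Registry database for the Integrator (ESB) profile</description>
    <jndiConfig>
        <name>jdbc/ESBRegistryDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://dbhost:3306/esb_registry_db</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
<datasource>
    <name>WSO2_USER_DB</name>
    <description>User store shared by the ESB and MB profiles</description>
    <jndiConfig>
        <name>jdbc/WSO2UserDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://dbhost:3306/wso2_user_db</url>
            <username>useradmin</username>
            <password>useradmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>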

Related

WSO2 physical server configuration issue

We're currently working/testing/experimenting with WSO2. My question is: does WSO2 provide any service if the physical server itself (on which WSO2 is hosted) shuts down for any reason?
I know there may be several MANUAL alternatives for that, but does WSO2 have a particular feature for physical server migration?
Note: please let me know what you think before downvoting.
This can be achieved with clustering, which is supported by WSO2 products; please refer to the clustering documentation for more information [1]. Failover and switchover can be configured to happen automatically, and you can also achieve high availability (HA) with multiple redundant nodes.
[1] https://docs.wso2.com/display/CLUSTER44x/Overview
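For a rough idea of what joining nodes into such a cluster involves, the Hazelcast-based clustering section of <CARBON_HOME>/repository/conf/axis2/axis2.xml in Carbon 4.x products looks roughly like the sketch below. The domain name, hostnames, and ports are placeholders, so follow the documentation above for the actual procedure.

<!-- Well-known-address (WKA) membership: each node lists the well-known members it joins through. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.example.domain</parameter>
    <parameter name="localMemberHost">node1.example.com</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>node1.example.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>node2.example.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>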

Is it possible to build a WSO2 distribution with ESB and api gateway?

I want to deploy only one Carbon product (one JVM) containing the ESB component and the API Gateway component.
Is it possible to build this kind of application? Is there any reference documentation explaining how to do that?
Thanks.
Every Carbon-based product comprises a set of installable/uninstallable features. You can take an ESB pack and install the API Manager features on it (you can do this via the management console UI). However, you won't be able to put together mismatched versions of the features (features released for different Carbon platform versions/components), so this is subject to the availability of matching components. The documentation on feature management will give you some insight into how to do it. (There can be cases where components are in total conflict, though.)
But if your requirement is just running two products on the same machine, you can consider running them with port offsets, which is easier.
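As a minimal sketch of that approach (the offset value is only an example), you would leave the first server on the default ports and give the second one an offset, either by passing -DportOffset=1 at startup or in the second server's repository/conf/carbon.xml:

<!-- Shifts every default port by the offset, e.g. the HTTPS management port 9443 becomes 9444. -->
<Ports>
    <Offset>1</Offset>
</Ports>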

Minimum clustering of API Manager with Internal Store

I am trying to set up a clustered environment for WSO2 API Manager. In the environment I need, there is no requirement for an external store. I am looking to start with the smallest number of nodes and JVMs that is still scalable as the number of API requests grows.
I have looked at the WSO2 documentation on clustering WSO2 API Manager, and specifically at the "Store and Publisher components in a single server node" model.
Some questions on this deployment model:
Where is the Gateway Manager deployed?
I understand the Publisher and Store are on the same server node. Can they be run in the same JVM? If so, would you use the default profile that also starts up the KM and Gateway, or something else?
(Apologies, but I can't post the image due to my low reputation value. I would have thought the image of the model would have helped.)
Yes, the API Store and Publisher will run in the same JVM. As there is no profile for Store & Publisher (see [1] for the available profiles), we need to start API Manager in the default profile. And yes, it will start the KM and Gateway components as well, but you can block (not expose) the Gateway ports. Regarding the Gateway Manager, I guess one Gateway node can act as both manager and worker in this deployment pattern.
[1] https://docs.wso2.com/display/AM180/Product+Profiles
As per the design, the Publisher is a subset of the Store. So if you start with the api-store profile, you will get the Publisher as well. In this case you can start the server with the following option:
-Dprofile=api-store

WSO2 Stratos - Multi-tenant application development

I am exploring WSO2 Stratos and have watched some of the webinar recordings. I would like to create an application and expose it as SaaS. One of the WebEx recordings covers this in detail, but it does not explain multi-tenancy for data storage. Is there a tutorial available for this? I would like to use a shared schema for data storage. What kind of database can I use for this (e.g. MySQL, MongoDB, Cassandra)? Is it possible to use frameworks like Athena? I am just trying to do a kind of POC, and then I need to decide whether this platform really fits the application I am thinking of building.
You can create databases through the WSO2 Storage Server in StratosLive, which can be accessed via storage.stratoslive.wso2.com. You need to create a database and attach a user to it. Then you can access that database from your webapp (you will get a JDBC URL) as you would in normal cases. You can also create Cassandra keyspaces in the Storage Server, but we don't have MongoDB support at the moment. There is no documentation on this yet.
Yes, you're right: the multi-tenant data architecture is up to the user to decide. This white paper from Microsoft explains multi-tenant data architecture nicely. The whitepaper, however, is written assuming you're using an RDBMS. I haven't played around with Athena, so it's difficult to say how it will map to what Stratos provides. The data architecture might be different when you're using a NoSQL DB, and different DBs have different ways of filtering a set of data by a given tenant (or an ID). So, going by the whitepaper, it will probably map to:
Different DBs -> Different keyspaces
Different tables -> Different column families
Shared schema -> Shared column family
It's better to define your application characteristics beforehand and then choose an appropriate DB.

WSO2 CEP vs BAM

I am trying to understand the whole WSO2 SOA topology, but I am not able to understand how CEP and BAM fit together.
Can CEP provide visual monitoring of processed events, e.g. through integration with WSO2 GS?
Although the WSO2 website says CEP is tightly integrated with BAM for post-processing, I couldn't find any scenario explaining this or how it's done. (Can CEP feed BAM? How do you configure that?)
Why would you have CEP + BAM together? Any use case?
All WSO2 projects are capable of integrating with each other because they are based on the same underlying platform (WSO2 Carbon); in this particular case, WSO2 CEP and GS. One way is to persist processed results from CEP in a data store or file and read them from a gadget backend, so that the gadget (the frontend) can visualize them in GS. If you want, you can also install GS features (dashboard, gadget repository, etc.) on top of CEP and use the same server runtime, but for the latter both have to be based on the same Carbon version.
The tight integration means that the same data agent can send events to BAM as well as to CEP; they both share the Thrift and REST APIs. Similar to point 1, CEP and BAM can exist in the same runtime or can be downloaded and used separately. One related article is available here.
The primary use case is processing the same events for real-time analytics in CEP and just-in-time (near real-time) batch-based processing in BAM. For example, processing uptime-related analytics for servers can be broken down to fit both servers: in CEP the query can be "alert me if a server does not respond to 3 requests within 30 secs", while in BAM you can plot the uptime trend over an hour/day/week.