If I have WSO2 API Manager running and want to add an extra node (say I have two and want to add a third), it seems that the API XML files are not propagated to the synapse-configs directory.
Is there any way to synchronize the APIs to a new node?
Similarly, if I have WSO2 API Manager running on a shared database and delete the instance but keep the DB, is there a way to restore the APIs from the DB?
Thanks.
If I have WSO2 API Manager running and want to add an extra node (say I have two and want to add a third), it seems that the API XML files are not propagated to the synapse-configs directory.
Is there any way to synchronize the APIs to a new node?
The Deployment Synchronizer provides the capability to synchronize deployment artifacts across the nodes of a product cluster. For your cluster to work correctly, all nodes must have the same configuration.
All Carbon-based products, including WSO2 API Manager, use the Deployment Synchronizer (DepSync) to ensure that the same state is maintained across all nodes in the cluster. It maintains a central repository of the <APIM_HOME>/repository/deployment/server folder, which is where deployment artifacts are stored for all Carbon-based products, and uses that repository to synchronize the nodes.
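For reference, DepSync is enabled in <APIM_HOME>/repository/conf/carbon.xml. A minimal sketch of the SVN-based configuration is shown below; the repository URL and credentials are placeholders, and the exact element set may vary between Carbon versions, so check the clustering documentation for your release.

    <!-- Sketch of an SVN-based DeploymentSynchronizer block in carbon.xml
         (URL and credentials are placeholders). -->
    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>false</AutoCommit>       <!-- set to true only on the node that creates artifacts -->
        <AutoCheckout>true</AutoCheckout>    <!-- every node pulls changes from the central repository -->
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>https://svn.example.com/repos/depsync</SvnUrl>
        <SvnUser>svnuser</SvnUser>
        <SvnPassword>svnpassword</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>

With this in place, a freshly added third node that points at the same repository should check out the existing API XML files into its synapse-configs directory when it starts up.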
Similarly, if I have WSO2 API Manager running on a shared database and delete the instance but keep the DB, is there a way to restore the APIs from the DB?
Since some of the API configuration is kept in the file system, you can't restore the APIs from the DB alone.
Related
Is there a way to dynamically change the log levels in the API Manager in a containerized environment, where the user cannot log in and change the values in log4j2.properties? I am aware that log4j2.properties gets hot-deployed when we make changes, but how do we do the same in a Docker/Kubernetes scenario?
There are multiple options to enable logs in running servers. For a server running in a container, you can take the following actions:
Access the required containers/pods and make the changes to log4j2.properties (a minimal sketch of such a change follows the references below)
Respawn a new cluster with a modified log4j2.properties
Configure the logs of each node by accessing the management console (3.1.0 WUM only) [1]
Configure logs per API in each pod by using the REST API (3.1.0 WUM only) [2]
[1] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging/#enable-logs-for-a-component-via-the-ui
[2] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging-per-api/
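For the first option, a minimal sketch of the kind of change you would make to <APIM_HOME>/repository/conf/log4j2.properties is shown below. The component package is only an example; pick the logger you actually need, and because the file is hot-deployed the change takes effect without a restart.

    # Sketch: enable DEBUG logs for one component (the package below is illustrative).
    logger.org-apache-synapse.name = org.apache.synapse
    logger.org-apache-synapse.level = DEBUG
    # Then append the logger id to the existing comma-separated 'loggers' line, e.g.
    # loggers = <existing entries>, org-apache-synapse

In a Kubernetes deployment the same edit is usually made by mounting log4j2.properties from a ConfigMap, so you do not have to modify the file inside a running pod.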
I am trying to set up WSO2 ESB on 2 nodes, both sharing the same DB,
with a load balancer handling the load across these 2 nodes.
I'm wondering if we really need to do clustering based on the WKA scheme across these 2 nodes?
In ESB, Synapse configurations are not stored in the DB; they are stored in the file system. So yes, Hazelcast-based clustering is required, since the artifacts are synced between the nodes using an SVN-based deployment synchronizer. When the manager node gets a new artifact (say an API, a proxy, etc.), it will broadcast a sync message to all the worker nodes in the cluster, and the worker nodes will then check out any new artifacts from the SVN repository. You can read more about this from here.
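For reference, the WKA clustering itself is enabled in <ESB_HOME>/repository/conf/axis2.xml. A rough sketch is shown below; the domain, host names, and ports are placeholders, and the full parameter list is in the clustering guide for your ESB version.

    <!-- Sketch: Hazelcast clustering with the WKA membership scheme
         (domain, hosts, and ports are placeholders). -->
    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.esb.domain</parameter>
        <parameter name="localMemberHost">esb-node1.example.com</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <members>
            <member>
                <hostName>esb-node1.example.com</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>esb-node2.example.com</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>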
I created a WSO2 ESB cluster using WSO2 ELB, following this reference:
http://docs.wso2.org/pages/viewpage.action?pageId=26839403
Everything is fine, but I had a doubt about load balancing, and I tried the same in WSO2 AS.
I deployed a sample JSP file with a sysout statement through the management console.
Now, when hitting the JSP application, the sysout output is printed only on the manager node's terminal console... There is no change in the other two worker node consoles...
Did clustering happen? If so, how do I find out which worker node processed the request?
This can happen if you haven't configured a Deployment Synchronizer among the nodes. It is through this mechanism that the manager node shares artifacts with the worker nodes. If it's not enabled, the JSP page you have uploaded will only reside on the node you uploaded it to. You can find more details about the Deployment Synchronizer at http://docs.wso2.org/display/Cluster/Configuring+Deployment+Synchronizer
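The relevant split is in the DeploymentSynchronizer block of each node's carbon.xml (the same block sketched earlier for API Manager): the manager node commits artifacts to the central repository, while worker nodes only check them out. Roughly:

    <!-- Manager node: pushes newly deployed artifacts to the central repository -->
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>

    <!-- Worker nodes: only pull artifacts from the central repository -->
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>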
I deployed a Governance Registry, a master Data Services Server, and a slave Data Services Server according to this tutorial (Strategy B with JDBC):
http://wso2.org/library/tutorials/2010/04/sharing-registry-space-across-multiple-product-instances#CO_JDBC_Strategy_B
Now, how can I add my data services (.dbs files) to Data Services Servers from Governance Registry?
Now, since you have the master and slave nodes, the initial data services have to be put into the master node at the standard data services deployment directory, which is $SERVER_ROOT/repository/deployment/server/dataservices/. After you have all the data services there, you can use the new Deployment Synchronizer tool that is shipped with DSS 2.6.0 (or any Carbon 3.2.0-based product). The Deployment Synchronizer can be used to conveniently sync the deployment artifacts between the registry and the file system.
In the master node, simply go to the Deployment Synchronizer tool in the main menu and check in the data. After you do that, from the slave nodes you can simply check out the deployment artifacts, which will copy the data services to the file system, where they will be deployed. For more information, read the section under "Deployment Synchronizer" here.
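For context, a .dbs file is just an XML data service descriptor that you drop into the dataservices directory on the master node. A minimal sketch is shown below; the service name, connection details, and query are all placeholders.

    <!-- Minimal sketch of a data service descriptor (EmployeeDataService.dbs);
         all connection, table, and column details are placeholders. -->
    <data name="EmployeeDataService">
        <config id="default">
            <property name="driverClassName">com.mysql.jdbc.Driver</property>
            <property name="url">jdbc:mysql://localhost:3306/company</property>
            <property name="username">dbuser</property>
            <property name="password">dbpass</property>
        </config>
        <query id="selectEmployees" useConfig="default">
            <sql>SELECT id, name FROM employees</sql>
            <result element="employees" rowName="employee">
                <element name="id" column="id" xsdType="xs:integer"/>
                <element name="name" column="name" xsdType="xs:string"/>
            </result>
        </query>
        <operation name="getEmployees">
            <call-query href="selectEmployees"/>
        </operation>
    </data>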
Is it possible for an Azure application to offer a service to end users for carrying out long-running computation tasks that are distributed over multiple workers (with persistent storage)?
And would it be possible to provide this through a web service that is accessed by a desktop .NET application (the view), or do you always need to use a web interface with Azure?
Azure easily handles WCF-hosting, and you can make your WCF endpoint either internal (for just an Azure hosted app) or external (for a locally-installed app). Try this: create a new Azure cloud application, and add a WCF Service Web Role. This will essentially host WCF in IIS, and will provide you with what you're looking for.
Also look at my response here for information about a patch needed for WCF hosting.
Finally, about distributed processing: if your processing is done as an atomic action, yet you simply want to scale how many things you can process, this is very straightforward. You just create a worker role that reads from a queue and processes the next item. Then, your WCF service simply enqueues a request for work to be done. When the worker role completes the task and writes its results to storage, it reads the next request. You can then scale the number of worker role instances to process requests across a set of VM instances.
If, on the other hand, you want to process an individual work item across several worker roles, you'll need to create some type of custom mechanism for instructing your individual worker role instances. For this, you'll probably need to set up internal endpoints on each worker role and, in your WCF service, divide up the request among the enumerated worker role instances, then send a direct message to each instance with its specific assignment.
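The queue pattern itself is language-agnostic. As a rough sketch of the worker side only (written here in Java against the classic Azure Storage SDK rather than as a .NET worker role; the queue name, connection string, and processWorkItem step are all placeholders), the loop boils down to poll, process, delete:

    // Sketch of a queue-polling worker loop. The .NET worker role described above
    // follows the same shape; only the SDK calls differ.
    import com.microsoft.azure.storage.CloudStorageAccount;
    import com.microsoft.azure.storage.queue.CloudQueue;
    import com.microsoft.azure.storage.queue.CloudQueueClient;
    import com.microsoft.azure.storage.queue.CloudQueueMessage;

    public class WorkQueuePoller {

        public static void main(String[] args) throws Exception {
            CloudStorageAccount account =
                    CloudStorageAccount.parse(System.getenv("STORAGE_CONNECTION_STRING"));
            CloudQueueClient queueClient = account.createCloudQueueClient();
            CloudQueue queue = queueClient.getQueueReference("work-requests");
            queue.createIfNotExists();

            while (true) {
                CloudQueueMessage message = queue.retrieveMessage();  // null when the queue is empty
                if (message == null) {
                    Thread.sleep(5000);                               // back off before polling again
                    continue;
                }
                processWorkItem(message.getMessageContentAsString()); // the long-running computation
                queue.deleteMessage(message);                         // remove only after success
            }
        }

        // Placeholder for the actual computation; results would be written to
        // persistent storage (blobs or tables) before the message is deleted.
        private static void processWorkItem(String payload) {
            System.out.println("Processing: " + payload);
        }
    }

Scaling out then just means running more instances of this loop, exactly as described above for worker role instances.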