I deployed a Governance Registry, a master Data Services Server, and a slave Data Services Server according to this tutorial (Strategy B with JDBC):
http://wso2.org/library/tutorials/2010/04/sharing-registry-space-across-multiple-product-instances#CO_JDBC_Strategy_B
Now, how can I add my data services (.dbs files) to the Data Services Servers from the Governance Registry?
Now that you have the master and slave nodes, the initial data services have to be put into the master node's standard data services deployment directory, which is $SERVER_ROOT/repository/deployment/server/dataservices/. Once all the data services are there, you can use the new deployment synchronizer tool that ships with DSS 2.6.0 (or any Carbon 3.2.0-based product). The deployment synchronizer conveniently syncs deployment artifacts between the registry and the file system.
So, on the master node, simply go to the deployment synchronizer tool in the main menu and check in the data. After that, from the slave nodes, you can check out the deployment artifacts, which copies the data services to the file system, where they will be deployed. For more information, read the section under "Deployment Synchronizer" here.
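As a concrete illustration, staging a data service on the master node is just a matter of dropping the .dbs file into the hot-deployment directory. A minimal sketch follows; SERVER_ROOT, the /tmp paths, and the SampleService name are all placeholders, not part of the tutorial:

```shell
# Hypothetical example: stage a .dbs file on the DSS master node.
# SERVER_ROOT is assumed to point at the DSS installation directory.
SERVER_ROOT="${SERVER_ROOT:-/tmp/dss-master}"
DEPLOY_DIR="$SERVER_ROOT/repository/deployment/server/dataservices"

# Make sure the hot-deployment directory exists.
mkdir -p "$DEPLOY_DIR"

# Create a placeholder .dbs descriptor (a real one would define
# datasources, queries, and operations) and copy it into place;
# a running DSS picks up .dbs files from this directory automatically.
printf '<data name="SampleService"/>\n' > /tmp/SampleService.dbs
cp /tmp/SampleService.dbs "$DEPLOY_DIR/"

ls "$DEPLOY_DIR"
```

From here, the deployment synchronizer check-in on the master (and check-out on the slaves) handles propagation; no manual copying to the slave nodes is needed.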
Is there a way to dynamically change the log levels in the API Manager in a containerized environment, where the user cannot log in and change the values in log4j2.properties? I am aware that log4j2.properties gets hot deployed when we make changes, but how can I do the same in a Docker/Kubernetes scenario?
There are multiple options for enabling logs on running servers. For a server running in a container, you can take any of the following actions:
Access the required containers/pods and make the changes to log4j2.properties
Respawn a new cluster with a modified log4j2.properties
Configure logs of each node by accessing the management console (3.1.0 WUM only) [1]
Configure logs per API in each pod using the REST API (3.1.0 WUM only) [2]
[1] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging/#enable-logs-for-a-component-via-the-ui
[2] https://apim.docs.wso2.com/en/3.1.0/administer/logging-and-monitoring/logging/setting-up-logging-per-api/
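For the first two options, the actual change is an ordinary Log4j 2 properties edit. A hedged sketch of the kind of fragment involved is below; the logger name `synapse-api` and the target class are illustrative, and in Kubernetes this file is commonly mounted from a ConfigMap so it can be edited without rebuilding the image:

```properties
# Hypothetical log4j2.properties fragment: raise one component's
# level to DEBUG. Register the logger name by appending it to the
# existing comma-separated "loggers" list in the same file.
logger.synapse-api.name = org.apache.synapse.rest.API
logger.synapse-api.level = DEBUG
```

With a ConfigMap-mounted file, editing the ConfigMap and letting the updated file reach the pod takes advantage of the same hot-deployment behavior mentioned in the question.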
I am trying to set up WSO2 ESB on 2 nodes, both sharing the same DB, with a load balancer distributing the load across these 2 nodes.
I am wondering: do we really need to set up clustering based on the WKA scheme across these 2 nodes?
In ESB, synapse configurations are not stored in the DB; they are stored in the file system. So yes, Hazelcast-based clustering is required, since the artifacts are synced between the nodes using an SVN-based deployment synchronizer. When the manager node gets a new artifact (say an API, proxy, etc.), it broadcasts a syncing message to all the worker nodes in the cluster, and the worker nodes then check out any new artifacts from SVN. You can read more about this from here
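For reference, WKA-based Hazelcast clustering is enabled in each node's axis2.xml. A minimal sketch of the clustering block follows; the domain, hostnames, and ports are placeholders for your environment, not values from the question:

```xml
<!-- Sketch of the clustering section in
     <ESB_HOME>/repository/conf/axis2/axis2.xml; values are placeholders. -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- Well-known-address membership: nodes join via a fixed member list. -->
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberHost">node1.example.com</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>node2.example.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```

Each node lists the other well-known members, which is what lets the manager's syncing messages reach the workers.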
If I have WSO2 API Manager running and want to add an extra node (say I have 2 and want to add a third), it seems that the API XML files are not propagated to the synapse-configs directory.
Is there any way to synchronize the APIs to a new node?
Similarly, if I have WSO2 API Manager running on a shared database and delete the instance but keep the DB, is there a way to restore the APIs from the DB?
Thanks.
If I have WSO2 API Manager running and want to add an extra node (say
I have 2 and want to add a third), it seems that API xml files are not
propagated to the synapse-configs directory.
Is there any way to synchronize the apis to a new node?
Deployment Synchronizer provides the capability to synchronize deployment artifacts across the nodes of a product cluster. For your cluster to perform correctly, all nodes must have the same configurations.
All Carbon-based products, including WSO2 API Manager, use Deployment Synchronizer (DepSync) to ensure the same state is maintained across all nodes in the cluster. It maintains a central repository of the <APIM_HOME>/repository/deployment/server folder, which is where deployment configurations are stored for all Carbon-based products, and uses that repository to synchronize the nodes.
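For context, SVN-based DepSync is typically switched on in each node's carbon.xml. A hedged sketch of that block is below; the repository URL and credentials are placeholders, and on worker nodes AutoCommit would normally be false so that only the manager commits:

```xml
<!-- Sketch of the DeploymentSynchronizer block in
     <APIM_HOME>/repository/conf/carbon.xml; values are placeholders. -->
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- Manager node commits changes; workers only check out. -->
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/repos/wso2</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```

A new third node configured this way checks out the shared repository on startup, which is what brings the API XML files into its synapse-configs directory.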
Similarly, if I have WSO2 API Manager running on a shared database and
delete the instance but keep the DB, is there a way to restore the
APIs from the DB?
Since some of the parameters are kept in the file system, you can't restore the APIs from the DB alone.
I created a WSO2 ESB Cluster using WSO2 ELB using the reference
http://docs.wso2.org/pages/viewpage.action?pageId=26839403
Everything is fine, but I had a doubt about load balancing, so I tried the same in WSO2 AS.
I deployed a sample JSP file containing a sysout statement through the management console.
Now, when hitting the JSP application, the sysout output is printed only on the management node's terminal console... There is no change on the other two worker node consoles...
Has clustering actually happened? If so, how can I find out which worker node processed the request?
This can happen if you haven't configured a Deployment Synchronizer among the nodes. It is through this mechanism that a manager node shares artifacts with the worker nodes. If it's not enabled, the JSP page you uploaded will only reside on the node you uploaded it to. You can find more details about the Deployment Synchronizer at http://docs.wso2.org/display/Cluster/Configuring+Deployment+Synchronizer