I have this issue: when I move a resource from one collection to another in the shared registry using WSO2 G-REG, the resource is moved, but when I open the same registry view in the WSO2 ESB Management Console the resource is still in the old collection. Why wasn't it changed?
Are there any settings for this behavior, maybe in registry.xml?
Actually, you need to add the correct mapping configuration [1] in the registry.xml file of WSO2 ESB.
[1]
<mount path="/_system/governance" overwrite="true">
<instanceId>reggov</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
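Note that the instanceId above must match the id of a remoteInstance entry in the same registry.xml, which tells the ESB where the shared registry lives and controls caching against it. A minimal sketch of such an entry (the URL, dbConfig name, and cacheId below are placeholders for your environment, not values from this setup):
<remoteInstance url="https://greg.example.com:9443/registry">
    <id>reggov</id>
    <dbConfig>remote_registry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>regadmin@jdbc:mysql://regdb.example.com:3306/registrydb</cacheId>
</remoteInstance>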
Remember, if you map governance to governance as in configuration [1], any other mount whose path falls under the target path (/_system/governance) will not appear.
For example, the following mapping [2] makes no sense in combination with mapping [1].
[2]
<mount path="/_system/governance/abc" overwrite="true">
<instanceId>reggov</instanceId>
<targetPath>/_system/governance/xyz</targetPath>
</mount>
The real cause of your issue may be the registry caching time of the ESB.
With the current implementation it takes around 15 minutes for an artifact deployed to G-REG to be synced to the ESB nodes, because the default cache timeout is set to 15 minutes.
I am trying to set up an Active-Active deployment of WSO2 API Manager by following this URL: Configuring an Active-Active Deployment
Everything is working fine except step 5, where I am trying to set up NFS. I moved the /repository/deployment/server folder to another drive, e.g. to:
D:/WSO2AM/Deployment/server
so that both nodes can share the deployment folder.
Now, not knowing which config files to change to point the deployment folder to a location other than the default, I changed the "RepositoryLocation" element in carbon.xml and set it to D:/WSO2AM/Deployment/server, but it looks like that is not enough. When I start the server, I get the following error message:
FATAL - SynapseControllerFactory The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
[2019-03-12 15:54:49,332] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
org.apache.synapse.SynapseException: The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
I would appreciate it if someone could help me set up NFS so that both nodes can share the same deployment folder and I don't have to worry about syncing them through some other mechanism.
Thanks
After struggling for almost a day, I found the solution in a completely separate WSO2 thread:
Enable Artifact Synchronization
In that thread, they recommend creating an SMB share (for Windows) for the deployment and tenants directories; for APIM purposes, we need to create an SMB share for the /repository/deployment/server directory.
Creating the symbolic link is just one command, as seen below:
mklink /D <APIM_HOME>\repository\deployment\server D:\WSO2\Shared\deployment\server
We need to create the symlink on both nodes, pointing to the same location. (Note that mklink fails if the link path already exists, so move or delete the original server directory first.)
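Since the original goal was NFS, the Linux equivalent would be a symlink to the NFS mount point; a sketch, assuming the share is mounted at /mnt/wso2-shared (an assumed path, not one from the thread):
ln -s /mnt/wso2-shared/deployment/server <APIM_HOME>/repository/deployment/server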
Once done, no configuration changes are needed on the APIM side. It will work by default, and you have the shared-deployment scenario configured.
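In particular, the RepositoryLocation element in carbon.xml that the question mentions can stay at its default, since the symlink makes the shared location transparent to the server. For orientation, the stock value (check your own carbon.xml, as it may differ by release) looks like:
<RepositoryLocation>${carbon.home}/repository/deployment/server</RepositoryLocation>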
We have a problem with our setup of WSO2 API Manager 1.10.0. We're using a clustered setup, with 3 gateway-worker-nodes and a manager node; separate store, publisher & key manager nodes (We recently updated from v1.8.0 to 1.10.0).
After the upgrade, roughly every 2 weeks, all our worker nodes (and sometimes other nodes) heap-dump and crash (pretty much at the same time).
Analyzing the heap dumps reveals:
28,509 instances of "com.hazelcast.nio.tcp.WriteHandler", loaded by "hazelcast" occupy 945,814,400 (44.42%) bytes
28,509 instances of "com.hazelcast.nio.tcp.ReadHandler", loaded by "hazelcast" occupy 940,796,960 (44.18%) bytes
with thread:
com.hazelcast.nio.tcp.iobalancer.IOBalancerThread # 0x7877eb628 hz.wso2.pubstore.domain.instance.IOBalancerThread Thread
We've not been able to find a remedy. The logs tell us nothing other than the nodes getting an OOM exception. This happens on nodes with very little traffic and on nodes with very high traffic (different environments show the same behavior).
Has anyone come across similar behavior? Any ideas for going forward?
This did indeed turn out to be a memory-leak issue in Hazelcast. After upgrading to a later version, the problem stopped.
In order to upgrade Hazelcast, there's a bit of "trickery" to be done.
1) Download the WSO2 GitHub repo (or simply the pom-file) for your specific Hazelcast version here: https://github.com/wso2/orbit/tree/master/hazelcast
2) Change the Hazelcast version in this section of the POM (to your preferred version):
<properties>
    <version.hazelcast>3.7.2</version.hazelcast>
</properties>
3) Build the package.
4) Deploy the built package as a patch to your server. (With Carbon 4.x based products this typically means dropping the built Hazelcast jar into <APIM_HOME>/repository/components/patches/patchXXXX/, where XXXX is a placeholder for a number higher than any existing patch.)
This is a work-around, as it's only possible to patch components with the same name as the ones already shipped with the product.
After adding a new policy and disabling an outdated policy in the PDP console (a change that displays correctly in the PDP policy view), the connected PDP process, accessed via a Java client, did not reflect the logic of the new policy and still acted according to the older, disabled rules. We also tried the "Clear Decision Cache" and "Clear Attribute Cache" widgets on the PDP Extension screen, and the PEP still shows the same issue.
A graceful restart of WSO2 did resolve the error. The server is running the WSO2 IS 5.1 release. From an operational standpoint, a restart is a rather disruptive action and should be avoided.
Are there further configuration or command options available in the WSO2 IS package to drop the caches and dynamically refresh an active policy without disrupting ongoing services?
This is an already tested and working scenario in 5.1.0.
As I understand it, you want to edit a policy and have the changes reflected after you publish the new policy, without doing any other operation, right? Yes, when you publish the same policy again with new changes, it replaces the policy in the DB and in the cache across the cluster as well. The change should be reflected at that point.
Actually, the scenario described by Harsha is not the same as the one Claude asked about. Changing the policy and publishing might work, but disabling or even deleting a policy from the PDP does not become effective unless the server is restarted.
There is a new ticket in JIRA:
Disabling/Deleting Policy from PDP Configuration does not work
What is the best / most flexible WSO2 upgrade strategy?
We are currently upgrading WSO2 DSS 3.0.1 to DSS 3.1.1, and there are some awkward changes to make in the .dbs files one by one:
wso2dss-3.0.1
<data name="BASE_PERSON_DataService"
      serviceNamespace="http://company.mn/base/BASE_PERSON">
wso2dss-3.1.1
<data description="multiple services per each table" enableBatchRequests="false"
      enableBoxcarring="false" name="BASE_PERSON_DataService"
      serviceNamespace="http://company.mn/base/BASE_PERSON" serviceStatus="active">
What is the easy way to do this? We have many data services (.dbs files).
Regards,
Eba
As far as I know, there is usually no standard migration tool or procedure available. Check that the newer version uses a compatible schema for the WSO2 registry database and so on; maybe it's the same, or you just need to create new additional tables. Sometimes you find migration scripts in the dbscripts folder. You should also check for differences in the newer XML configuration files and adjust your older custom configuration to the new format (usually few or no changes are required). As far as the artifacts are concerned, I have never heard of any way to convert them. If there are many of them, I would probably try some script and regex to batch-modify them to the new format, as sketched below.
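For illustration, a minimal sketch of such a batch edit in Python, assuming you just want to add the new attributes with their defaults to every .dbs file that lacks them (the directory path and the attribute defaults are assumptions for this example, not something prescribed by DSS):

import re
from pathlib import Path

# Attributes introduced by the newer .dbs format, with assumed default values.
NEW_ATTRS = ' enableBatchRequests="false" enableBoxcarring="false" serviceStatus="active"'

for dbs in Path("repository/deployment/server/dataservices").glob("*.dbs"):
    text = dbs.read_text(encoding="utf-8")
    if "enableBatchRequests" not in text:
        # Only rewrite the opening <data ...> tag of each service definition.
        text = re.sub(r"<data\b", "<data" + NEW_ATTRS, text, count=1)
        dbs.write_text(text, encoding="utf-8")

Diff a couple of the modified files against hand-edited ones before rolling this over the whole set.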
These are the steps you should follow if you are upgrading:
Step 1 - Deploy artifacts (dbs/datasources/drivers)
Copy the deployed data services from the current installation to the new installation by copying the repository/deployment/server folder. (All .dbs files are backward compatible, so whatever worked in WSO2 DSS 3.0.1 should work on DSS 3.1.1.) Also note that you need to copy the data source configuration properties if you have created Carbon data sources, so copy master-datasources.xml from repository/conf/datasources to the new installation.
Also copy all the content of repository/components/lib to the new installation to ensure the JDBC drivers are properly installed.
Step 2 - Change the configuration files
Apply the same changes you made to the configuration files inside OLD_DSS/repository/conf to NEW_DSS/repository/conf (if you made any such changes).
Note - If you have set up registry mounting, make sure you apply it to the new installation as before by changing the relevant configuration files, such as:
carbon.xml, axis2.xml, user-mgt.xml, mgt-transports.xml
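For reference, a Carbon data source entry in master-datasources.xml looks roughly like this (the names, URL, and credentials are placeholders, not values from this question):
<datasource>
    <name>BASE_PERSON_DS</name>
    <jndiConfig>
        <name>jdbc/BasePersonDS</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/base</url>
            <username>dbuser</username>
            <password>dbpass</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
Entries like this carry over unchanged between DSS 3.0.1 and 3.1.1, which is why copying the file is sufficient.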
Can I do a hot deployment in WSO2 ESB? For example, I want to add a new service / new route without restarting the ESB, to minimize service interruption.
If possible, can you give an example?
If it's not possible, can I know whether it will come in a future release?
Hot deployment/hot update may take the system into inconsistent states if the updates are not properly coordinated. Therefore, it is recommended to turn hot deployment and hot update off for production deployments.
More Details here
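If you do decide to turn them off, the switches live in carbon.xml (a sketch based on the stock Carbon 4.x carbon.xml; verify the element names in your version's file):
<HotDeployment>false</HotDeployment>
<HotUpdate>false</HotUpdate>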