I am new to WSO2 ESB clustering; in fact, I'm still learning about it and don't fully understand the concept yet.
In my case, I installed WSO2 ESB on 2 servers. My questions are:
1. Are the two WSO2 ESB instances working as one application or as two separate applications?
2. If I configure one WSO2 ESB, will the other ESB have the same configuration?
3. If I configure a VFS proxy service on the ESB to poll files from a specific directory, will it create a conflict? I think that both ESBs will poll the same file.
Please enlighten me :) Thanks...
Check my answers inline.
Are the two WSO2 ESB instances working as one application or as two separate applications?
In any cluster, the applications or servers work together to provide high availability to the end users. To the outside, the cluster behaves as one single server (application).
If I configure one WSO2 ESB, will the other ESB have the same configuration?
Yes. You can achieve this with deployment synchronization. It makes sure all your changes are propagated to the other nodes of the cluster.
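For reference, deployment synchronization is configured in each node's carbon.xml. A minimal sketch for an SVN-based setup might look roughly like this (the repository URL and credentials are placeholders, and element names can vary slightly between Carbon versions):
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- Typically only the manager node commits; worker nodes only check out -->
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2/repo</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>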
If I configure a VFS proxy service on the ESB to poll files from a specific directory, will it create a conflict? I think that both ESBs will poll the same file.
No. Since only one server is active at a time, this should not be a problem.
You can learn more from the following link:
http://docs.wso2.org/display/CLUSTER420/Clustering+WSO2+Products
Are the two WSO2 ESB instances working as one application or as two separate applications?
No, both ESBs are separate applications. The clustering is done mainly to ensure availability and scalability. So even if a member of the cluster fails, the others continue to operate.
If I configure one WSO2 ESB, will the other ESB have the same configuration?
Each ESB can get the same cluster configuration, but each is identified separately by the load balancer (LB) that fronts the cluster. Therefore, each member has a different IP address, and the members can even use different member ports to form the cluster.
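For reference, cluster membership is typically defined in each node's axis2.xml. A rough sketch with two well-known members on different member ports could look like this (the host addresses are placeholders, and the Hazelcast-based agent class shown here is the one used in recent Carbon versions):
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.esb.domain</parameter>
    <parameter name="localMemberHost">10.0.0.11</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>10.0.0.11</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>10.0.0.12</hostName>
            <port>4100</port>
        </member>
    </members>
</clustering>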
If I configure a VFS proxy service on the ESB to poll files from a specific directory, will it create a conflict? I think that both ESBs will poll the same file.
Each request is handled independently by a separate ESB node, depending on the algorithm of the fronting load balancer. When two nodes poll the same file, there can be conflicts: since the VFS transport deals with file operations, these can fail at times because some resource is unavailable. For such cases the VFS transport provides the options described below.
If both nodes point to the same directory in a clustered environment, both proxy services will try to poll the files and cause issues. Therefore, if you want to poll files in a clustered environment, the best practice is to use inbound endpoints [1]. If it is necessary to use a proxy service, you can apply the following property in your proxy so that the proxy service only operates on one server; then there will be no conflict between the two proxy services. Please refer to [2] for further details.
<parameter name="transport.vfs.ClusterAware">true</parameter>
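For illustration, a minimal VFS polling proxy using this parameter could look roughly like the following (the file URIs, polling interval, and sequence content are placeholders; see [2] for the full list of VFS transport parameters):
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FilePollingProxy" transports="vfs" startOnLoad="true">
    <parameter name="transport.vfs.FileURI">file:///data/in</parameter>
    <parameter name="transport.vfs.ContentType">text/xml</parameter>
    <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
    <parameter name="transport.PollInterval">15</parameter>
    <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
    <parameter name="transport.vfs.MoveAfterProcess">file:///data/processed</parameter>
    <!-- Ensures the polling proxy is active on only one node of the cluster -->
    <parameter name="transport.vfs.ClusterAware">true</parameter>
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
</proxy>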
[1] https://docs.wso2.com/display/EI620/File+Inbound+Protocol
[2] https://docs.wso2.com/display/ESB500/VFS+Transport
Is there a common way to establish a network connection from a CloudFoundry service to a CloudFoundry app which the service is bound to?
Typically, apps receive their binding credentials and establish network connections to the provisioned service, for example databases.
It would be very handy to establish a connection from a service to an app, so the service could scrape endpoints that are provided by the app.
Any thoughts on this: why is it (or isn't it) possible, and why could it be a bad idea?
Normally, you have your service and the application receives credentials from the service through the service binding (i.e. VCAP_SERVICES).
You want to reverse this arrangement, which is fine, but the service will need to have some way to know how to reach the applications. The way to do this would be through routes bound to your application.
I have seen something like this done before, this is roughly the process. I'm sure you can adapt it to your requirements.
Create a service broker. The broker is responsible for managing service instances and service credentials. The broker is notified when an instance is created and when a binding occurs. Your broker will need to handle these requests.
The broker, in addition to its normal responsibilities, is going to need to maintain state indicating which applications have instances & bindings. In addition, the broker is going to need to use the org/space/app guids it's provided through the service broker API and talk to the CloudFoundry API to fetch the routes for the applications that are bound to it. You don't usually get these through the service broker API, but since you want to talk to the applications from the service, you need this information. It gives the service a way to communicate with the application.
Your broker may also provide the service in question (i.e. talking to applications), or it can delegate to some other process/container/VM to provide the service. If your service does the latter, then you need a way to a.) create the process/container/VM and b.) pass along the information it requires to talk to your application.
Obviously, you need to code the logic that will take the routes for applications that have created instances and bindings and communicate with them.
There can be some limitations with using the routes. First, not all routes are public. For internal routes, it would be kind of complicated to allow the broker/service to talk to the app. The broker/service would need to be an application on CF and you would need to specifically allow that communication (would require more API calls). Second, some apps just don't have routes. Perhaps this won't happen in your case, but it's worth considering. Lastly, not all routes are HTTP, some can be TCP as well. Your broker/service would need to handle both of those.
A variation on the above process, instead of using routes or talking to the API, you could have your broker/service provide some mechanism through the credentials to the application such that it registers itself with the broker/service. Thus when your applications start, they'll read the service info, register with the service and then go about their business. In this way, the application would have some additional flexibility about what information it provides when it registers with the broker/service. The downside is that the app has to do some work to be compatible.
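To make that last variation more concrete, here is a rough Java sketch of an application registering itself with the service at startup. The service name ("monitoring-service") and the credential fields (register_url, token) are hypothetical names for whatever your broker puts into the binding; only VCAP_SERVICES, VCAP_APPLICATION and its application_uris field are standard Cloud Foundry.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ServiceRegistration {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // VCAP_SERVICES holds the credentials injected by the service binding.
        JsonNode services = mapper.readTree(System.getenv("VCAP_SERVICES"));
        // "monitoring-service", "register_url" and "token" are hypothetical names
        // for whatever the broker places into the binding credentials.
        JsonNode creds = services.path("monitoring-service").get(0).path("credentials");
        String registerUrl = creds.path("register_url").asText();
        String token = creds.path("token").asText();

        // VCAP_APPLICATION lists the routes (application_uris) of this app.
        JsonNode app = mapper.readTree(System.getenv("VCAP_APPLICATION"));
        String ownUri = app.path("application_uris").get(0).asText();

        // Tell the service where to reach us.
        HttpRequest request = HttpRequest.newBuilder(URI.create(registerUrl))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"uri\":\"" + ownUri + "\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Registration status: " + response.statusCode());
    }
}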
We have developed a custom JAX-WS application that essentially achieves two things.
Exposes a few web service methods to perform some functionality.
Utilizes org.quartz.Scheduler to schedule and execute some polling tasks that monitor and process data in a few database tables. (The logic here is slightly complex, hence a custom application was chosen over the use of WSO2 DSS.)
This application is deployed on WSO2 AS 5.2.1 and runs quite seamlessly. However, I'm unsure what will happen if we have to cluster the application server. Logically, I would think that each node will have its own instance of the custom application running within it, and hence its own scheduler. Would this not increase the risk of processing the same record across both instances? Is my interpretation of this scenario correct, from a clustering perspective?
Yes, you are correct. In a cluster of App Server nodes, each node will have its own instance of the application, so in your case each node will have a separate scheduler. You may consider using scheduled tasks from ESB 4.9.0, where WSO2 has added coordination support for clustered environments.
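If moving the tasks to the ESB is not an option, another commonly used approach (not WSO2-specific) is to let Quartz itself coordinate the nodes through a shared JDBC job store, so that each trigger fires on only one node. A rough sketch, assuming a shared database and a hypothetical data source named quartzDS:

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerFactory {

    public static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "PollingScheduler");
        props.put("org.quartz.scheduler.instanceId", "AUTO");            // unique id per node
        props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.put("org.quartz.threadPool.threadCount", "3");
        // Shared JDBC job store: all nodes see the same triggers, but only one node fires each of them.
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.isClustered", "true");
        props.put("org.quartz.jobStore.dataSource", "quartzDS");
        // Hypothetical connection details for the shared Quartz tables.
        props.put("org.quartz.dataSource.quartzDS.driver", "com.mysql.jdbc.Driver");
        props.put("org.quartz.dataSource.quartzDS.URL", "jdbc:mysql://dbhost:3306/quartz");
        props.put("org.quartz.dataSource.quartzDS.user", "quartz");
        props.put("org.quartz.dataSource.quartzDS.password", "quartz");
        return new StdSchedulerFactory(props).getScheduler();
    }
}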
I'm interested in using Apache Synapse to monitor Apache ODE invocations. Is there any configuration to redirect all Apache ODE calls, changing the endpoint and adding a WSA-To header?
Is there any other way to do that just by changing the Apache ODE configuration?
I've been looking in ODE's documentation; all the references to redirection involve modifying the process definitions (BPELs):
https://ode.apache.org/endpoint-references.html
Thanks
You are correct that BPEL's support for endpoint reference manipulation is something that is done within a business process as part of its execution. This is typically to support dynamic addressing by extracting endpoints from messages or constructing them from some data within the exchange. I wouldn't try to modify your process definitions to have knowledge of your monitoring requirements. This should be external and if done properly then completely declarative.
If you're using Apache ODE within ServiceMix then you should be able to handle this with a Camel route. Have all of the endpoints for your process deployment target a small Camel route where you can listen in, tee the message, or add whatever monitoring behavior you need.
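A rough sketch of such a route in the Camel Java DSL, assuming hypothetical endpoint URIs (the ODE process would be pointed at the jetty endpoint instead of the backend, and real WS-Addressing handling would normally go through camel-cxf rather than a plain header):

import org.apache.camel.builder.RouteBuilder;

public class OdeMonitoringRoute extends RouteBuilder {

    @Override
    public void configure() {
        // The ODE process invokes this endpoint instead of the backend directly.
        from("jetty:http://0.0.0.0:8280/orders")
            // Send a copy of every message to the monitoring collector.
            .wireTap("http://monitor.example.com/collector")
            // Simplified stand-in for a WSA-To header on the forwarded request.
            .setHeader("WSA-To", constant("http://backend.example.com/orderService"))
            // Forward to the real service endpoint.
            .to("http://backend.example.com/orderService");
    }
}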
If you're using Apache ODE within a simple web container, then you can still bind the endpoints externally from the process to be the endpoint of your choosing. See their deployment descriptor documentation for more info.
Currently we are using WSO2 IS 4.1.0. We use its SOAP-based services (AuthenticationAdmin and the entitlement service) to get policy decisions and claim values, and we use CXF for our web service clients. When we make calls with 500 concurrent users from a single machine everything works fine, but when we go up to 1000 concurrent users we see huge response times for these service calls to WSO2 IS. Can you tell us whether there is any configuration change we need to make for tuning? We increased the number of connections per host for the WSO2 IS Axis client, but we still didn't see any improvement. By the way, we are using the default out-of-the-box configuration of WSO2 IS.
Thanks
Kishore
There are some places where you can improve the performance.
1. Increase the memory settings. You can find them in the wso2server script file in <IS_HOME>/bin. Try increasing the default values, for example:
-Xms1024m -Xmx2048m -XX:MaxPermSize=1024m
2. Increase the max thread pool size in the catalina-server.xml file, which you can find in <IS_HOME>/repository/conf/tomcat, e.g.:
maxThreads="250"
3. Increase the caching timeout value in the entitlement.properties file, which you can find in <IS_HOME>/repository/conf/security.
4. Please check that you are not authenticating for each request. You need to call AuthenticationAdmin the first time, get the session cookie, and then reuse that cookie for subsequent requests (see the sketch below).
Otherwise, you could use JConsole or JProfiler to see what is going wrong. Also, depending on your environment, this may simply be the maximum load that one server can handle, in which case you need to scale horizontally (add more WSO2 Identity Server instances to a cluster).
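For point 4, a rough sketch of logging in once and reusing the session cookie (the stub class names come from the WSO2 admin-service stub JARs and may differ slightly between IS versions; the URL and credentials are placeholders):

import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;
import org.wso2.carbon.identity.entitlement.stub.EntitlementServiceStub;

public class AdminServiceClient {

    public static void main(String[] args) throws Exception {
        String services = "https://localhost:9443/services/";

        // Authenticate once and capture the session cookie from the response.
        AuthenticationAdminStub authStub = new AuthenticationAdminStub(services + "AuthenticationAdmin");
        authStub.login("admin", "admin", "localhost");
        String cookie = (String) authStub._getServiceClient()
                .getLastOperationContext()
                .getServiceContext()
                .getProperty(HTTPConstants.COOKIE_STRING);

        // Reuse the same cookie for subsequent calls instead of re-authenticating.
        EntitlementServiceStub entitlementStub = new EntitlementServiceStub(services + "EntitlementService");
        entitlementStub._getServiceClient().getOptions()
                .setProperty(HTTPConstants.COOKIE_STRING, cookie);

        // ... call the entitlement service (e.g. for policy decisions) using the shared session ...
    }
}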
I'm very new to the WSO2 stack and wonder when I should use a WSO2 ESB proxy service and when I should create a business process via BPEL.
I think they do the same thing: performing a task via service composition and some mediation.
There is a fundamental difference between ESB and BPEL.
The role of ESB is to provide various non-functional properties to the business requests. ESB is thus used for e.g. mediation, transformation, security and virtualization/proxying of the requests. While it can do some simple message enrichment using mediation sequences, its primary purpose is to mediate messages between the various services/hosts in the system.
On the other hand, BPEL is dedicated to implementing business services and handling complex business workloads. Therefore, the role of BPEL is to provide the functional properties of the business process, e.g. implementing the actual business process logic.
ESB and BPEL together thus deliver the separation of concerns that is often emphasized by component-based and service-oriented architectures.
If you have a well-defined, long-running business process, you should use WSO2 BPS. You can use WSO2 ESB for short-spanning processes with a shorter life cycle. WSO2 BPS has many integration points through which you can control the business process, with features such as Human Tasks. The ESB has some of these capabilities too, but it is not as convenient or as optimized as BPS for long-running, well-defined business processes.