I want to know how request processing happens in WSO2 EI 7.1.0, because I am seeing different threads like synapse.threads.core, worker_pool_size_core, snd_t_core, and lst_t_core in a thread dump.
In my use case I created a flow that contains API --> Iterate mediator --> Send mediator --> Aggregate mediator --> Respond. The request is converted from SOAP to JSON when it is sent to the endpoint, and the response is converted from JSON to SOAP when it is returned to the client. I am using the PayloadFactory mediator for the request/response format conversion, and I observed that the response conversion takes more time. Is there any configuration change required? As of now I am using the default configurations.
Thanks,
Ajay Babu Maguluri.
PassThroughMessageProcessor threads are used to mediate the message through the mediators. By default, this thread pool has 400 threads, and you can tune it using the worker_pool_size_core and worker_pool_size_max parameters.
But if you are using Clone or Iterate mediators inside your mediation, SynapseWorker threads are used to handle the targets of those mediators. By default, this thread pool has 20 threads. You can tune it using the synapse.threads.core and synapse.threads.max parameters.
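In EI 7.1.0 both pools are tuned through deployment.toml. A minimal sketch, assuming the key names from the Micro Integrator configuration catalog (verify them against your product version before use):

```toml
# HTTP PassThrough worker pool (PassThroughMessageProcessor threads)
[transport.http]
core_worker_pool_size = 400
max_worker_pool_size = 500

# Synapse mediation pool (SynapseWorker threads used by Clone/Iterate targets)
[mediation]
synapse.core_threads = 20
synapse.max_threads = 100
```

Since the Iterate targets (and therefore the PayloadFactory conversions inside them) run on the SynapseWorker pool, raising synapse.core_threads/synapse.max_threads is usually the first knob to try for this use case.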
Related
Running WSO2 EI 6.2.0
I have a simple use case (Sequence) for WSO2 EI ESB:
Extract some parameters from the original request
Call an Async REST API
Extract an Execution ID from the Async Call Payload
Poll Loop another Sync API to check Execution Status based on Execution ID
Halt polling when the Sync API says that the request is completed
Extract some parameters from the last Sync Call
Response
My problem lies in polling the Sync API until it returns some parameter saying that the previous Async execution is finished.
Is there any WSO2 EI Sequence mediator for this sort of Poll Loop?
The ESB mediations (sequences) are not really intended to keep state and wait for anything. I believe it is even intentional that there is no sort of "do/while" loop. We had a project requiring many polling steps and we used a process server to do so. So, with pure mediation, it is very difficult to accomplish what you are asking for. You may also check this one: http://bsenduran.blogspot.com/2017/08/while-loop-in-wso2-esb.html
I will propose a few things you could do:
write a custom polling mediator (I really do not advise doing so)
use a process server (requires an additional not-so-lightweight server)
use messaging with a message processor (send a message to a queue; a message processor will poll, call, and send the result back to the queue or to the response)
In all cases, if a client is waiting for a synchronous response, you need to finish the polling before the client times out. IMHO the best option is to return a message to the client (e.g. "we are working on it") and avoid polling if possible.
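The messaging option could be sketched in Synapse configuration along these lines (store name, endpoint, broker URL, and intervals are hypothetical placeholders; note that newer ESB versions use the org.apache.synapse.message.processor.impl.forwarder package for the processor class):

```xml
<messageStore name="PollingStore"
              class="org.apache.synapse.message.store.impl.jms.JmsStore"
              xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
</messageStore>

<messageProcessor name="PollingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="PollingStore"
                  targetEndpoint="StatusEndpoint"
                  xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="interval">5000</parameter>
    <parameter name="max.delivery.attempts">10</parameter>
</messageProcessor>
```

The processor retries the target endpoint on the configured interval, which approximates a poll loop without keeping state inside a sequence.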
I have a service which accepts HTTP requests from a customer site. The service then sends an HTTP request to a transactional email provider with information provided in the initial request to the service. The workflow looks like this:
CustomerSite ⟷ EmailService ⟷ TransactionEmailProvider
I can think of two possibilities for handling requests so that errors from the TransactionalEmailProvider can be reported to the CustomerSite.
The EmailService immediately sends an asynchronous request to the TransactionalEmailProvider when it receives a request from a CustomerSite. The EmailService immediately responds to the CustomerSite with a success code if the request was properly formed. If a failure happened when sending a request to the TransactionalEmailProvider, the EmailService sends a failure notification using a POST request back to the CustomerSite using a webhook implementation.
The EmailService sends a request to the TransactionalEmailProvider, and awaits a response before responding to the CustomerSite request with either a success or a failure.
Right now I'm implementing the first version because I don't want the responsiveness of the EmailService to be dependent on the responsiveness of the TransactionalEmailProvider.
Is this a reasonable way to process HTTP requests that are dependent upon a second level of HTTP requests? Are there situations in which one would be preferred over the other?
Is this a reasonable way to process HTTP requests that are dependent upon a second level of HTTP requests? Are there situations in which one would be preferred over the other?
It really depends on the system requirements: how you want the system to behave when some of its components fail, or under varying workload.
If you want your system to be reactive or scalable, you should use asynchronous requests whenever possible. For this, your system should be message driven. You could read more about reactive systems here. This seems like your first option.
If you want a simpler system, then use synchronous/blocking requests, like your option no. 2.
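The first option can be simulated in a few lines of plain Java (all names here are illustrative; the provider call and the webhook POST are simple stand-ins for real HTTP calls):

```java
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

public class AsyncEmailHandoff {
    // Stand-in for the call to the TransactionalEmailProvider;
    // here an empty payload simulates a failure.
    static boolean sendToProvider(String payload) {
        return !payload.isEmpty();
    }

    // Option 1: acknowledge the CustomerSite immediately, hand the provider
    // call to a background pool, and report failures via a webhook callback.
    static String handle(String payload, ExecutorService pool, Consumer<String> webhook) {
        pool.submit(() -> {
            if (!sendToProvider(payload)) {
                webhook.accept("provider send failed"); // would be a POST in real life
            }
        });
        return "202 Accepted"; // client response does not wait on the provider
    }
}
```

The trade-off the question describes is visible here: the response code is returned before the provider outcome is known, so the failure path must flow through the webhook rather than the original response.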
I am using WSO2 ESB 4.7.0 and ActiveMQ 5.8.0.
I followed the WSO2 ESB docs, where they provide the store-and-forward message store policy.
But I don't want to store messages; I just want to consume the messages that have already been stored by my client application. I wish to poll those messages every 5 seconds.
Is this possible in WSO2 ESB with JMS using ActiveMQ?
Will you write sample code for the proxy?
<messageProcessor name="Duplicate5"
                  class="org.apache.synapse.message.processors.forward.ScheduledMessageForwardingProcessor"
                  messageStore="Duplicate"
                  xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="interval">1000</parameter>
    <parameter name="message.processor.reply.sequence">fault</parameter>
</messageProcessor>
I tried this, but it's not working.
To pull messages from the queue, you need to use the JMS transport. Check the JMS proxy samples.
The message store persists the messages in the form of serializable Java objects. These might contain certain underlying information (e.g., properties) which is not visible when you view message details in the ActiveMQ console. The message processor uses such information stored by the store when processing the message. Therefore, as far as I understand, the message store and message processor should be used together if you want to make things functional.
If you're storing the received messages straight into an ActiveMQ queue, you might have to configure the message consumer manually. Check this use case [1].
Also look into this blog post example to get an idea [2].
[1] http://docs.wso2.org/wiki/display/ESB470/ESB+as+a+JMS+Consumer
[2] http://nuwanwimalasekara.blogspot.com/2013/04/jms-proxy-service-using-wso2-esb.html
Hope this helps.
You cannot use the message processor alone; you must have a combination of a message store and a message processor. If you want to understand the behavior of the message store and message processor, refer to the blog post written some time back.
If you want to listen/pull from JMS using the ESB, you have to use the ESB as a JMS consumer. Please refer to the documentation for further implementation details.
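To make that concrete, a minimal JMS consumer proxy might look like the following (a sketch, assuming the JMS transport is enabled in axis2.xml and an ActiveMQ connection factory named myQueueConnectionFactory is defined there; the queue name is a placeholder):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="JmsConsumerProxy"
       transports="jms"
       startOnLoad="true">
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
    <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
    <parameter name="transport.jms.Destination">MyQueue</parameter>
    <parameter name="transport.jms.ContentType">
        <rules>
            <jmsProperty>contentType</jmsProperty>
            <default>application/xml</default>
        </rules>
    </parameter>
</proxy>
```

Note that the JMS transport is listener-based rather than interval-polled, so messages are consumed as they arrive instead of on a fixed 5-second schedule.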
WSO2 ESB was not loading all proxies (more than 20), so we increased the following two values in the startup script and it worked:
-Dsnd_t_core=120
-Dsnd_t_max=600
But then we encountered several fatal issues in WSO2 ESB. Several JMS proxies were blocked and did not consume any more messages. The worst thing of all: NO ERROR in the carbon.log!
In addition the CPU load on the server went up to 100%.
A restart did not solve the problem; only deactivating scheduled tasks or proxies solved it.
We have now discovered that a VFS proxy is creating exactly 120 threads (JConsole). With each transport.PollInterval it creates a new thread.
Which values do you use for the -Dsnd_t_core and max?
Why is a VFS proxy creating a new thread (see JConsole) at each PollInterval?
As far as I know, the WSO2 ESB thread pools are based on the java.util.concurrent ThreadPool.
In this link you can read about some ThreadPool characteristics, like when it will create a new thread, the queue mechanism, and the rejected-task policy.
Which values do you use for the -Dsnd_t_core and max?
-Dsnd_t_core is the minimum number of threads in the ThreadPool, so WSO2 ESB will automatically create as many threads as you set in -Dsnd_t_core. The default value is 20: WSO2 ESB will create 20 vfs-worker threads if you don't specify -Dsnd_t_core.
-Dsnd_t_max is the maximum number of threads in the ThreadPool. WSO2 ESB will stop creating new threads when the maximum number is reached.
Why is a VFS proxy creating a new thread (see JConsole) at each PollInterval?
WSO2 ESB will create a new thread in these conditions :
If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.
So, as long as the queue is full and the maximum number of threads has not been reached, WSO2 will create a new thread to handle a task. The PollInterval specifies the delay before your service starts to poll the message or file from the source folder.
You can set the maxQueue number to unbounded (-1) so the queue will never be full and threads beyond the core size will never be created.
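These growth rules can be demonstrated with a plain java.util.concurrent.ThreadPoolExecutor (the sizes here are illustrative, not ESB defaults):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static int threadsAfterSaturation() throws Exception {
        // core=2, max=4, queue capacity=2: tasks 1-2 start core threads,
        // tasks 3-4 are queued, tasks 5-6 hit a full queue and force
        // extra threads up to the maximum.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // 4 threads alive; the queue holds the rest
        release.countDown();
        pool.shutdown();
        return size;
    }
}
```

With an unbounded queue the third rule never fires, which is exactly why setting maxQueue to -1 keeps the pool at its core size.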
I also found something in JConsole: one proxy service is handled by only one thread. I am still trying to figure this out and make one proxy service be handled by two or more threads (multithreaded).
Hope this will help answer your question :)
I have a requirement to count the jetty transactions and measure the time it took to process the request and get back the response using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when the request is sent (due to Jetty's async approach this is triggered from thread A) and when the response is complete (as onResponseComplete is done in another thread).
I usually use ThreadLocal for such state in other areas I need similar functionality, but obviously this won’t work here.
Any ideas how to overcome this?
To use jetty's async requests you basically have to subclass ContentExchange and override its methods. So you can add an extra field to it which would contain a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know the time when your request was actually sent to the server instead of when it was created you can override the onRequestCommitted() and onRequestComplete() methods.
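Since the exchange object itself travels between the threads, it can carry the timing state that a ThreadLocal cannot. A self-contained sketch of the pattern, with Jetty's callbacks mimicked by a minimal base class rather than the real ContentExchange (the JMX reporting hook is left as a comment):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal stand-in for Jetty's ContentExchange callbacks, just to show
// the pattern: timing state lives on the exchange, not in a ThreadLocal.
abstract class Exchange {
    protected void onRequestCommitted() {}
    protected void onResponseComplete() {}
}

public class TimedExchange extends Exchange {
    private volatile long sentAtNanos;
    volatile long elapsedMillis = -1;

    @Override protected void onRequestCommitted() {
        sentAtNanos = System.nanoTime(); // request actually written to the server
    }

    @Override protected void onResponseComplete() {
        elapsedMillis = (System.nanoTime() - sentAtNanos) / 1_000_000;
        // here you would update your JMX counter/timer MBean
    }

    // Simulate Jetty invoking the two callbacks from different threads.
    public static long demo() throws Exception {
        TimedExchange ex = new TimedExchange();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> ex.onRequestCommitted()).get();
        Thread.sleep(50); // pretend network round trip
        pool.submit(() -> ex.onResponseComplete()).get();
        pool.shutdown();
        return ex.elapsedMillis;
    }
}
```

In real Jetty 8 code the same two overrides go into your ContentExchange subclass, and the extra field survives the thread handoff because Jetty passes the same exchange instance to both callbacks.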