Continuous operation of the Message Processor (WSO2 ESB)

When the backend is down, I would like to keep the Message Processor active at all times, without dropping any pending messages and without having to reactivate the Message Processor manually.
I have three possible approaches, listed from what I think is best to worst. Could you provide an example?
1. Use some kind of Quartz configuration file path to keep the Message Processor always active.
2. Give the value '-1' to the max.delivery.attempts parameter to get continuous execution of the Message Processor.
3. Set the max.delivery.attempts parameter to a very large number.
Thanks

Set max.delivery.attempts to -1 and give a value (in ms) to client.retry.interval, and your forwarding message processor will never be deactivated.
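A minimal Scheduled Message Forwarding Processor configuration along these lines might look like the following sketch; the processor, endpoint, and store names are placeholders, and the interval values are illustrative:

<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  name="ForwardProcessor"
                  targetEndpoint="BackendEP"
                  messageStore="PendingStore">
   <!-- -1 means the processor is never deactivated on delivery failures -->
   <parameter name="max.delivery.attempts">-1</parameter>
   <!-- retry every 5000 ms while the backend is down -->
   <parameter name="client.retry.interval">5000</parameter>
   <!-- polling interval of the processor itself -->
   <parameter name="interval">1000</parameter>
</messageProcessor>

With max.delivery.attempts at -1 the processor keeps retrying at the client.retry.interval, so pending messages stay in the store until the backend comes back.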

Related

Any issues calling a back-end service from the fault sequence

We have a requirement to call one of the back-end services with the Call mediator from the default fault sequence in case of timeouts in the normal endpoints. Do we always need to exit from the fault sequence on a timeout, or can we have logic to call back-end services from the fault sequence?
Well, your use case is related to guaranteed delivery. I suggest using a message store and message processor combination to achieve this; there you can specify the retry attempts. You simply need to use the Store mediator to store the message in a JMS store to which a Scheduled Message Forwarding Processor is listening. The message processor will then send the message to the endpoint and send the response back, and it ensures guaranteed delivery as well. You may find more information in the sample at [1]; a rough configuration sketch follows the references below. If you need a deep dive into the message processor, please refer to my blog post at [2].
[1] https://docs.wso2.com/display/ESB490/Sample+702%3A+Introduction+to+Message+Forwarding+Processor
[2] http://ravindraranwala.blogspot.com/2015/09/message-processor-coordination-support.html
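A rough sketch of that store-and-forward combination, assuming an ActiveMQ broker and placeholder names for the store, endpoint, and processor:

<!-- Message store backed by a JMS queue (broker settings are illustrative) -->
<messageStore xmlns="http://ws.apache.org/ns/synapse"
              class="org.apache.synapse.message.store.impl.jms.JmsStore"
              name="PendingStore">
   <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
   <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
</messageStore>

<!-- In the sequence (e.g. the fault sequence), persist the message instead of calling the backend directly -->
<store messageStore="PendingStore"/>

<!-- Forwarding processor that retries delivery to the backend -->
<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  name="GuaranteedDeliveryProcessor"
                  targetEndpoint="BackendEP"
                  messageStore="PendingStore">
   <parameter name="interval">1000</parameter>
   <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>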

Message Processor deactivates after FAULT

When there is a FAULT and max.delivery.attempts retries the configured number of times but the call still ends in FAULT, the Message Processor is deactivated. Can it be activated again without manual intervention? Having the Message Processor DISABLED must not block the reading of new messages from the JMS queue.
Since the message store and message processor are implemented to serve messages on a first-in, first-out basis, it is not possible to skip the message that faulted and continue the message flow.
Nevertheless, an upcoming release has an improvement that lets you drop the message from the queue after x failed delivery attempts. Having said that, dropping messages is not good practice in a scheduled store-and-forward scenario.
To learn more about message stores and message processors, read the given article.
To avoid this situation you can use the Sampling Processor to send the message to the back-end. The sampling processor immediately removes the message from the queue and processes it further. If delivery of the message fails, or you hit a fault, you can re-add it to the store in the fault sequence.
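A minimal sketch of that sampling approach (the names are placeholders; the sequence parameter tells the Sampling Processor where to dispatch each message, and the fault sequence simply puts failed messages back into the store):

<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  class="org.apache.synapse.message.processor.impl.sampler.SamplingProcessor"
                  name="SamplingProcessor"
                  messageStore="PendingStore">
   <parameter name="interval">1000</parameter>
   <parameter name="sequence">SendToBackendSeq</parameter>
</messageProcessor>

<sequence xmlns="http://ws.apache.org/ns/synapse" name="SendToBackendSeq" onError="ReStoreFaultSeq">
   <send>
      <endpoint key="BackendEP"/>
   </send>
</sequence>

<sequence xmlns="http://ws.apache.org/ns/synapse" name="ReStoreFaultSeq">
   <!-- put the failed message back into the store for a later attempt -->
   <store messageStore="PendingStore"/>
</sequence>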

What is the optimal value for Phusion Passenger PassengerMaxRequestQueueSize

I know this depends on the box hardware, but for example if 100 processes are configured, the default queue size is also 100. Does it make sense to increase PassengerMaxRequestQueueSize to 200 or 300? Probably this depends on free memory. Thoughts?
The best answer would explain the setting and give one or two examples, assuming the server processes requests for 2-3 seconds.
Thanks in advance!
Why you should limit queuing
Any requests that aren't immediately handled by an application process are queued. Queuing is usually bad: it often means that your server cannot handle the requests quickly enough.
A larger queue means that requests are less likely to be dropped. But this comes with a drawback: during busy times, the larger the queue, the longer your visitors have to wait before they see a response. This causes them to click reload, making the queue even longer (their previous request will stay in the queue; the OS does not know that they've disconnected until it tries to send data back to the visitor), or causes them to leave in frustration.
So having a limit on the queue is a good thing. It limits the impact of the above situation.
You should ensure that requests are queued as little as possible. That could mean:
Making your app faster (if your workload is CPU bound).
Upgrading to faster hardware (if your workload is CPU bound).
Increasing your app's concurrency settings (if your workload is I/O bound), e.g. by increasing the number of processes or threads.
If you cannot prevent requests from being queued, then the next best thing to do is to keep the queue short, and to display a friendly error message upon reaching the queue limit. Something like, "We're sorry, a lot of people are visiting us right now. Please try again later." The documentation for PassengerMaxRequestQueueSize tells you how to do that.
Optimal value for the queue size
It's hard to say what the optimal queue size should be. A good rule of thumb is: set the request queue size to the maximum number of requests you can handle in one second. Depending on your situation you may have to tweak things a little bit.
This rule of thumb comes from the notion of expected burst traffic. How many simultaneous requests do you expect on your server?
Suppose that your queue size is 100, and that for whatever reason you receive 150 requests at the same time. Suppose that your server is fast enough to handle 150 requests in half a second, so you know it's not a performance problem. But if you have a request queue size of 100, then 50 of those requests will be dropped with a "Request queue full" error.
In such a situation, you should set the queue size to the maximum number of concurrent requests that you think you can safely handle without performance issues.
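For the worked example above, where bursts of 150 concurrent requests can be handled comfortably, the Apache configuration might simply be (the number is illustrative, not a recommendation):

# allow bursts of up to 150 queued requests before returning "Request queue full"
PassengerMaxRequestQueueSize 150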
This SO question and the Passenger docs here talk more about working with this. If you want more information about why this is happening on your server you can try running passenger-status (usually you need to run this as root).
If you would like to show a custom error page when visitors hit this limit, you can use the following (in Apache):
PassengerErrorOverride on
ErrorDocument 503 /error503.html
As mentioned by Hongli, you can also change the PassengerMaxRequestQueueSize setting to a higher number to queue more requests. You can also set it to 0 to disable the limit entirely (for most situations this is not an optimal solution, however).
For reference, the default error message a visitor to your site will see when bumping against this limit is:
This website is under heavy load
We're sorry, too many people are accessing this website at the same time. We're working on this problem. Please try again later.

How to find who generated a Windows message

We have a very large, complex MFC application.
For some reason, a particular mode of running our application is sending WM_SIZE messages to the window. It should not be happening, and it is killing performance.
I can see the message getting handled. How can I find what or where in the code, is generating the window message?
Note: it tends to happen when we have a performance monitoring tool hooked into the application. So it might be the third party tool doing it.
But it only happens in this one particular mode of operation so it might be some sort of strange interaction.
You could check the message maps to see which windows have an OnSize handler mapped.
As a less elegant alternative, you could trap WM_SIZE in PreTranslateMessage and look at the window handle via the hwnd member of the MSG structure passed to PreTranslateMessage.
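A rough sketch of that idea, assuming an MFC CWinApp-derived class (the class name and TRACE logging are illustrative); note that PreTranslateMessage only sees posted (queued) messages, so a WM_SIZE that is sent directly may instead require a breakpoint in the OnSize handler and a look at the call stack:

BOOL CMyApp::PreTranslateMessage(MSG* pMsg)
{
    if (pMsg->message == WM_SIZE)
    {
        // Log which window is being resized; set a breakpoint here and
        // inspect the call stack to see where the message originates.
        TRACE(_T("WM_SIZE for hwnd=0x%p\n"), pMsg->hwnd);
    }
    return CWinApp::PreTranslateMessage(pMsg);
}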
How would it help to know who sends the message? I would rather focus on a solution, such as delaying processing of the message (assuming this processing is responsible for the performance hit) when an avalanche of such messages is detected.
For example, if you receive too many messages within x milliseconds, you may decide to start a timer and process only the last message received when the timer elapses. This way, you process at most one message per x milliseconds instead of every one.
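A minimal sketch of that coalescing idea in MFC (the window class, member names, timer ID, 100 ms interval, and DoExpensiveLayout are illustrative): instead of doing the expensive work in OnSize, restart a timer so the work runs once after the resize burst has settled.

void CMyWnd::OnSize(UINT nType, int cx, int cy)
{
    CWnd::OnSize(nType, cx, cy);

    // Remember only the latest size and (re)start the timer; earlier
    // pending resize work is effectively discarded.
    m_lastSize = CSize(cx, cy);
    SetTimer(ID_RESIZE_TIMER, 100, nullptr);   // fires ~100 ms after the last WM_SIZE
}

void CMyWnd::OnTimer(UINT_PTR nIDEvent)
{
    if (nIDEvent == ID_RESIZE_TIMER)
    {
        KillTimer(ID_RESIZE_TIMER);
        DoExpensiveLayout(m_lastSize);         // the heavy work runs once per burst
        return;
    }
    CWnd::OnTimer(nIDEvent);
}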

Webservice protection against big messages

I am developing a web service in Java on the JAX-WS stack and GlassFish.
Now I am a bit concerned about a couple of things.
I need to pass in an unknown amount of binary data that will be processed by an MDB. It is written this way to be asynchronous (so the user does not have to wait for the calculation to take place), somewhat fault-tolerant, and very scalable.
The input message can, however, be split into chunks and sent to the MDB, or split on the client and sent to the WS itself in chunks.
What I am looking for is a way to specify the maximum size of the input so I won't blow the heap even if someone deliberately tries to send a message that is too big. I have noticed that things tend to be a bit unstable once you hit the ceiling, and I must be able to keep running.
Is it possible to be safe against big messages, or should I use another method instead of WS? Which options do I have?
Well, I am rather new to Java EE...
If you're passing binary data, take a look at enabling MTOM for the endpoint. It utilizes streaming and has a 'threshold' parameter.
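A minimal sketch of enabling MTOM with a threshold on a JAX-WS endpoint; the class, operation, and 1 MB threshold are illustrative, and the exact streaming behaviour depends on the runtime (Metro on GlassFish can stream DataHandler attachments rather than buffering them on the heap):

import javax.activation.DataHandler;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

@WebService
@MTOM(enabled = true, threshold = 1024 * 1024)  // binary data above ~1 MB goes out as an MTOM attachment, not inline base64
public class BinaryUploadService {

    @WebMethod
    public void upload(DataHandler data) {
        // Read the attachment as a stream instead of materialising it in memory,
        // then hand it off (e.g. to a JMS queue feeding the MDB).
    }
}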