WSO2 BPS process hangs without reason

I have a simple BPEL process with 3 invokes in a loop. One of the instances hangs without any visible reason: the process is in the active state, but it is no longer executing. The last logged activity is a call to an invoke. I searched the database and found that both the request and the response are present in the ode_message table and they look correct, but the output variable of the invoke in the ode_xml_data table is not filled. There are no logs in BPS from the time the message arrived. Is there any way to find out what went wrong?
I'm using WSO2 BPS 2.1.2.

Related

Design issue for a web service callout from Salesforce

For the scenario:
As a user, whenever I try to generate or fetch codes:
If, while generating codes via the PUT callout, the request fails, the system should recognize that the PUT callout has failed and should not make the subsequent GET callout for codes that were never created in the first place.
If, while generating codes via the PUT callout, the request succeeds, the system should wait for a while (30 seconds to 1 minute) and should not poll the Service API too frequently.
I have written code that calls the PUT callout and then, after the PUT succeeds, calls the GET callout later to retrieve the codes.
The expected result is:
When the PUT callout succeeds, the system should wait 30 seconds to 1 minute before the GET callout, then retrieve all the data and store it in Salesforce using a scheduler and batch job.
You can't schedule in Salesforce on a second-level cadence. The smallest allowable increment for a Schedulable job is fifteen minutes. Salesforce asynchronous jobs are always executed based on server load and are in a queue; you cannot control the time of their execution to the second.
While some approximation of this pattern could potentially be achieved using a Queueable chain, this pattern is not at all suited to the Salesforce architecture and really should be delegated to a middleware platform.
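If the PUT-then-wait-then-GET flow is pushed out to middleware as suggested above, it reduces to a plain scripted call sequence. Below is a minimal sketch of that idea in Java; the https://api.example.com/codes endpoint, the request payload, and the 45-second pause are placeholder assumptions for illustration, not the actual Service API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class CodeSync {
    // Hypothetical endpoint; substitute the real Service API URL.
    private static final String CODES_URL = "https://api.example.com/codes";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Generate the codes via PUT.
        HttpRequest put = HttpRequest.newBuilder(URI.create(CODES_URL))
                .timeout(Duration.ofSeconds(30))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"batch\":\"example\"}"))
                .build();
        HttpResponse<String> putResponse = client.send(put, HttpResponse.BodyHandlers.ofString());

        // 2. If the PUT failed, stop: don't fetch codes that were never created.
        if (putResponse.statusCode() >= 300) {
            System.err.println("PUT failed with status " + putResponse.statusCode());
            return;
        }

        // 3. Wait 30-60 seconds instead of polling the Service API aggressively.
        Thread.sleep(Duration.ofSeconds(45).toMillis());

        // 4. Retrieve the generated codes via GET; pushing them into Salesforce
        //    (e.g. via its REST API) is out of scope for this sketch.
        HttpRequest get = HttpRequest.newBuilder(URI.create(CODES_URL))
                .timeout(Duration.ofSeconds(30))
                .GET()
                .build();
        HttpResponse<String> getResponse = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println("Fetched codes: " + getResponse.body());
    }
}
```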

Xamarin.Android handling connection failures when calling web service

We're developing a warehouse app for picking items which sends a request to a web service on every item scan, e.g. to update the scanned quantity in the DB. From the log files I saw that every now and then the connection on the Android scanners is lost, which leads to the item quantity not being updated or, in the worst case, an app crash.
What would be the best way to handle such connection failures so that I can ensure the call to the web method was successful before continuing code execution? Should I define some variable that accepts the response from the web method and repeat the call until success is returned? Or is there some smarter way?
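For what it's worth, the retry idea floated in the question is a common approach: wrap the call in a helper that retries a few times with a growing delay, and only let the scan flow continue once it succeeds (or fail loudly after the last attempt). The app is C#/Xamarin, but the pattern is language-agnostic; here is a rough sketch in Java, where updateQuantity is a made-up stand-in for the real web method.

```java
import java.util.concurrent.Callable;

public class RetryHelper {

    /**
     * Runs the given call, retrying up to maxAttempts times with a delay that
     * doubles after each failure. Throws the last exception if every attempt fails.
     */
    public static <T> T callWithRetry(Callable<T> call, int maxAttempts, long initialDelayMs)
            throws Exception {
        Exception last = null;
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;                    // e.g. a lost connection on the scanner
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);     // back off before the next try
                    delay *= 2;
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical web method call; in the real app this would be the
        // service proxy that updates the scanned quantity in the DB.
        boolean ok = callWithRetry(() -> updateQuantity("ITEM-42", 3), 5, 500);
        System.out.println("Update succeeded: " + ok);
    }

    private static boolean updateQuantity(String itemId, int qty) {
        // Placeholder for the real web service call.
        return true;
    }
}
```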

Automate Suspended orchestrations to be resumed automatically

We have a BizTalk application which sends XML files to external applications by using a web service.
BizTalk calls the web service's method, passing the XML file and the destination application URL as parameters.
If the external applications are not able to receive the XML, or if no response is received from the web service back in BizTalk, the message gets suspended in BizTalk.
Presently, for this situation, we manually go into the BizTalk admin console and resume each suspended message.
Our clients want this process to be fully automated: they want a dashboard which shows a list of message details and a button which, when clicked, resumes all the suspended messages.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured to 5 hours. Or set a retry interval to 300 minutes and multiple retries on the send port if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You set it up so that the orchestration sends the file out via a send shape while initializing a correlation set.
You then put a listen shape with two branches:
- a receive shape (following the initialized correlation set)
- a delay shape set to 5 hours.
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in OR when the response is received. In 40Alpha's example, the orchestration would wake up many times, consuming extra resources.
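BizTalk expresses this with shapes rather than code, but the listen shape with a receive branch and a delay branch boils down to "wait for the correlated response, or give up after a timeout". For readers who think in code, here is a rough Java analogue of that idea; sendFileAndAwaitResponse is a placeholder, not a BizTalk API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ListenWithDelay {

    public static void main(String[] args) {
        // Send shape: send the file out and keep a handle that completes
        // when the correlated response arrives (placeholder implementation).
        CompletableFuture<String> response = sendFileAndAwaitResponse("order.xml");

        try {
            // Receive branch of the listen shape: the correlated response arrived in time.
            String reply = response.get(5, TimeUnit.HOURS);
            System.out.println("Handled response: " + reply);
        } catch (TimeoutException e) {
            // Delay branch of the listen shape: no response within 5 hours.
            System.err.println("No response received in time; resubmit or compensate here.");
        } catch (Exception e) {
            System.err.println("Send or receive failed: " + e.getMessage());
        }
    }

    private static CompletableFuture<String> sendFileAndAwaitResponse(String file) {
        // Placeholder: in the real system this is the web service call plus a
        // correlated receive; here we just simulate a response after a short delay.
        return CompletableFuture.supplyAsync(() -> "ack for " + file,
                CompletableFuture.delayedExecutor(2, TimeUnit.SECONDS));
    }
}
```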
You may want to look at a product like BizTalk 360. It has that sort of monitoring and command capability built into it. I'm not sure it works with BizTalk 2006 R2, though; in any case you should be thinking about moving off that platform anyway, as it is going out of Microsoft support.

WSO2 delivery-guarantee pattern implementation: sampling processor doesn't work with more than 20 messages

I'm quite a newbie with WSO2, so sorry for the mistakes (and for my English too...).
I need to implement a proxy with the delivery-guarantee pattern, and here is my solution (I started from this post: http://charith.wickramaarachchi.org/2012/05/another-message-redelivery-pattern-with.html):
- the proxy invokes an external service, giving the initial client message as input
- if the external service is up, everything works fine and the reply is returned to the client
- if the external service is down or returns a SOAP fault, I put the message in a store (the retry store) and then, using a sampling processor (after a time "t"), I try again for at most "n" attempts:
- at each attempt, if the external service is down or returns a SOAP fault, I put the message back in the retry store and the process is repeated
- after "n" attempts, if the external service is still out of service, the message is stored in another store (the garbage store)
Everything works fine when I test with a single message, but when I test with more messages (> 20, though this number varies...), the sampling processor hangs completely and nothing is shown in the logs. Looking at the console, sometimes (but not always) the processor is deactivated, and in that case, to restore it, I have to undeploy, stop and restart, and then deploy my .car again.
NOTE: I have to use the sampling processor and not the forwarding processor, because the forwarding processor deactivates itself after "n" attempts and so I can't use it for my goal.
I can't post the complete code here because it is too long, but I can give you a sample .car that you can deploy and run on your WSO2 installation (to simulate the external service I used the echo service).
Here is the sample .car that you can download.
Thank you very much in advance: all suggestions are appreciated!!!
Cesare
Message Forwarding Processor
Retrieves the messages stored in a message store and reliably forwards them to a specified endpoint. This processor attempts to send one message at a time and it does not dequeue a message from the store until it receives a response from the target endpoint. Therefore this processor is ideal for implementing in-order delivery scenarios and guaranteed delivery scenarios.
Sampling Processor
Retrieves the messages stored in a message store and injects them to a given sequence at specified intervals. This processor utilizes the Quartz scheduler framework for periodically processing messages. This can be used to implement message rate throttling scenarios.
--> You can use the forwarding processor and configure it so that it is never deactivated; just add this parameter: <parameter name="max.delivery.attempts">-1</parameter>

How can I force ColdFusion to stop rendering a page until a process invoked with <cfexecute> completes?

I'm working on a script that creates a MySQL dump via <cfexecute> and then FTPs the SQL script to another server. I've resorted to checking once per second to see if the filesize has changed, and if it has not changed within the past five seconds I assume it has completed.
This is fine for the current application, but eventually I would like to be able to import the SQL script on the second server and provide some sort of notification that it has completed.
Is there some way to track the status of a running process?
If not, is there a way to accomplish a full DB export and import via ColdFusion alone?
Actually, you may not realize it, but when you call <cfexecute> without passing a timeout attribute, it defaults to a timeout of '0'. And if you read the docs on <cfexecute>, you'd see:
If the value is 0: ColdFusion starts a process and returns immediately. ColdFusion may return control to the calling page before any program output displays. To ensure that program output displays, set the value to 2 or higher.
So I would suggest passing a higher value for timeout, which will cause ColdFusion to wait for mysqldump to complete before moving on.
Reference
Check out Event Gateways[1] for one way to deal with asynchronous operations. There's a Directory Watcher gateway that comes with CF as an example.[2]
Barring that, create some sort of batch processing facility using CF Scheduled Tasks. Add the job to a database table and have a scheduled task periodically pull jobs out of the table and execute them, reporting on the result. A second scheduled task can detect that the first completed and carry out the next step of the process.
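That job-table pattern is not specific to ColdFusion: one scheduled task inserts pending rows, another pulls them, runs them, and records the outcome so a later task (or a dashboard) can react. Here is a minimal sketch of the worker loop in Java/JDBC, assuming a hypothetical jobs table with id, command and status columns and placeholder connection details.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JobRunner {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust for the real environment.
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "user", "secret")) {

            // Pull the queued jobs that the first scheduled task inserted.
            try (PreparedStatement select = db.prepareStatement(
                    "SELECT id, command FROM jobs WHERE status = 'QUEUED'");
                 ResultSet rs = select.executeQuery()) {

                while (rs.next()) {
                    long id = rs.getLong("id");
                    String command = rs.getString("command");

                    String newStatus;
                    try {
                        // Run the external command (e.g. a dump or import script)
                        // and wait for it to finish before recording the result.
                        Process p = new ProcessBuilder(command.split(" ")).start();
                        newStatus = (p.waitFor() == 0) ? "DONE" : "FAILED";
                    } catch (Exception e) {
                        newStatus = "FAILED";
                    }

                    // Record the outcome so the next scheduled task (or a dashboard)
                    // can detect completion and carry out the next step.
                    try (PreparedStatement update = db.prepareStatement(
                            "UPDATE jobs SET status = ? WHERE id = ?")) {
                        update.setString(1, newStatus);
                        update.setLong(2, id);
                        update.executeUpdate();
                    }
                }
            }
        }
    }
}
```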
[1] http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec214e3-7fa7.html
[2] http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-77f7.html