Aggregate messages from an <all> flow in Mule - web-services

The story so far
I have a SOAP service that sends its response (say Response_A) to an <all> flow. Inside the flow, there are three SOAP services (say B, C and D) that take inputs from Response_A. I am taking fields from Response_A and using XSLT, I can formulate requests for B, C and D.
Quick question: I am using <async> blocks inside <all> to process messages in parallel. The processing was not parallel when using <all> and <processor-chain> tags inside it. Any ideas why?
The roadmap
I will read the responses from all three B, C and D and combine them into a single response (probably using XSLT again) and send it to E.
The roadblock
After coming out of the <all> flow, I get a MuleMessageCollection. How do I read it and combine the messages into a single message?
My attempts
I tried aggregating the messages based on a correlation ID, but I noticed that a correlation ID is present only when the message from A was split by the <all> tag and was being sent to B, C and D. The correlation ID vanishes in the SOAP envelope that comes back as a response from these services, even though I turned enableMuleSoapHeaders to true. I cannot modify the services. So, how do I make the correlation ID appear on the SOAP response (provided a correlation ID is absolutely necessary if I want to merge messages)?
I will also need the group size to aggregate messages, I guess.
I even tried adding a correlation ID using a message properties transformer, but it did not work that way. I was stuck with a MessageCollection and did not know how to read it, even though there were probably messages with correlation IDs inside it.
So, it boils down to one question. What are the ways to merge messages from a MessageCollection?
I want to do this in xml, without writing a custom transformer in Java. Is it possible? What should be my approach?
Note: The response messages from B, C and D have different DOM structures. The merged message that I want to create has a different DOM from all of A, B, C and D's responses and requests.
If it helps, I am trying to work on a situation similar to the one described here: http://ricston.com/blog/?p=640. The only difference is that I am using flows and the <all> tag.

The processing was not parallel when using <all> and <processor-chain> tags inside it. Any ideas why?
This is because <all> and <processor-chain> are synchronous by nature: they do not parallelize anything.
Now, for your problem of aggregating remote asynchronous responses: if the remote service doesn't reflect back the Mule headers (which is the case for the vast majority of non-Mule-powered services), you need to find out whether there is a value in the response that the remote service does reflect back (that could be a SOAP header, or a field like an ID inside the SOAP body). If that is the case, you can configure the collection-aggregator with an expression-message-info-mapping that specifies that correlation will be done not with the Mule header but with another source.
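For example, a rough sketch of such an aggregator, assuming Mule 3.x (the XPath, namespace prefixes and timeout are placeholders for whatever value the remote services echo back; expression syntax varies between Mule 3 versions, so check it against your schema docs):

<!-- Correlate on a value reflected back in each SOAP response body -->
<collection-aggregator timeout="30000" failOnTimeout="true">
    <expression-message-info-mapping
        messageIdExpression="#[message:id]"
        correlationIdExpression="#[xpath:/soapenv:Envelope/soapenv:Body/ns:response/ns:requestId]"/>
</collection-aggregator>
<!-- The aggregated MuleMessageCollection can then be merged into one payload, e.g. with an XSLT transformer -->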
Otherwise, you'd rather keep the all block and make the calls one after the other...


Getting list of completed tasks for a process-instance and list of tasks that are part of a process-definition

We are using Camunda REST-API.
Suppose there is a process-definition with a workflow as follows:
Start Event --> User-Task A --> User-Task B --> User-Task C --> End Event
Say, one of my process-instances is at User-Task B.
Is there any possible way (by calling Camunda REST-API) to know :
Completed tasks for a process-instance (User-Task A in the above case).
All the tasks that are part of the process-definition (User-Task A, User-Task B, User-Task C in the above case).
What I'm aware of:
One can get the BPMN XML file and parse it accordingly to fetch ALL the tasks.
BPMN Model API can help us achieve the above same thing.
One can get current task using Task REST API.
Thanks.
Answering my own question since it's encouraged... Note that the Community edition has been used.
One can get the list of completed tasks through the History REST API (/history/task) provided by Camunda, using processInstanceId and finished (set to true) as query parameters.
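For example, assuming the default /engine-rest context path:

GET /engine-rest/history/task?processInstanceId={processInstanceId}&finished=true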
History REST API offers many functionalities which can be explored further.
However, listing all the tasks is only possible using the Model API, and they will be unordered. It's not wise to order tasks, since hard-coding the order itself defeats the purpose of BPM.
Algorithms like depth-first/breadth-first search would not help, since the adjacency matrix would need an ordering of tasks with respect to its rows/columns; nor would it be wise to rely on topological sorting, since the graphs can be cyclic.
Hope this helps somebody.

WSO2 ESB: Store state between sequence invocations

I was wondering about the proper way to store state between sequence invocations in WSO2 ESB. In other words, if I have a scheduled task that invokes sequence S, at the end of iteration 0 I want to store some String variable (let's call it ID), and then I want to read this ID at the start (or in the middle) of iteration 1, and so on.
To be more precise, I want to get a list of new SMS messages from an existing service, Twilio to be exact. However, Twilio only lets me get messages for selected days, i.e. there's no way for me to say "give me only the new messages" (newer than a certain message ID, or since I last checked). Therefore, I'd like to create a scheduled task that will query Twilio and pass only the new messages via a REST call to my service. In order to do this, my sequence needs to query Twilio, go through the returned list of messages, and discard the messages that were already reported in the previous invocation. To do this, I need to store some state between task/sequence invocations, i.e. at the end of the sequence I need to store the ID of the newest message in the current batch. This ID can then be used in the subsequent invocation to determine which messages were already reported.
I could use the DBLookup and DB Report mediators, but it seems like overkill (using a database to store a single string) and not very performance-friendly. On the other hand, as far as I can see, Class mediators are instantiated as singletons, therefore I could create a custom Class mediator that would manage this state and filter the list of messages to be sent to my service. I am quite sure that this will work, but I was wondering if this is the way to go, or whether there might be a more elegant solution that I missed.
We can think of 3 options here.
Using DBLookup/Report as you've suggested
Using the Carbon registry to store the values (this again uses DBs in the back end)
Using a Custom mediator to hold the state and read/write it from/to properties
Out of these three, obviously the third one will deliver the best performance since everything will be in-memory. It's also quite simple to implement; some time back I did something similar and wrote a blog post here.
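A minimal sketch of option 3, assuming a Synapse Class mediator (the class and property names are made up; Synapse keeps one instance of a Class mediator per definition, which is what lets the field survive between invocations):

import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// Wired into the sequence with <class name="com.example.LastMessageIdMediator"/>
public class LastMessageIdMediator extends AbstractMediator {

    // Survives between invocations because the mediator instance is reused
    private volatile String lastSeenMessageId;

    public boolean mediate(MessageContext synCtx) {
        // Expose the ID stored during the previous run to the rest of the sequence
        synCtx.setProperty("LAST_SEEN_SMS_ID", lastSeenMessageId);

        // Remember the newest ID of the current batch (set earlier in the sequence)
        Object newest = synCtx.getProperty("NEWEST_SMS_ID");
        if (newest != null) {
            lastSeenMessageId = newest.toString();
        }
        return true;
    }
}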
But on the other hand, the first two options can keep the state even when the server crashes, if it's a concern for your use case.
Since ESB 4.9.0 you can persist and read properties from the registry using the Property mediator.
https://docs.wso2.com/display/ESB490/Property+Mediator

REST API - Update of single resource changes multiple others

I'm looking for a way to deal with the following problem:
Imagine you modify a resource, and that subsequently causes updates to other resources.
E.g. you issue a PUT to, say, /api/orders/1234, which by definition changes the state of all other Orders of the given user. There may be UI clients that display the table of Orders, and they should know that not only was a single item in the table updated, but possibly others as well.
Now, is there any standard way to inform clients about such a situation?
So far I can only think of sending back the 205 Reset Content HTTP status code to inform the client that it should refresh its state, as more than a single thing was changed.
There are multiple solutions.
You can define specific resources as non-cacheable, so the client does not cache them at all. (no-store)
You can try giving a max-age of 0, so the client will have to re-validate those resources always. In this case you might have to implement ETags and conditional GETs, but it will be easier on the server than option 1.
Some push method like WebSockets.
If you really want to "notify" potentially multiple clients of a change, then it sounds like you would need option 3.
However, correctly configured caching is normally good enough. For example, you could mark not-yet-executed orders as not cacheable (max-age=0), but as soon as an order is executed, you might mark it to be cached indefinitely, since it cannot change anymore.
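As an illustration of that caching approach, here is a rough JAX-RS sketch (the resource path, the Order stand-in and the version-based ETag are assumptions for illustration, not part of the question):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@Path("/api/orders")
public class OrderResource {

    // Stand-in for a real order; 'executed' marks orders that can no longer change
    static class Order {
        long id;
        int version;       // bumped on every state change
        boolean executed;
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getOrder(@PathParam("id") long id, @Context Request request) {
        Order order = findOrder(id);

        // ETag derived from the version counter
        EntityTag etag = new EntityTag(order.id + "-" + order.version);

        // Conditional GET: 304 Not Modified if the client's cached copy is still current
        Response.ResponseBuilder notModified = request.evaluatePreconditions(etag);
        if (notModified != null) {
            return notModified.build();
        }

        // Open orders must be revalidated on every request; executed orders can be cached
        CacheControl cc = new CacheControl();
        cc.setMaxAge(order.executed ? 86400 : 0);

        return Response.ok("{\"id\": " + order.id + "}")
                       .cacheControl(cc)
                       .tag(etag)
                       .build();
    }

    private Order findOrder(long id) {
        Order o = new Order();   // hypothetical lookup
        o.id = id;
        o.version = 1;
        return o;
    }
}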

Two distinct responses from RESTful web service for a single call

How can I get two or more responses back from a CXF-based RESTful web service for a single call?
For example: for http://localhost:8080/report/annual, I would like to get two JSON responses back. The first one will give me the information about the report details and some other information. The second response will give me the actual report JSON. If these two can be delivered asynchronously, that would be really good.
I'm with #flesk, this really isn't a REST approach, this is more of an async messaging approach.
The first call should return "someinfo" after it starts the "actualReport" processing (in a separate thread/process since "actualReport" is time consuming). Then make a second call for "actualReport" and make sure the timeout value on that call is set high enough to let the report processing complete.
You could get fancy and loop on the second call, returning a 404 until the report is complete.
There are a number of ways to get what you want, just not with one RESTful call.
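A rough JAX-RS sketch of that two-call pattern (the paths, class names and in-memory store are made up for illustration; a real service would persist the job somewhere):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/report")
public class ReportResource {

    // Finished reports, keyed by report id
    private static final Map<String, String> results = new ConcurrentHashMap<>();

    // First call: return the report metadata right away and kick off the slow generation
    @POST
    @Path("/annual")
    @Produces(MediaType.APPLICATION_JSON)
    public Response startAnnualReport() {
        String reportId = UUID.randomUUID().toString();
        CompletableFuture.runAsync(() -> results.put(reportId, buildReportJson()));
        return Response.status(Response.Status.ACCEPTED)
                       .entity("{\"reportId\": \"" + reportId + "\", \"status\": \"RUNNING\"}")
                       .build();
    }

    // Second call: poll until the report is ready; 404 while it is still being generated
    @GET
    @Path("/annual/{reportId}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getAnnualReport(@PathParam("reportId") String reportId) {
        String report = results.get(reportId);
        return report == null
                ? Response.status(Response.Status.NOT_FOUND).build()
                : Response.ok(report).build();
    }

    private String buildReportJson() {
        // placeholder for the time-consuming report generation
        return "{\"totals\": {}}";
    }
}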
You can't. Why would you want to do that anyway, when you can just return something like
{"someInfo": {...}, "actualReport": {...}}

(Java) How can I pass Schema-validated XML documents as parameters between distributed components (e.g. web services or sockets)?

Here is a description of the scenario; I would also appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data, along with the created keys, to a listening SocketServer in one of the listening P2P nodes. This key/data pair is routed to the proper node, which stores the data (associated with the key as an ID) in an XML database.
A second service accepts a query document that is structured based on the same schema, but with optional values that would be used for searching and matching from the previously stored ones. So the second service would pass this query (with the proper keys) to the P2P part, get back the results and pass them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, as the underlying XML database allows, instead of exact matches for values here, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two in proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template: say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
4) Can I or how can I also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving on the services and adding new derived ones), but there was no schema nor validation, and the data is passed all along as basic strings, which gives weak typing and manual parsing that is difficult to update. The reason I want to move this to stronger typing is to be able to introduce changes in the data schema that would propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime that I want to persist for later usage). So I came across SDO, which has both dynamic and static typing. The problem is just that there is not enough community support and there are not enough examples of using this approach, so it seems risky: examples for the Apache Tuscany and EclipseLink implementations are very scarce, and I could not find complete examples that are not 5+ years old (like this: http://www.ibm.com/developerworks/java/library/j-sdo/) and that also tackle the XML use case of SDO (most seem to focus on its relational usage).
This is my first time asking for programming help (here and elsewhere), so please bear with me. I searched a lot on the net, but I could not find anything comprehensive, only pieces here and there that did not add up.
Any comment or hint is really appreciated.
trfndr
EDIT
I forgot one thing: how would the search service get back the results? Since it is opening a client socket connection, there is no way to get back any results synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and putting this contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
This depends on what web service framework you are using. JAXB is much easier to use with JAX-WS, and while JAXB is still easier to use with JAX-RS, SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template: say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using @XmlAnyElement to Build a Generic Message
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I or how can I also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, in EclipseLink we have implemented a dynamic JAXB feature. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
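For example, a minimal sketch of bootstrapping a context straight from one of your schemas with Dynamic JAXB (the schema file, entity name and property names are placeholders for your own schema):

import java.io.InputStream;
import org.eclipse.persistence.dynamic.DynamicEntity;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContext;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContextFactory;

public class DynamicJAXBSketch {
    public static void main(String[] args) throws Exception {
        // Build a JAXBContext at runtime, directly from the XML schema
        InputStream xsd = DynamicJAXBSketch.class.getResourceAsStream("/template1.xsd");
        DynamicJAXBContext context =
                DynamicJAXBContextFactory.createContextFromXSD(xsd, null, null, null);

        // Entity and property names are derived from the schema; these are placeholders
        DynamicEntity record = context.newDynamicEntity("com.example.Record");
        record.set("attr1", "value1");

        context.createMarshaller().marshal(record, System.out);
    }
}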
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. In the following link see the section "Switching Off Data Binding"
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates. SDO would be a good choice here. You can constantly add new types to the HelperContext using XML schemas.
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.
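For instance, a minimal sketch of that marshal step (the root element URI and name below are placeholders for your schema):

// Marshal the DataObject back to XML for the outgoing call
String xml = helperContext.getXMLHelper().save(rootDataObject, "http://example.com/app", "record");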