Besides the syntax, what is the core difference between a Data Mapper and a PayloadFactory? Both can convert/transform data from one format to another.
I have used the Data Mapper only a few times (you stick with what you know). In my opinion, both mediators provide mostly the same functionality (as does the XSLT mediator), but the underlying technology, and especially the development method, is radically different.
The Data Mapper provides a graphical way of transforming messages. It uses existing output and input messages to seed the transformation, so it is strongest when you have the output of service A and the input of service B and just need to map the data from A to B.
The PayloadFactory is able to quickly build messages. I use it mostly to create requests where only a few fields need to be mapped from the original request to the new one.
XSLT is a versatile and powerful way of transforming messages, but it requires some experience. A lot of third-party tooling is available to assist with the transformation.
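For a feel of what the XSLT mediator does under the hood, here is a minimal sketch in plain Java using the standard JAXP API (the stylesheet name and sample message are made up; inside the ESB you would point the mediator at the stylesheet instead):

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class AtoB {
    public static void main(String[] args) throws Exception {
        // Load the stylesheet that maps service A's output onto service B's input.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("a-to-b.xslt"));   // hypothetical stylesheet

        String fromA = "<order><customerName>Jane Doe</customerName></order>";
        StringWriter toB = new StringWriter();
        t.transform(new StreamSource(new StringReader(fromA)), new StreamResult(toB));
        System.out.println(toB);    // the message in service B's format
    }
}
```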
I would like to use Kafka as the kernel of my message-oriented middleware.
It means that, in addition to the transport of messages provided by Kafka, I would need payload transformation/enrichment between the payload sent by the producer and the payload received by the consumer. Indeed, in a lot of use cases, the format used by the producers will not be the most appropriate format for the consumers.
An obvious example would be a mainframe application producing COBOL copybooks as the producer and a Node.js application optimized for a JSON-based format as the consumer! Or, more commonly, two applications using XML syntax but with different schemas.
In (traditional) message-oriented middleware, such payload transformations are usually performed with XSLT, a dedicated and performant language for transforming XML documents into other XML documents. And Saxon is one of the leading XSLT processors on the market.
Now, I am looking for advice/examples on how to integrate Saxon with Kafka... any hints/remarks/directions, even questions, would be appreciated!
Many thanks in advance!
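Since Saxon is just a Java library, one straightforward pattern is a small consume-transform-produce bridge sitting between two topics. Below is a minimal sketch using Saxon's s9api together with the plain Kafka clients; the topic names, broker address, and stylesheet file are placeholders:

```java
import net.sf.saxon.s9api.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import javax.xml.transform.stream.StreamSource;
import java.io.File;
import java.io.StringReader;
import java.io.StringWriter;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SaxonKafkaBridge {
    public static void main(String[] args) throws SaxonApiException {
        // Compile the stylesheet once up front; an XsltExecutable is thread-safe and reusable.
        Processor proc = new Processor(false);
        XsltExecutable xslt = proc.newXsltCompiler()
                .compile(new StreamSource(new File("producer-to-consumer.xslt"))); // hypothetical

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "xslt-bridge");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("orders.xml.raw"));   // assumed source topic
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    // Run the XSLT transformation on each message value.
                    StringWriter out = new StringWriter();
                    xslt.load30().transform(
                            new StreamSource(new StringReader(rec.value())),
                            proc.newSerializer(out));
                    // Forward the transformed payload, keeping the original key.
                    producer.send(new ProducerRecord<>("orders.xml.transformed", rec.key(), out.toString()));
                }
            }
        }
    }
}
```

Once this basic pattern works, the same idea ports naturally to Kafka Streams (a mapValues call around the transformer) or to a Kafka Connect transformation.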
The DBLookup construct is implemented as a mediator in WSO2. Are there any reasons why this wasn't implemented as a custom endpoint instead?
One way to think about endpoints is to consider them the final destination for your data, while mediators are intermediate stops where messages are modified and/or enriched.
The DBLookup mediator, in particular, was primarily thought of as a way to enrich a given message with data retrieved from a database (hence its name).
In theory, one could write a custom endpoint to send received messages directly to a database. However, WSO2 has its DSS product, which covers this kind of scenario and is much more flexible.
I am fairly new to the subject and doing some research.
I have an ESB (using WSO2 ESB) and want to extract master data from the passing messages (such as Customers, Orders, etc.) and store it in a database as reference data. The source data is XML coming from web services.
So there needs to be a component that can maintain the master data: insert new objects, delete old ones, and update changed ones (it would also be nice to have data events so the ESB can route data accordingly). Basically, the logic will be similar for any entity type, and it might be a good idea to auto-generate it for all new entity types...
Options as I see them now:
Use Smooks with either SQLExecutor or Hibernate for persistence, with all matching logic written either in the Smooks config or in DAO annotations.
Use an open-source ETL tool (like Talend, Kettle, Clover, etc.), so the data is passed to the ETL and all transformation logic is defined there. This could also accommodate future scenarios as they appear, or it could be overkill...
I would appreciate it if you shared your thoughts and pointed me in the right direction.
You are better off leaving the database work to another tool.
If you have a fair amount of database interaction in your message flow, you can expect a serious drop in performance.
However, you do not need an ETL tool for the use case you described. You can simply do it with WSO2 DSS by creating services that insert or update your data in the database.
We have been using this for message logging purposes (into a DB) alongside the ESB and are happy with it. It's best to invoke these as non-blocking, fire-and-forget web service calls in your ESB message flow. Hope this helps.
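Whichever tool ends up owning persistence (DSS, Smooks, or an ETL), the maintain-master-data step described in the question essentially boils down to an upsert per entity. A minimal JDBC sketch of that logic, with an assumed customer table of id and name columns:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CustomerUpsert {
    // Insert-or-update one customer row; the "customer" table and its columns are assumptions.
    public static void upsert(Connection con, String id, String name) throws SQLException {
        try (PreparedStatement upd = con.prepareStatement(
                "UPDATE customer SET name = ? WHERE id = ?")) {
            upd.setString(1, name);
            upd.setString(2, id);
            if (upd.executeUpdate() == 0) {          // no existing row -> insert instead
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO customer (id, name) VALUES (?, ?)")) {
                    ins.setString(1, id);
                    ins.setString(2, name);
                    ins.executeUpdate();
                }
            }
        }
    }
}
```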
My next project involves the creation of a data API within an enterprise framework. The data will be consumed by several applications running on different software platforms. While my colleagues generally favour SOAP, I would like to use a RESTful architecture.
Most of the applications will only need a few objects per call. However, some applications will sometimes need to make several sequential calls, each involving thousands of records. I'm concerned about performance: serialization/deserialization and network usage are where I fear a bottleneck. If each request involves a large delay, all of the enterprise's applications will be sluggish.
Are my fears realistic? Will serialization to a voluminous format like XML or JSON be a problem? Are there alternatives?
In the past, we've had to do these large data transfers using a "flatter"/leaner file format such as CSV for performance. How can I hope to achieve the performance I need using a web service?
While I'd prefer replies specific to REST, I'm interested in hearing how SOAP users might deal with this as well.
One advantage of REST is that you are free to use whatever media type you like. Why not continue to use text/csv? You could also enable HTTP compression to further reduce bandwidth consumption.
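As a rough illustration of what that looks like from the client side (the URL is a placeholder), here is a Java 11 HttpClient sketch requesting CSV with gzip; note that the standard client does not decompress responses automatically, so the body is unwrapped by hand:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class CsvClient {
    public static void main(String[] args) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create("https://api.example.com/orders")) // placeholder
                .header("Accept", "text/csv")
                .header("Accept-Encoding", "gzip")
                .build();

        HttpResponse<InputStream> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofInputStream());

        // java.net.http does not decompress for you; check what the server actually sent.
        InputStream body = "gzip".equalsIgnoreCase(
                resp.headers().firstValue("Content-Encoding").orElse(""))
                ? new GZIPInputStream(resp.body())
                : resp.body();

        try (BufferedReader r = new BufferedReader(new InputStreamReader(body, StandardCharsets.UTF_8))) {
            r.lines().forEach(System.out::println);  // one CSV row per line
        }
    }
}
```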
REST services are great at taking advantage of all kinds of data formats: use whatever format fits your scenario best.
We offer both XML and JSON. The serialization time you mention really can be an issue. On the server side we have JAXB, whose standard Sun implementation is somewhat slow when it comes to marshalling XML. XML has the disadvantage of verbosity, but it is also good for interoperability and has schemas plus explicit versioning.
We compensated for the verbosity in several ways (especially by limiting the result set); a sketch of the first two techniques follows the list:
In case you have a container with items in it, offer paging in your XML response (both page size and page number, e.g. /items?page=0&size=3). The client can then reduce the payload itself by reducing the page size.
Offer collapsing of elements. For instance, several clients are only interested in a single data field of your whole item. Support this with a parameter (e.g. /items?select=name); then only the nested element 'name' is included inline in each item element. This dramatically decreases the size.
In general, give clients the power to limit the result set. They will definitely use it, because it speeds up response time on their side as well :)
Also use compression; it shrinks verbose XML dramatically (in our case the payload got 10 times smaller). On the client side you enable it with the header 'Accept-Encoding: gzip'. If you use Apache, the server configuration is also straightforward.
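Here is the promised sketch of the first two techniques as a JAX-RS resource; the Item DTO and the data-access stub are invented for the example:

```java
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

@XmlRootElement
class Item {                                   // minimal DTO for the sketch
    public String name;
    public String description;
    Item() { }
    Item(String name) { this.name = name; }
}

@Path("/items")
public class ItemsResource {

    @GET
    @Produces(MediaType.APPLICATION_XML)
    public List<Item> list(@QueryParam("page") @DefaultValue("0") int page,
                           @QueryParam("size") @DefaultValue("20") int size,
                           @QueryParam("select") String select) {
        // Paging: the client controls the result-set size via ?page and ?size.
        List<Item> items = findItems(page * size, size);

        // Collapsing: with ?select=name, return items carrying only the 'name' element.
        if ("name".equals(select)) {
            items = items.stream().map(i -> new Item(i.name)).collect(Collectors.toList());
        }
        return items;
    }

    private List<Item> findItems(int offset, int limit) {
        return new ArrayList<>();              // stand-in for the real data access
    }
}
```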
I'd like to offer three guidelines:
One is the observation that there are many SOAP web services out there (especially built with .NET 2.0 "ASMX" technology) that send their data transfer objects serialized in XML. There are, of course, also many RESTful services that send XML or JSON. XML serialization/deserialization is rarely the constraining factor.
One common cause of bottlenecks in web services is an interface that encourages client applications to get data by making thousands of sequential calls (there is a term for it: a chatty interface). This is what you should avoid when you design your web service's interface, regardless of which four-letter acronym you decide to go ahead with (see the sketch after these guidelines).
One thing to remember about REST is that it (partially) stands for a transfer of state, which may be ill-suited to operations where you don't want to transfer the state of a business object from the server to a client application. In those cases, a SOAP web service (as suggested by your colleagues) is more appropriate; or perhaps a combination of SOAP and REST services, where the REST services take care of the operations where state transfer is appropriate and the SOAP services implement the rest (pun unintended :-)) of the operations.
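To make the second guideline concrete, here is a sketch (with made-up service and type names) of the difference between a chatty and a coarse-grained interface; the batch variant turns N round trips into one:

```java
import java.util.List;

// Chatty: the client must call this once per order id -> N network round trips.
interface ChattyOrderService {
    Order getOrder(String id);
}

// Coarse-grained: one round trip fetches the whole working set.
interface BatchOrderService {
    List<Order> getOrders(List<String> ids);
}

class Order {                      // minimal DTO for the sketch
    String id;
    double total;
}
```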
I'm using C#, and I have a Windows Forms app and a web service...
I have a custom object that I want to send to the web service.
Sometimes the object may contain a huge amount of data.
For the best performance, what is the best way to send a custom object to the web service?
Web services are designed to handle custom objects as long as they eventually break down into standard types. As for sending huge amounts of data, there are MTOM and the older DIME. If it's within a LAN and against another .NET client, you might want to look into non-web-service approaches like Remoting or plain HTTP.
See How to: Enable a Web Service to Send and Receive Large Amounts of Data.
If you are using, or plan to use, WCF within the network (as opposed to over the internet), named pipes in WCF are fast and simple. Use primitive types to pass objects: a string of XML (although verbose) or a lightweight binary payload will do.
If it's a wsHttp web service, use a string; I can't think of any other way you could pass a custom object, unless the service knows about it.