Calling a web service from a web service: architecture view - web-services

An interesting situation came up today, as it has in the past.
I have two web services, and the transaction is complete only after both have done their jobs. For this discussion I will call these web services:
RLService
NAVService
I cannot combine the two services, because the second one is owned by a third party. This third-party web service provides a web method that reads an XML file stored on the machine where the web service runs and processes it.
I have two options:
Option 1: Client to Service(s)
1. Client calls my web service, RLService, with XML data.
2. RLService stores the XML data in an XML file and returns the XML file path.
3. Client calls the Navision web service, NAVService, with the XML file path.
4. NAVService returns the result as an XML file path.
5. Client calls my web service, RLService, with that XML file path; RLService returns an object to proceed further.
State diagram: (image not shown)
Option 2: Service to Service
1. Client calls my web service, RLService, with XML data.
2. RLService stores the XML data in an XML file.
3. RLService calls the Navision web service, NAVService, with the XML file path.
4. NAVService returns the result as an XML file path.
5. RLService processes the returned XML and converts it to an object to proceed further.
6. Client receives a Result object.
State diagram: (image not shown)
Here is what the physical architecture looks like: (diagram not shown)
I have used both approaches, but which one is right, and why? I suspect the second solution is the better one, but what do patterns and best practices say?

Q: Which approach is better?
A: The fewer round trips, the better. The more you can do in one call (without making another, separate call), the better.
In other words (all things being equal), your second design looks a LOT better than the first alternative.
IMHO...

Putting this logic in your service decouples the client application from the required workflow and keeps the application/business logic in your business service. In fact, it is better to keep your client as thin as possible, so solution two would be my preferred solution. You can still decide to make it an asynchronous operation; that depends on your use case.
As a result, any change in the logic only impacts your service, not the clients.
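For the service-to-service option, the orchestration inside RLService might look something like the following C# sketch. This is a minimal illustration only; the INavService proxy, the file locations, and the Result/ParseResult details are hypothetical, not taken from the original question.

```csharp
using System.IO;

// Placeholder for whatever the client actually needs back.
public class Result { }

// Hypothetical proxy interface for the third-party Navision service.
public interface INavService
{
    string Process(string xmlFilePath); // returns the path of the result XML
}

public class RLService
{
    private readonly INavService _navService;

    public RLService(INavService navService) => _navService = navService;

    // Single entry point for the client: one round trip instead of three.
    public Result Process(string xmlData)
    {
        // 1. Store the incoming XML where NAVService can read it.
        string requestPath = Path.Combine(@"C:\Shared", Path.GetRandomFileName() + ".xml");
        File.WriteAllText(requestPath, xmlData);

        // 2. Let NAVService process the file; it returns a path to its result.
        string resultPath = _navService.Process(requestPath);

        // 3. Parse the result XML into the object the client actually needs.
        string resultXml = File.ReadAllText(resultPath);
        return ParseResult(resultXml);
    }

    private Result ParseResult(string xml)
    {
        // Mapping from the result XML to Result would go here.
        return new Result();
    }
}
```

The key point is that the client sees a single operation; the file handling and the NAVService call are invisible details that can change without touching any client.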

Related

How to ensure that a web service whose output changes still works?

I would like to ensure that our web service works, but I don't know how to do it, because the web service's data is controlled by a back office and changes multiple times every day.
The data loaded by the web service doesn't come from a database but from JSON files that are dynamically loaded and distributed. I've considered replacing those files to test the behavior, but bad data is a common cause of malfunctions, so I would rather test the data as well, or at least have some way to ensure the data is valid for the currently deployed sources.
I would also welcome book suggestions.
This is a big problem, and it is difficult to find a single solution. Instead, you should split the task into smaller subtasks:
1. Does the web service work at all? Connect to it and perform normal operations. If you are using real data, you cannot verify that it is correct; just check that you get a valid-looking reply. You should also have a known data set on a separate server, call it staging, where you can verify that a new version of the web service produces the correct output.
2. How do you check that the files you get from the back office are valid? It is not practical to test them by hand just before deployment; you mentioned several reasons why this is not possible, so you have to live with it. Because your files are JSON, it should be possible to write a test suite that checks their validity (see the sketch after this list).
3. How do you check that the real JSON files produce the correct output from the web service? This is your original question. You have a set of JSON files: how easy is it to calculate what the web service should respond based on those files? In some cases you would need to write your own web service engine, which is why testers usually tackle the first two steps first.
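As a starting point for the second subtask, here is a minimal C# sketch of such a validity check. The data directory, file pattern, and the required "id" field are hypothetical placeholders; the real invariants would come from whatever the web service actually depends on.

```csharp
using System;
using System.IO;
using System.Text.Json;

public static class DataValidation
{
    // Checks that every deployed JSON file parses and carries the fields
    // the web service depends on. Run against the live data directory.
    public static int Main()
    {
        int failures = 0;
        foreach (string path in Directory.EnumerateFiles(@"/data/feeds", "*.json"))
        {
            try
            {
                using JsonDocument doc = JsonDocument.Parse(File.ReadAllText(path));

                // Hypothetical invariant, assuming each file's root is an object:
                // every record must carry an "id" property.
                if (!doc.RootElement.TryGetProperty("id", out _))
                {
                    Console.WriteLine($"{path}: missing required field 'id'");
                    failures++;
                }
            }
            catch (JsonException ex)
            {
                Console.WriteLine($"{path}: invalid JSON ({ex.Message})");
                failures++;
            }
        }
        return failures; // non-zero exit code fails the CI step
    }
}
```

A check like this can run on every back-office publish, so bad data is caught before the web service ever loads it.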

Best practices for server-side architecture for an XSLT-based client application

I'm considering using Saxon CE for a web application to edit ebooks (metadata and content). It seems like a good match, given that important ebook components (such as content.opf) are natively XML. I understand how to grab XML data from the server, transform it, insert the results into the HTML DOM, and handle events to change what is displayed and how.
Where I am getting stuck is how best to sync changes back up to the server. Is it best practice to use an XML database on the server? Is it reasonable to maintain the XML on the server as text files and overwrite them with a POST, and could/should this be done through a result-document with a remote URI?
I realize this question may seem a bit open-ended, but I've failed to find any examples of client-side XSLT applications that actually allow modifying data on the server.
Actually, I don't think this question is specific to using Saxon-CE on the client. The issues would be exactly the same if you were using XForms, or indeed if the client-side code were written in JavaScript. And I think the answer depends on volumetrics, availability and concurrency requirements, and so on.
If you're doing a serious level of concurrent updates to a shared collection of XML data, then using an XML database is probably a good idea. On the other hand, there are scenarios where this isn't needed, for example where the XML data is part of the user-specific application context, where the XML document received from the client simply needs to be saved somewhere "as is", or perhaps where it just needs to be appended to some kind of XML log file.
I think that in nearly all cases, you'll need a server-side component that responds to HTTP PUT/POST requests and decides what to do with them.
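For the simple "save it as is" case, that server-side component can be very small. Here is a hedged C# sketch using an ASP.NET Core minimal API; the route and the DocumentStore directory are made-up names, and real code would add authentication and XML validation.

```csharp
// Minimal endpoint that accepts an XML document via PUT and writes it to disk.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPut("/documents/{name}", async (string name, HttpRequest request) =>
{
    // Basic sanitation: allow only simple file names, no path segments.
    if (name.Contains("..") || name.Contains('/') || name.Contains('\\'))
        return Results.BadRequest("Invalid document name.");

    Directory.CreateDirectory("DocumentStore");
    var path = Path.Combine("DocumentStore", name + ".xml");

    // Persist the request body "as is"; a stricter variant would parse
    // the XML first and reject malformed documents.
    await using var file = File.Create(path);
    await request.Body.CopyToAsync(file);
    return Results.NoContent();
});

app.Run();
```

The same skeleton extends naturally to the XML-database option: only the body of the handler changes, not the contract seen by the XSLT client.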

Simple ETL: Smooks or ETL product

I am fairly new to the subject and doing some research.
I have an ESB (WSO2 ESB) and want to extract master data from the passing messages (Customers, Orders, etc.) and store it in a database as reference data. The source data is XML coming from web services.
So there needs to be a component that can maintain the master data: insert new objects, delete old ones, and update changed ones (it would also be nice to have data events so the ESB can route data accordingly). Basically, the logic will be similar for any entity type, and it might be a good idea to auto-generate it for all new entity types...
Options as I see them now:
Use Smooks with either SQLExecutor or Hibernate for persistence, with all the matching logic written either in the Smooks config or in DAO annotations.
Use an open-source ETL tool (Talend, Kettle, Clover, etc.), so the data is passed to the ETL tool and all transformation logic is defined there. This could also accommodate future scenarios as they appear, or it could be overkill...
I would appreciate it if you could share your thoughts and point me in the right direction.
You'd be better off leaving the database part to another tool.
If you have a fair amount of database interaction in your message flow, you can expect a serious drop in performance.
However, you do not need an ETL tool for the use case you describe. You can simply do it with WSO2 DSS by creating services that insert or update your data in the database.
We have been using this for message logging (into a database) alongside the ESB and are happy with it. It's best to call these as non-blocking, fire-and-forget web services within your ESB message flow. Hope this helps.
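Purely to illustrate the fire-and-forget idea outside the ESB (inside WSO2 ESB itself this would be mediation configuration, not application code), here is a small C# sketch; the endpoint URL and service name are hypothetical.

```csharp
using System.Net.Http;
using System.Text;

public static class MasterDataForwarder
{
    private static readonly HttpClient Client = new HttpClient();

    // Sends the extracted master-data XML to the persistence service without
    // awaiting the reply, so the main message flow is never blocked on the DB.
    public static void Forward(string masterDataXml)
    {
        var content = new StringContent(masterDataXml, Encoding.UTF8, "application/xml");
        _ = Client.PostAsync("http://dss-host:9763/services/MasterDataService", content);
    }
}
```

The trade-off is the usual one for fire-and-forget: the caller never learns whether the insert succeeded, so the persistence service needs its own error logging.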

Is REST suitable for document-style web services?

RESTful and document/message-style seem to be the two main trends for implementing web services nowadays. By this I mean REST vs. SOAP, and document style vs. RPC style.
My question is how compatible REST is with document-style web services. From my limited knowledge of REST, it uses the HTTP GET/POST/PUT/DELETE verbs to perform CRUD-like operations on remote resources denoted by URLs, which lends itself to a more "chatty", remote-method-like style, a.k.a. RPC style. Document-style web services, on the other hand, emphasize coarse-grained calls: sending a batch-like request document with complex information and expecting a response document back, also with complex information. I cannot see how this can be accomplished nicely with REST without declaring a single "Response" resource and using the POST verb all the time (which would defeat the purpose of REST).
As I am new to both document-style and RESTful web services, please excuse, and kindly point out, any ignorance in the above assumptions. Thanks!
Your understanding of REST is misguided. This is not surprising, nor is it your fault; there is far, far more misinformation about REST floating around on the internet than there is valid information.
REST is far better suited to the coarse-grained, document-style type of distributed interface than it is to a data-oriented CRUD interface. Although there are similarities between CRUD operations and HTTP GET/PUT/POST/DELETE, there are subtle differences that are very significant to the architecture of your application.
I don't think you mean REST over SOAP. It is possible to do REST over SOAP, but to my knowledge nobody does it, and I have never seen an article discussing it.
SOAP is usually used for "Web Services", and REST is usually done directly over HTTP.
REST is really meant to be used with documents, as long as you consider each document a resource.
GET allows you to retrieve the document. Obviously.
POST allows you to create a document. There is no need for your API to require the full content of the document to create it; it is up to you to decide what is required to actually create the document.
PUT allows you to modify the document. Again, there is no need to force the client to send the whole document every time it wants to save; your API may support delta updates sent through PUT requests.
DELETE obviously deletes the document. Again, you can design your API so that a delete does not actually destroy every bit of the document; you can build something similar to a recycle bin.
What is nice about REST when working with documents is that the server's response contains all the information needed to understand it. So if a new resource is created, you should send its location; the same applies if a resource is moved, and so on. All you have to document is the data formats that will be used (XML formats, JSON, etc.).
The standard HTTP methods are there because their behaviour is already defined, which allows clients to discover your API easily as long as they know the URI.
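To make the verb-to-document mapping concrete, here is a hedged C# client sketch against a hypothetical /documents API; the host, route, and payloads are illustrative only, not a real service.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class DocumentApiDemo
{
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("https://example.com/") };

    public static async Task Main()
    {
        // POST creates a document; the Location header in the response tells
        // us where the new resource lives (the self-descriptive response).
        var created = await Client.PostAsync("documents",
            new StringContent("<book><title>Draft</title></book>", Encoding.UTF8, "application/xml"));
        Uri location = created.Headers.Location;

        // GET retrieves the document.
        string body = await Client.GetStringAsync(location);

        // PUT modifies it (here we send a full replacement; the API could
        // equally accept a delta).
        await Client.PutAsync(location,
            new StringContent("<book><title>Final</title></book>", Encoding.UTF8, "application/xml"));

        // DELETE removes it (the server may merely move it to a recycle bin).
        await Client.DeleteAsync(location);
    }
}
```

Note that the client hard-codes only the entry URI; everything else it learns from responses, which is the discoverability point made above.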

Passing a custom object to the web service

I'm using C#, and I have a Windows Forms application and a web service.
I have a custom object that I want to send to the web service.
Sometimes the object may contain a huge amount of data.
For the best performance, what is the best way to send a custom object to the web service?
Web services are designed to handle custom objects as long as they eventually break down into standard types. As for sending huge amounts of data, there are MTOM and the older DIME. If it's within a LAN and against another .NET client, you might want to look into non-web-service approaches like Remoting or plain HTTP.
See How to: Enable a Web Service to Send and Receive Large Amounts of Data.
If you are using, or plan to use, WCF within the network (as opposed to over the internet), named pipes on WCF are fast and simple. Use primitive types to pass objects: an XML string (although verbose) or a lightweight binary payload will do.
If it's a wsHttp web service, use a string; I can't think of any other way you would pass a custom object, unless the service knows about its type.
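As a small illustration of the "pass it as an XML string" suggestion, here is a hedged C# sketch using XmlSerializer; the Customer type is a made-up example standing in for the real custom object.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Customer
{
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var serializer = new XmlSerializer(typeof(Customer));

        // Client side: flatten the custom object into a plain string parameter
        // that any web service binding can carry.
        string payload;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Customer { Name = "Acme", Balance = 42m });
            payload = writer.ToString();
        }

        // Service side: rebuild the object from the string it received.
        using (var reader = new StringReader(payload))
        {
            var roundTripped = (Customer)serializer.Deserialize(reader);
            Console.WriteLine(roundTripped.Name); // "Acme"
        }
    }
}
```

This keeps the service contract down to a single string parameter, at the cost of XML's verbosity; a binary serializer would shrink the payload if both ends are .NET.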