I did a search on the board and there were some threads related to what I will ask but the other questions were not exactly like my situation.
I want to implement a service (ejbs) and different clients (rest api, webservice, jsf managed beans and maybe some other client) are going to use this service. My question is: in this scenario, where should data validation occur?
It seems reasonable to me to do it inside my business components (EJBs), since I don't want to implement one validator per client, but I don't see people doing it...
best regards,
Oliver
The general advice would be: every component that exposes functionality to the outside should validate the input it receives. It should not hope for the best and assume it will always receive valid input. Additionally, as you said, it keeps the validation in one place.
On the other hand, when you have both sides under your control, it may be reasonable to opt for early validation on the client and document the expected/required valid input data.
You have a similar problem when designing a relational database structure - you can have all sorts of constraints to ensure valid input data or you can check validity in the component storing the data in the database.
And, not to forget, whenever you validate in a deeper layer, all higher layers have to handle the exceptions or error messages when validation fails.
Regarding your specific question: since the same service is used by different clients, it makes sense to validate within the service.
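To make the idea concrete, here is a minimal sketch of service-level validation shared by every client. All names (`RegistrationService`, `validate_registration`, the field rules) are illustrative, not from the question; the point is that REST, SOAP, or JSF front ends all forward to the same validating service.

```python
class ValidationError(Exception):
    """Raised by the service when input does not pass its checks."""

def validate_registration(data: dict) -> None:
    # The service validates regardless of which client sent the data.
    if not data.get("email") or "@" not in data["email"]:
        raise ValidationError("email is missing or malformed")
    if len(data.get("password", "")) < 8:
        raise ValidationError("password must be at least 8 characters")

class RegistrationService:
    """The business component (the EJB in the question): the single
    place where validation lives."""
    def register(self, data: dict) -> str:
        validate_registration(data)  # every caller goes through this
        return f"registered {data['email']}"

# Any client (REST endpoint, web service, JSF bean) just forwards
# its input to the service and maps ValidationError to its own
# error representation:
service = RegistrationService()
print(service.register({"email": "a@example.com", "password": "longenough"}))
```

Each client still decides how to *present* a validation failure (HTTP 400, SOAP fault, JSF message), but the rules themselves are written once.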
Related
In a typical client-server model, what does it mean to subscribe or unsubscribe to a feed? Is there a generic codebase or boilerplate model, or a set of standard procedures or class designs and functionalities involved? This is all C++ based. There's no other info other than that the client is attempting to connect to the server to retrieve data based on some sort of signature. I know it's somewhat vague, but I guess this is really a question of what things to keep in mind and what a typical subscribe or unsubscribe method might entail. Maybe something along the lines of extending a client-server model like http://www.linuxhowtos.org/C_C++/socket.htm.
This is primarily an information architecture question. "Subscribing to feeds" implies that the server offers a lot of information, which may not be uniformly relevant to all clients. Feeds are a mechanism by which clients can select relevant information.
Concretely, you first need to identify the atoms of information that you have. What are the smallest chunks of data? What properties do they have? Can new atoms replace older atoms, and if so, what identifies their relation? Are there other atom relations besides replacement?
Next, there's the mapping of those atoms to particular feeds. What are the possible combinations of atoms needed by a client? How can these combinations be bundled into one or more feeds? Is it possible to map each atom uniquely to a single feed? Or must atoms be shared between feeds? If so, is that rare enough that you can ignore it and just send duplicates?
If a client connects, how do you figure out which atoms need to be shared? Is it just live streaming (atoms are sent only when they're generated on the server), do you have a set of current atoms (sent when a client connects), or do you need some history as well? Is there client caching?
It's clear that you can't have a single off-the-shelf solution when the business side is so diverse.
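The bookkeeping common to most designs can still be sketched. The following is a minimal in-memory model of subscribe/unsubscribe on the server side; the names (`FeedServer`, `publish`) are illustrative, and a real C++ system would layer this over sockets and add authentication, history, and caching.

```python
from collections import defaultdict

class FeedServer:
    def __init__(self):
        # feed name -> set of subscribed client ids
        self._subscribers = defaultdict(set)

    def subscribe(self, client_id: str, feed: str) -> None:
        self._subscribers[feed].add(client_id)

    def unsubscribe(self, client_id: str, feed: str) -> None:
        self._subscribers[feed].discard(client_id)

    def publish(self, feed: str, atom: dict) -> list:
        # Return the client ids that should receive this atom;
        # a real server would write it to each client's connection.
        return sorted(self._subscribers[feed])

server = FeedServer()
server.subscribe("client-1", "prices")
server.subscribe("client-2", "prices")
server.unsubscribe("client-2", "prices")
print(server.publish("prices", {"symbol": "X", "value": 42}))  # ['client-1']
```

The design questions above (atom identity, replacement, history) determine what `publish` actually sends: only new atoms, the current set on connect, or a replayable log.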
I'm looking for a possibility to monitor all messages in a SOA environment with an intermediary, which will be designed to enforce different rule sets over message structure and sequencing (e.g., checking and ensuring that Service A has to be consumed before Service B).
Obviously the first idea that came to mind is how WS-Addressing might help here, but I'm not sure it does, as I don't really see any mechanism there to ensure that a message will get delivered via a given intermediary (as there is in WS-Routing, which is an outdated proprietary protocol by Microsoft).
Or maybe there's a different approach in which the monitor wouldn't be part of the route but would be notified of requests/responses, which might in turn make it harder to actively enforce rules.
I'm looking forward to any suggestions.
You can implement a "service firewall" by intercepting all the calls in each service as part of your basic service host. Alternatively, you can use third-party solutions and route all your service calls through them (they will do the intercepting and then forward the calls to your services).
You can use ESBs to do the routing (and intercepting), or you can use dedicated solutions like IBM's DataPower, the XML firewall from Layer 7, etc.
For all my (technical) services I use messaging and the command processor pattern, which I describe here (without actually naming the pattern). I send a message and the framework finds the corresponding class that implements the interface matching my message. I can create multiple classes that can handle my message, or a single class that handles a multitude of messages. In the article these are classes implementing the IHandleMessages interface.
Either way, as long as I can create multiple classes implementing this interface, and they are all called, I can easily add auditing without adding this logic to my business logic. Just add an additional implementation for every single message, or enhance the framework so it also accepts generic IHandleMessages implementations. That class can then audit every single message and store all of them centrally.
After doing that, you can find out more information about the messages and the flow. For example, if you put into the header information of your WCF/MSMQ message where it came from and perhaps some unique identifier for that single message, you can track the flow over various components.
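Here is a rough sketch of that idea: every registered handler sees each message, so an audit handler can be bolted on without touching the business handlers. The names (`Dispatcher`, `handle`, the header layout) are illustrative and are not the NServiceBus or WCF API; the correlation id in the headers mirrors the tracking suggestion above.

```python
import uuid

audit_log = []

class AuditHandler:
    def handle(self, message: dict) -> None:
        # Store every message centrally, keyed by the correlation id
        # carried in the headers.
        audit_log.append((message["headers"]["correlation-id"],
                          message["body"]))

class PlaceOrderHandler:
    def handle(self, message: dict) -> None:
        # The actual business logic; it knows nothing about auditing.
        message["body"]["status"] = "placed"

class Dispatcher:
    """Stand-in for the messaging framework: finds and invokes every
    handler registered for a message."""
    def __init__(self, handlers):
        self.handlers = handlers

    def send(self, body: dict) -> dict:
        msg = {"headers": {"correlation-id": str(uuid.uuid4())},
               "body": body}
        for h in self.handlers:  # all handlers see the message
            h.handle(msg)
        return msg

bus = Dispatcher([AuditHandler(), PlaceOrderHandler()])
bus.send({"order": 1})
print(len(audit_log))  # 1 -- the message was audited as a side effect
```

Because the audit store keeps the correlation id, messages can later be stitched together into a flow across components.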
NServiceBus also has this functionality for auditing and the team is working on additional tooling for this, called ServiceInsight.
Hope this helps.
I am writing a C++ API which is to be used as a web service. The functions in the API take in images/path_to_images as input parameters, process them, and give a different set of images/paths_to_images as outputs. I was thinking of implementing a REST interface to enable developers to use this API for their projects (independent of whatever language they'd like to work in). But, I understand REST is good only when you have a collection of data that you want to query or manipulate, which is not exactly the case here.
[The collection I have is of different functions that manipulate the supplied data.]
So, is it better for me to implement an RPC interface for this, or can this be done using REST itself?
Like lcfseth, I would also go for REST. REST is indeed resource-based and, in your case, you might consider that there's no resource to deal with. However, that's not exactly true, the image converter in your system is the resource. You POST images to it and it returns new images. So I'd simply create a URL such as:
POST http://example.com/image-converter
You POST images to it and it returns some array with the path to the new images.
Potentially, you could also have:
GET http://example.com/image-converter
which could tell you about the status of the image conversion (assuming it is a time consuming process).
The advantage of doing it like that is that you are re-using HTTP verbs that developers are familiar with, the interface is almost self-documenting (though of course you still need to document the format accepted and returned by the POST call). With RPC, you would have to define new verbs and document them.
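A minimal sketch of that resource-oriented interface, with the converter itself as the resource: the paths, job model, and the trivial `.png` → `.jpg` renaming below are all illustrative stand-ins for the real image processing.

```python
jobs = {}  # job id -> status and result paths

def post_image_converter(image_paths: list) -> dict:
    # POST /image-converter: accept images, start a conversion job.
    job_id = len(jobs) + 1
    jobs[job_id] = {
        "status": "done",  # a real converter would report "running" first
        "results": [p.replace(".png", ".jpg") for p in image_paths],
    }
    return {"job": job_id, "results": jobs[job_id]["results"]}

def get_image_converter(job_id: int) -> dict:
    # GET /image-converter/<id>: report the status of a conversion,
    # useful when the processing is time-consuming.
    return {"job": job_id, "status": jobs[job_id]["status"]}

resp = post_image_converter(["cat.png", "dog.png"])
print(resp["results"])                              # ['cat.jpg', 'dog.jpg']
print(get_image_converter(resp["job"])["status"])   # done
```

The client only ever needs the two familiar verbs plus the documented request/response formats; no custom operations have to be invented or explained.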
REST uses the common operations GET, POST, DELETE, HEAD, and PUT. As you can imagine, this is very data-oriented. However, there is no restriction on the data type and no restriction on the size of the data (none I'm aware of, anyway).
So it's possible to use it in almost every context (including sending binary data). One of the advantages of REST is that web browsers understand it, so your users won't need a dedicated application to send requests.
RPC presents more possibilities and can also be used. You can define custom operations for example.
Not sure you need that much power given what you intend to do.
Personally I would go with REST.
Here's a link you might wanna read:
http://www.sitepen.com/blog/2008/03/25/rest-and-rpc-relationship/
Compared to RPC, REST (with a JSON-style interface) is lightweight and easy for API users to consume. RPC (SOAP/XML) seems complex and heavy.
I guess that what you want is an HTTP+JSON-based API, not the REST API as defined by the REST author:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
I've been struggling with understanding a few points I keep reading regarding RESTful services. I'm hoping someone can help clarify.
1a) There seems to be a general aversion to generated code when talking about RESTful services.
1b) The argument that if you use a WADL to generate a client for a RESTful service, when the service changes - so does your client code.
Why I don't get it: Whether you are referencing a WADL and using generated code or you have manually extracted data from a RESTful response and mapped them to your UI (or whatever you're doing with them) if something changes in the underlying service it seems just as likely that the code will break in both cases. For instance, if the data returned changes from FirstName and LastName to FullName, in both instances you will have to update your code to grab the new field and perhaps handle it differently.
2) The argument that RESTful services don't need a WADL because the return types should be well-known MIME types and you should already know how to handle them.
Why I don't get it: Is the expectation that for every "type" of data a service returns there will be a unique MIME type in existence? If this is the case, does that mean the consumer of the RESTful services is expected to read the RFC to determine the structure of the returned data, how to use each field, etc.?
I've done a lot of reading to try to figure this out for myself so I hope someone can provide concrete examples and real-world scenarios.
REST can be very subtle. I've also done lots of reading on it, and every once in a while I go back and read Chapter 5 of Fielding's dissertation, each time finding more insight. It was as clear as mud the first time (although some things made sense), but it only got better once I tried to apply the principles and used the building blocks.
So, based on my current understanding let's give it a go:
Why do RESTafarians not like code generation?
The short answer: if you make use of hypermedia (plus links), there is no need.
Context: explicitly defining a contract (WADL) between client and server does not reduce coupling enough: if you change the server, the client breaks and you need to regenerate the code. (IMHO, even automating this is just a patch over the underlying coupling issue.)
REST helps you to decouple on different levels. Hypermedia discoverability is one of the good ones to start with. See also the related concept HATEOAS.
We let the client “discover” what can be done from the resource we are operating on instead of defining a contract before. We load the resource, check for “named links” and then follow those links or fill in forms (or links to forms) to update the resource. The server acts as a guide to the client via the options it proposes based on state. (Think business process / workflow / behavior). If we use a contract we need to know this "out of band" information and update the contract on change.
If we use hypermedia with links, there is no need for a separate contract. Everything is included within the hypermedia, so why design a separate document? Even URI templates are out-of-band information, but if kept simple they can work, as with Amazon S3.
Yes, we still need a common ground to stand on when transferring representations (hypermedia), so we define our own media types or use widely accepted ones such as Atom or micro-formats. Thus, with the constraints of basic building blocks (links + forms + data - hypermedia) we reduce coupling by keeping out-of-band information to a minimum.
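A tiny sketch of that discoverability: the client hard-codes no URLs, it only follows named links embedded in each representation. The `_links` layout, link names, and order state below are illustrative, not a standard media type.

```python
# A representation the server might return for an order resource.
order = {
    "state": "unpaid",
    "_links": {
        "self":   "/orders/7",
        "pay":    "/orders/7/payment",
        "cancel": "/orders/7",
    },
}

def available_actions(representation: dict) -> list:
    # The client discovers what it may do next from the links alone;
    # the server shapes this set based on the resource's state.
    return sorted(representation["_links"])

def follow(representation: dict, rel: str) -> str:
    if rel not in representation["_links"]:
        raise KeyError(f"server does not offer '{rel}' in this state")
    return representation["_links"][rel]

print(available_actions(order))  # ['cancel', 'pay', 'self']
print(follow(order, "pay"))      # /orders/7/payment
```

If the server stops offering "pay" for paid orders, a link-following client simply no longer sees that option; nothing out of band has to change.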
At first it seems that going for hypermedia does not change the impact of change :) But there are subtle differences. For one, if I have a WADL, I need to update a separate document and deploy/distribute it. Using pure hypermedia there is no such impact, since everything is embedded. (Imagine changes rippling through a complex interweave of systems.) As per your example, keeping FirstName + LastName and adding FullName does not really impact the clients, but removing First + Last and replacing them with FullName does, even in hypermedia.
As a side note: The REST uniform interface (verb constraints - GET, PUT, POST, DELETE + other verbs) decouples implementation from services.
Maybe I'm totally wrong, but another possibility might be a "psychological kick-back" against code generation: WADL makes one think of the WSDL (contract) part of traditional web services (WSDL+SOAP) / RPC, which goes against REST. In REST, state is transferred via hypermedia, not via RPC-style method calls that update state on the server.
Disclaimer: I've not read the referenced article in detail, but it does make some great points.
I have worked on API projects for quite a while.
To answer your first question.
Yes, if the service's return values change (e.g., First Name and Last Name become Full Name), your code might break. You will no longer get the first name and last name.
You have to understand that the WADL is an agreement. If it has to change, then the client needs to be notified. To avoid breaking the client code, we release a new version of the API.
Version 1.0 will keep First Name and Last Name, so your code does not break. We will release version 1.1, which will carry the change to Full Name.
So, in short: the WADL is there to stay. As long as you use that version of the API, your code will not break. If you want to get the full name, you have to move to the new version. With the many code-generation plugins on the market, generating the code should not be an issue.
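A quick sketch of the versioning idea: v1.0 keeps the old fields so existing clients continue to work, while v1.1 introduces the new shape. The function names, field names, and user record are all illustrative.

```python
def get_user_v1_0(user: dict) -> dict:
    # Served under the old contract, e.g. /api/v1.0/users/<id>:
    # the original fields stay exactly as generated clients expect.
    return {"FirstName": user["first"], "LastName": user["last"]}

def get_user_v1_1(user: dict) -> dict:
    # New clients opt in to the new shape by calling the new version,
    # e.g. /api/v1.1/users/<id>.
    return {"FullName": f"{user['first']} {user['last']}"}

user = {"first": "John", "last": "Doe"}
print(get_user_v1_0(user))  # old clients keep working unchanged
print(get_user_v1_1(user))
```

Both versions can be served side by side until the old one is formally retired, which is what makes the published WADL a stable agreement.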
To answer your next question, about why not to use a WADL and how you get to know the MIME types:
The WADL is for code generation and serves as a contract. With it, you can use JAXB or any mapping framework to convert the JSON string into generated bean objects.
Without a WADL, you don't need to inspect every element to determine its type. You can easily do this:
var obj = jQuery.parseJSON('{"name":"John"}');
alert( obj.name === "John" );
Let me know, If you have any questions.
I have a web application with a Presentation layer, Business layer, and Data Access layer. I am getting data through a web service that is connected to my Data Access layer, meaning it is a type of remoting I am using. Which exceptions must I handle in this scenario, in my DAL and Business layer?
Please guide me.
I view a web service as just another form of Presentation layer. It should be using the same business layer components as your web UIs, wherever possible.
Even in fairly basic REST style services, I try to always incorporate a basic Response wrapper around the requested data - this ensures that in the event of a failure, I can still return a response with an Error flag set, and hopefully some form of descriptive message.
I always try to ensure I'm not passing exception data from lower layers (eg DAL) as this can be a security issue. That exception data should generally be logged, however.
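A small sketch of that wrapper idea: the service boundary catches everything, logs the full detail server-side, and returns a flagged, sanitized response. All names (`wrap`, `load_account`, the message text) are illustrative.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("service")

def wrap(handler, *args) -> dict:
    """Response wrapper used at the web service boundary."""
    try:
        return {"error": False, "data": handler(*args)}
    except Exception as exc:
        # Full exception detail stays on the server (logged), so lower
        # layers such as the DAL never leak internals to the client.
        log.error("request failed: %s", exc)
        return {"error": True,
                "message": "The request could not be processed."}

def load_account(account_id: int) -> dict:
    # Stand-in for a business/DAL call that may raise.
    if account_id < 0:
        raise ValueError(f"no such account: {account_id}")  # internal detail
    return {"id": account_id}

print(wrap(load_account, 7))   # {'error': False, 'data': {'id': 7}}
print(wrap(load_account, -1))  # sanitized failure; exception only logged
```

The client always gets a well-formed response with the error flag set, and the descriptive-but-safe message, rather than a raw stack trace.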
Layer architecture is notoriously poor at following the natural flow of your app, so, well, you have some choice there. I like to put this sort of code into the business layer, although I couldn't make a compelling case for it to save my life.
I think you should not handle any errors in the Data Access layer or Business layer. You simply have to throw them up to the next layer, so that in the end you have the error/exception at the Presentation layer. All errors/exceptions should be handled at the Presentation layer; the reasons behind this are:
You may change the Presentation layer in the future, and it will be easier and more comfortable to handle if you know the real error/exception.
You can have a class handling each type of error at the Presentation layer and show a custom message to the user.
Still, this is my opinion and there is no hard-and-fast rule. I also agree with "Niko" that such things can be handled at the Business layer.
The following article may shed more light on architecture (though it is not about error handling):
- http://www.codeproject.com/KB/cs/CLR_SP_Linq_n-tier.aspx