Please Guide about Exceptions - web-services

I have a web application with a Presentation layer, Business layer, and Data Access layer. I get data through a web service that is connected to my Data Access Layer, i.e. it is a form of remoting. Which exceptions must I handle in this scenario, in my DAL and Business Layer?
Please guide me.

I view a Webservice as just another form of Presentation layer. It should be using the same business layer components as your Web UIs, wherever possible.
Even in fairly basic REST style services, I try to always incorporate a basic Response wrapper around the requested data - this ensures that in the event of a failure, I can still return a response with an Error flag set, and hopefully some form of descriptive message.
I always try to ensure I'm not passing exception data from lower layers (eg DAL) as this can be a security issue. That exception data should generally be logged, however.
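A minimal sketch of such a wrapper, in Java (the class and field names are illustrative, not taken from any particular framework):

    // Generic response envelope returned by every service operation.
    public class ServiceResponse<T> {
        private final T data;          // requested payload, null on failure
        private final boolean error;   // error flag the caller can check
        private final String message;  // descriptive, non-sensitive message

        private ServiceResponse(T data, boolean error, String message) {
            this.data = data;
            this.error = error;
            this.message = message;
        }

        public static <T> ServiceResponse<T> ok(T data) {
            return new ServiceResponse<>(data, false, null);
        }

        public static <T> ServiceResponse<T> failure(String message) {
            return new ServiceResponse<>(null, true, message);
        }

        public T getData() { return data; }
        public boolean isError() { return error; }
        public String getMessage() { return message; }
    }

The service method would catch the lower-layer exception, log it, and return something like ServiceResponse.failure("Could not load the requested data") instead of the raw exception text.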

Layer architecture is notoriously poor at following the natural flow of your app, so, well, you have some choice there. I like to put this sort of code into the business layer, although I couldn't make a compelling case for it to save my life.

I think you should not handle any errors in the Data Access Layer or the Business Layer. Simply let them propagate to the next layer, so that in the end the error/exception arrives at the Presentation Layer. All errors/exceptions should be handled at the Presentation Layer (see the sketch after this list), for these reasons:
- You may change the Presentation Layer in the future, and it is easier to adapt the handling if you still receive the real error/exception.
- You can have a class at the presentation layer handling each type of error and showing a custom message to the user.
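A rough sketch of that flow in Java (all class and exception names here are invented for illustration):

    class DataAccessException extends RuntimeException {
        DataAccessException(String msg, Throwable cause) { super(msg, cause); }
    }

    class CustomerDao {                  // Data Access Layer
        String findName(long id) {
            try {
                // ... the real database call would go here ...
                throw new java.sql.SQLException("connection refused");  // simulated failure
            } catch (java.sql.SQLException e) {
                // wrap rather than handle: keep the cause, add context
                throw new DataAccessException("Could not load customer " + id, e);
            }
        }
    }

    class CustomerService {              // Business Layer: no try/catch, the exception propagates
        private final CustomerDao dao = new CustomerDao();
        String customerName(long id) { return dao.findName(id); }
    }

    class CustomerController {           // Presentation Layer: the single place that handles it
        private final CustomerService service = new CustomerService();
        String show(long id) {
            try {
                return service.customerName(id);
            } catch (DataAccessException e) {
                // log the real exception here, then show a friendly message to the user
                return "Sorry, this customer could not be loaded right now.";
            }
        }
    }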
Still, this is just my opinion and there is no hard and fast rule. I also agree with "Niko" about handling such things at the Business Layer.
The following article may shed more light on the architecture (though it is not about error handling):
- http://www.codeproject.com/KB/cs/CLR_SP_Linq_n-tier.aspx

Related

ReST philosophy - how to handle services and side effects

I've been diving into ReST lately, and a few things still bug me:
1) Since there are only resources and no services to call, how can I provide operations to the client that only do stuff and don't change any data?
For example, in my application it is possible to trigger a service that connects to a remote server and executes a shell script. I don't see how this scenario would map to a resource.
2) Another thing I'm not sure about is side effects: Let's say I have a resource that can be in certain states. When transitioning into another state, a lot of things might happen (e-mails might be sent). The transition is triggered by the client. Should I handle this transition merely by letting the resource be updated via PUT? This feels a bit odd.
For the client this means that updating an attribute of this resource might only change the attribute, or it might also do a lot of other things. So PUT =/= PUT, kind of.
And implementation-wise, I have to check what exactly the PUT request changed, and trigger the side effects accordingly. So there would be a lot of checks like if(old_attribute != new_attribute) {side_effects}.
Is this how it's supposed to be?
BR,
Philipp
Since there are only resources and no services to call, how can I provide operations to the client that only do stuff and don't change any data?
HTTP is a document transport application. Send documents (ie: messages) that trigger the behaviors that you want.
In other words, you can think about the message you are sending as a description of a task, or as an entry being added to a task queue. "I'm creating a task resource that describes some work I want done."
Jim Webber covers this pretty well.
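To make that concrete, here is a hedged JAX-RS sketch (the path, class, and representation are assumptions for illustration) in which "run a shell script" becomes "create a task resource":

    import java.util.UUID;
    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    // The client POSTs a document describing the work it wants done;
    // the server answers with a task resource the client can poll for progress.
    @Path("/script-runs")
    public class ScriptRunResource {

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        public Response submit(String requestBody) {
            String taskId = UUID.randomUUID().toString();
            // ... enqueue the script execution asynchronously here ...
            // 202 Accepted plus a URI the client can GET to watch the task's state
            return Response.accepted()
                    .location(UriBuilder.fromPath("/script-runs/{id}").build(taskId))
                    .build();
        }

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Response status(@PathParam("id") String id) {
            // look up the task and report its state (queued, running, finished, failed)
            return Response.ok("{\"id\":\"" + id + "\",\"state\":\"running\"}").build();
        }
    }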
Another thing I'm not sure about is side effects: Let's say I have a resource that can be in certain states. When transitioning into another state, a lot of things might happen (e-mails might be sent). The transition is triggered by the client. Should I handle this transition merely by letting the resource be updated via PUT?
Maybe, but that's not your only choice -- you could handle the transition by having the client put some other resource (ie, a message describing the change to be made). That affords having a number of messages (commands) that describe very specific modifications to the domain entity.
In other words, you can work around PUT =/= PUT by putting more specific things.
(In HTTP, the semantics of PUT are effectively create or replace. Which is great for dumb documents, or CRUD, but need a bit of design help when applied to an entity with its own agency.)
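As a hedged illustration of "putting more specific things" (all names invented), the state transition can be modeled as its own sub-resource, so the server never has to diff attributes to decide whether to send e-mails:

    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    // Hypothetical command-style resource: the transition itself is the message.
    @Path("/orders/{id}/cancellation")
    public class OrderCancellationResource {

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        public Response cancel(@PathParam("id") long orderId, String reasonDocument) {
            // 1. move the order into its cancelled state
            // 2. trigger the side effects (e-mails, notifications) explicitly here
            return Response.ok().build();
        }
    }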
And implementation wise, I have to check what exacty the PUT request changed, and according to that trigger the side effects.
Is this how it's supposed to be?
Sort of. Review Udi Dahan's talk on reliable messaging; it's not REST specific, but it may help clarify the separation of responsibilities here.

Where should I validate data in Java EE?

I did a search on the board and there were some threads related to what I will ask but the other questions were not exactly like my situation.
I want to implement a service (ejbs) and different clients (rest api, webservice, jsf managed beans and maybe some other client) are going to use this service. My question is: in this scenario, where should data validation occur?
It seems reasonable to me to do it inside my business components (EJBs) - since I don't want to implement one validator per client - but I don't see people doing it...
best regards,
Oliver
The general advice would be: every component that exposes functionality to the outside should validate the input it receives. It should not hope for the best that it will always receive valid input. Additionally, as you said, it keeps the validation in one place.
On the other hand, when you have both sides under your control, it may be a reasonable decision to validate early on the client and document the expected/required valid input data.
You have a similar problem when designing a relational database structure - you can have all sorts of constraints to ensure valid input data or you can check validity in the component storing the data in the database.
And, not to forget, whenever you validate in a deeper layer, all higher layers have to handle the exceptions or error messages when validation fails.
Regarding your specific question: since the same service is used by different clients, that argues for validating within the service.
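One common way to keep that validation inside the service layer is Bean Validation; the sketch below is only an illustration (the DTO, constraints, and service name are invented, and it assumes a container with method validation enabled):

    import javax.ejb.Stateless;
    import javax.validation.Valid;
    import javax.validation.constraints.NotNull;
    import javax.validation.constraints.Size;

    // Illustrative DTO: the constraints live in one place,
    // no matter which client (REST, SOAP, JSF) sends the data.
    class CustomerDto {
        @NotNull
        @Size(min = 1, max = 100)
        private String name;

        @NotNull
        @Size(min = 3, max = 254)
        private String email;

        // getters and setters omitted for brevity
    }

    @Stateless
    public class CustomerService {

        // With method validation enabled, the container rejects invalid input
        // before the body runs, throwing a ConstraintViolationException that
        // each client adapter can translate into its own error format.
        public void register(@Valid CustomerDto customer) {
            // ... business logic; the DTO can be assumed valid here ...
        }
    }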

Can you help clarify some points regarding RESTful services and Code Generation?

I've been struggling with understanding a few points I keep reading regarding RESTful services. I'm hoping someone can help clarify.
1a) There seems to be a general aversion to generated code when talking about RESTful services.
1b) The argument that if you use a WADL to generate a client for a RESTful service, when the service changes - so does your client code.
Why I don't get it: Whether you are referencing a WADL and using generated code or you have manually extracted data from a RESTful response and mapped them to your UI (or whatever you're doing with them) if something changes in the underlying service it seems just as likely that the code will break in both cases. For instance, if the data returned changes from FirstName and LastName to FullName, in both instances you will have to update your code to grab the new field and perhaps handle it differently.
2) The argument that RESTful services don't need a WADL because the return types should be well-known MIME types and you should already know how to handle them.
Why I don't get it: Is the expectation that for every "type" of data a service returns there will be a unique MIME type in existence? If this is the case, does that mean the consumer of the RESTful services is expected to read the RFC to determine the structure of the returned data, how to use each field, etc.?
I've done a lot of reading to try to figure this out for myself so I hope someone can provide concrete examples and real-world scenarios.
REST can be very subtle. I've also done lots of reading on it, and every once in a while I went back and read Chapter 5 of Fielding's dissertation, each time finding more insight. It was as clear as mud the first time (although some things made sense), but it only got better once I tried to apply the principles and used the building blocks.
So, based on my current understanding let's give it a go:
Why do RESTafarians not like code generation?
The short answer: if you make use of hypermedia (+ links), there is no need.
Context: Explicitly defining a contract (WADL) between client and server does not reduce coupling enough: If you change the server the client breaks and you need to regenerate the code. (IMHO even automating it is just a patch to the underlying coupling issue).
REST helps you to decouple on different levels. Hypermedia discoverability is one of the good ones to start with. See also the related concept HATEOAS.
We let the client “discover” what can be done from the resource we are operating on instead of defining a contract before. We load the resource, check for “named links” and then follow those links or fill in forms (or links to forms) to update the resource. The server acts as a guide to the client via the options it proposes based on state. (Think business process / workflow / behavior). If we use a contract we need to know this "out of band" information and update the contract on change.
If we use hypermedia with links there is no need to have “separate contract”. Everything is included within the hypermedia – why design a separate document? Even URI templates are out of band information but if kept simple can work like Amazon S3.
Yes, we still need a common ground to stand on when transferring representations (hypermedia), so we define our own media types or use widely accepted ones such as Atom or micro-formats. Thus, with the constraints of basic building blocks (links + forms + data - hypermedia) we reduce coupling by keeping out-of-band information to a minimum.
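As a hedged sketch of those "named links" (the resource, rels, and paths are invented), a JAX-RS resource can advertise the next allowed actions as links on the response instead of publishing them in a separate contract:

    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    // Illustrative only: the client discovers what it may do next by following
    // the advertised links, not by reading a WADL agreed on out of band.
    @Path("/orders/{id}")
    public class OrderResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response get(@PathParam("id") long id) {
            String body = "{\"id\":" + id + ",\"state\":\"open\"}";
            return Response.ok(body)
                    .link(UriBuilder.fromPath("/orders/{id}/payment").build(id), "payment")
                    .link(UriBuilder.fromPath("/orders/{id}/cancellation").build(id), "cancel")
                    .build();
        }
    }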
At first it seems that going for hypermedia does not change the impact of change :) But there are subtle differences. For one, if I have a WADL I need to update another document and deploy/distribute it; using pure hypermedia there is no extra impact, since it's embedded. (Imagine changes rippling through a complex interweave of systems.) As per your example, having FirstName + LastName and adding FullName does not really impact the clients, but removing First + Last and replacing them with FullName does, even in hypermedia.
As a side note: The REST uniform interface (verb constraints - GET, PUT, POST, DELETE + other verbs) decouples implementation from services.
Maybe I'm totally wrong but another possibility might be a “psychological kick back” to code generation: WADL makes one think of the WSDL(contract) part in “traditional web services (WSDL+SOAP)” / RPC which goes against REST. In REST state is transferred via hypermedia and not RPC which are method calls to update state on the server.
Disclaimer: I've not read the referenced article in full, but it does make some great points.
I have worked on API projects for quite a while.
To answer your first question.
Yes, if the service's return values change (e.g. First Name and Last Name become Full Name), your code might break. You will no longer get the first name and last name.
You have to understand that a WADL is an agreement. If it has to change, then the client needs to be notified. To avoid breaking the client code, we release a new version of the API.
Version 1.0 will keep First Name and Last Name, so your code does not break; version 1.1 will introduce the change to Full Name.
So, in short, the WADL is there to stay. As long as you use that version of the API, your code will not break. If you want the full name, you have to move to the newer version. With the many code-generation plugins on the market, regenerating the code should not be an issue.
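A hedged illustration of that versioning idea (the URI scheme, resource classes, and field names are assumptions, not from this answer):

    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;

    // Old clients keep calling v1.0 and still get first/last name;
    // new clients opt in to v1.1 and its full-name representation.
    @Path("/v1.0/customers/{id}")
    class CustomerV10Resource {
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String get(@PathParam("id") long id) {
            return "{\"firstName\":\"John\",\"lastName\":\"Doe\"}";
        }
    }

    @Path("/v1.1/customers/{id}")
    class CustomerV11Resource {
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String get(@PathParam("id") long id) {
            return "{\"fullName\":\"John Doe\"}";
        }
    }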
To answer your next question, about doing without a WADL and how you get to know the MIME types:
A WADL is for code generation and serves as a contract. With it you can use JAXB or any mapping framework to convert the JSON string into generated bean objects.
Without a WADL, you don't need to inspect every element to determine its type. You can simply do this:
    var obj = jQuery.parseJSON('{"name":"John"}');
    alert(obj.name === "John");
Let me know if you have any questions.

In which layer shall i18n/multi-lingual be handled?

The project that we worked on consists of 3 tiers: the presentation tier, the business logic tier and data tier, I will call them here the front, mid and back.
The front is written in PHP and it communicates with the mid via web services (XML-RPC, SOAP, etc.). Users can also write their own clients to talk to the mid. The mid is developed in Java; it performs business logic and provides data to the front, and it may also throw exceptions to the front.
The question I am having is: if I want to have multi-lingual support in the future, where should I implement i18n? It makes sense for it to be at the front, because of all the text it has, but what about exceptions and other messages coming from the mid?
If a user develops their own client and the mid has multi-lingual support, the messages coming from it (like the exceptions mentioned above) can therefore be in their selected language. That's the advantage I'm seeing. I just don't like the idea of having two layers with i18n code, and of having to handle i18n while I am handling an exception.
It depends a lot on your application.
If you think about UI localization, the presentation tier is definitely affected.
I would say that the middle tier should not generate any messages.
Exceptions are intended for developers, not for users. So in the presentation capture the exception, and present it to the user in a localized way saying something like "Fatal error 12313 occurred, please send this report to ..."
(maybe even nicer, you don't show the exception text at all, offer a "Send a crash report" button, with a "Show report" button for the user to see that you are not sending any private data).
But if you think about things beyond the UI, then the other tiers might also be affected.
The business logic might be affected (for instance, tax systems differ from country to country). And that is independent of the UI (Canada or Australia have a different tax system than the US, even if the UI is still in English).
So you might want to design this layer in a very modular way.
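As a hedged sketch of that modularity (the interface, classes, and tax rates are all made up for illustration), country-specific rules can sit behind one interface so only the implementations vary per locale:

    import java.math.BigDecimal;
    import java.util.Locale;
    import java.util.Map;

    // Each country plugs in its own rules without touching the rest of the
    // business layer, independently of which language the UI is displayed in.
    interface TaxPolicy {
        BigDecimal taxFor(BigDecimal netAmount);
    }

    class UsSalesTax implements TaxPolicy {
        public BigDecimal taxFor(BigDecimal net) { return net.multiply(new BigDecimal("0.07")); } // placeholder rate
    }

    class CanadaGst implements TaxPolicy {
        public BigDecimal taxFor(BigDecimal net) { return net.multiply(new BigDecimal("0.05")); } // placeholder rate
    }

    class TaxPolicies {
        private static final Map<String, TaxPolicy> BY_COUNTRY = Map.of(
                "US", new UsSalesTax(),
                "CA", new CanadaGst());

        static TaxPolicy forCountry(Locale locale) {
            // fall back to a default policy for countries not configured yet
            return BY_COUNTRY.getOrDefault(locale.getCountry(), new UsSalesTax());
        }
    }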
The content of the database might also be affected. Imagine you have products that are not available (or are banned) in certain countries. So you might need extra fields (or tables) to carry that info.
So in the end the answer is "you have to think about i18n at every level!", and you should keep asking yourself "what if?"
I would ask you a question: would the i18n data be handled in the back layer (data tier)?
If you say yes, then you've got it; but if you say no, then I would put it in the mid layer (business tier), because medium and larger projects usually have to deal with i18n there (exceptions, currencies, message formats, time zones, charsets, etc.).
I would put it in the front layer (presentation tier) for smaller projects.
Regards.
If you want to be completely internationalized, exceptions and other messages from the middle-tier should not include text. You should specify a code that the client must look up in a table to understand.
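A small sketch of that idea in Java (the exception type, message codes, and bundle names are invented for illustration):

    import java.util.Locale;
    import java.util.ResourceBundle;

    // Middle tier: the exception carries a language-neutral code, no user-facing text.
    class BusinessException extends RuntimeException {
        private final String code;
        BusinessException(String code) { super(code); this.code = code; }
        String getCode() { return code; }
    }

    // Presentation tier (or any client): maps the code to text in the user's locale,
    // e.g. from errors_en.properties / errors_de.properties resource bundles.
    class ErrorMessages {
        static String forUser(BusinessException e, Locale locale) {
            ResourceBundle bundle = ResourceBundle.getBundle("errors", locale);
            return bundle.containsKey(e.getCode())
                    ? bundle.getString(e.getCode())
                    : bundle.getString("error.unknown");
        }
    }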

Ajax requests, through MVC Framework (e.g. ColdBox) or not?

Do you fire ajax requests through the MVC framework of choice, or directly to the CFC?
I'm leaning towards bypassing the MVC, since I need no 'View' from the ajax request.
What are the pro's of routing ajax calls through MVC framework, like Coldbox?
update: found this page http://ortus.svnrepository.com/coldbox/trac.cgi/wiki/cbAjaxHints but I am still trying to wrap my mind around what benefits it brings over the complexity it introduces...
Henry, I make my Ajax requests to proxy objects of my model. Typically, I am outside of a 'framework' when doing so. That being said, it may be (very) necessary to utilize your framework, such as working within a set security model.
I can't really see any benefit of bypassing the MVC framework - in combination, those three elements are your application.
Your ajax elements are really part of the view. As Luca says, the view outputs the results of the model and controller.
Look at it this way - if you made an iPhone-friendly web interface (that is, a new View), would you bypass the model and controller?
Luis Majano, creator of ColdBox said:
These are the two schools of ajax interaction, Henry.
I prefer the proxy approach because it adds the following:
- Debugging
- Tracing in the debugger
- AOP interception points
- Security
- Setting availability
The proxy will relay to the event model, so I can use local interception points, local AOP, plugins, etc. In other words, it can be a highly monitored call instead of a simple service cfc call, which you can still do.
I, for one, love to have my execution profiler running (part of the coldbox debugger), so I can see when ajax requests come in and when they come out. I can see the data requested and the data sent back. I don't have to look in log files, or try to imagine results or problems. It really helps out in debugging.
However, it would be a developer choice in which way you decide to go. My personal preference is to always use my proxy to event delegation because it gives me much more flexibility, debugging and peace of mind.
The purpose of the "view" in MVC frameworks is to show the data after the "model" and "controller" have generated it. If you don't need the "view", then what's the point of using such a design pattern?
I agree with Luca. It also bypasses any sanitization and filtering logic you have in your MVC stack. It basically negates any query processing that you may or may not have in place.
Yeah, I wouldn't bypass your framework. Figure out what's causing you grief and hunt down the offending pieces: add logic to exclude common components such as headers or footers, and look for methods injecting whitespace that, while fine for HTML, is annoying or downright problematic when parsing JSON.
Adding output="false", especially in your application.cfc and its methods, would be the first thing I cleaned up.
I am a strong believer in NEVER accessing the CFCs directly. I find it creates long-term problems when a major refactor wants to consolidate or eliminate components; the direct accesses potentially make this harder than it should be, especially if a third party is hitting your ajax from another domain (e.g. Flash Remoting).
+1 to Steve's answer.