I'm trying to get more acquainted with REST and to understand what a layered system means in the REST architecture. As far as I understand, it means that if the API has a database, the database should live on a different machine or server, and the API calls it when needed. The same goes for business logic: if a call has to pass through some logic, the call is transferred to another server and executed there. This would also help solve performance issues if they exist. Am I right? Please give any additional info.
Well, I wouldn't think of a layered system as "each layer has to reside on a separate server". It is more about the separation of concerns, i.e. every layer should have a single high-level purpose and deal only with that.
I will try to explain better with an example of what is wrong:
@GET
public String myService() {
    // a service method returning presentation markup instead of plain data
    return "<html><body><div>HELLO</div></body></html>";
}
Here you have the service and presentation layer all mixed up. Instead,
the service should just return "HELLO", while the client (which I assume here is a presentation layer) should be able to decide how to present the data.
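For contrast, here is a minimal sketch of the separated version, assuming a JAX-RS style resource (the path and class name are made up for illustration):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")                        // hypothetical resource path
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)    // the service returns only the data
    public String myService() {
        return "HELLO";
    }
}

The client (a web page, a mobile app, a test script) then decides whether to wrap that value in HTML, render it in a native widget, and so on.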
One of the most common architectures is the so-called 3-tier architecture, where
you have data access, business logic and presentation. Services could be added as a separate layer, most commonly between business logic and presentation (so that you can apply the same business logic to different clients, e.g. web and mobile).
I am trying to learn about RESTful services. In this tutorial I am watching, the instructor states the following:
REST services keeps things very defined between what is UI vs what is Services.
In general, what is the author implying here?
The services used within the UI are easy to spot vs. the rest of the UI?
Would the rest of the UI be CSS, HTML, and maybe some data stored in the local application?
Why does there need to be clear distinctions between the UI and services?
Do you know of an existing example of this I could take a look at to better visually understand?
It's probably impossible to explain exactly what a single sentence from a larger extract means without the context within which it's embedded. But I'll have a go anyway. I suspect there was an element of hype involved - there's no guarantee that a Restful API is any more well-designed than a non-restful API, and so there's really no guarantee that it will better enforce the separation between UI logic and business logic which we'd all love to see.
However, Rest's document centrism, its focus on statelessness, and its use of a uniform API do help in creating a clean layer between the UI (webapp or mobile app) and the server.
Other forms of service-oriented architecture, such as RMI or SOAP, tend to be more focussed on providing a means of accessing a remote API as if it were local - in essence hiding the fact that a load of networking is happening underneath. The result is often a very fine-grained, although quite powerful, API which requires complex logic (business logic) embedded in the client application to use properly. These protocols can be quite chatty and network-inefficient because the focus is rarely on the data travelling over the network.
Restful APIs, which are centered around documents, tend to push UI designers in the direction of editing those documents - focussing the UIs on presentation and leaving logic either to the user or the backend service.
The uniform interface of Rest helps focus the API on working on documents - every resource is accessed in the same way, there's little leeway to add a custom response code which can be 'interpreted' in some way by the client. HTTP is a good protocol on which to build a Restful API for this reason - the major verbs are GET, PUT, POST and DELETE.
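To make the uniform interface concrete, here is a rough sketch of a document-style resource exposed through nothing but the standard verbs (JAX-RS style; the /orders path and the in-memory storage are purely illustrative):

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Every order document is read, replaced or removed through the same uniform
// verbs; there is no custom verb or special response code to "interpret".
@Path("/orders/{id}")
public class OrderResource {

    private static final Map<String, String> ORDERS = new ConcurrentHashMap<>();

    @GET
    public Response get(@PathParam("id") String id) {
        String document = ORDERS.get(id);
        return document == null
                ? Response.status(Response.Status.NOT_FOUND).build()
                : Response.ok(document).build();
    }

    @PUT
    public Response put(@PathParam("id") String id, String document) {
        ORDERS.put(id, document);                  // replace the stored representation
        return Response.noContent().build();
    }

    @DELETE
    public Response delete(@PathParam("id") String id) {
        ORDERS.remove(id);
        return Response.noContent().build();
    }
}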
Similarly, the statelessness of Rest pushes the UI to focus on the data it has and how it should be presented, rather than on providing any kind of intermediate translation or caching layer to the user. The UI doesn't have any more information than the documents it has to present to the user - nice and simple.
The real core of your question, I guess, is "why should it be like that"? And the answer to that is that it keeps things simple and flexible. Presentation logic (e.g. what language or timezone or number format does the user care about) should not be mixed up with business logic (how many 'foo' widgets has the user bought in the past, do they really want a 'bar' widget now, because lots of people who bought 'foo' widgets want 'bar' ones too). Those two types of things have very different reasons to change and different types of people who are good at working with the underlying code.
We are planning a REST API for our application and are trying to decide whether we should implement separate controllers for the REST functionality or not.
We could use withFormat{} in our current controllers, but separating the REST functionality into different controllers feels somewhat cleaner. That way we can build our API separately from the current controllers, and we could even move the REST controllers into another application, etc.
Any thoughts on this subject? Any real-world experience with what the best practice would be?
We recently faced the same decision and we decided to go for separate controllers for the REST API.
Advantages of separate controllers include cleaner/clearer controller actions and the possibility to support different versions of the REST API later on.
We also would like to keep the option to host the REST API on separate server instances open. These servers would use exactly the same .war, but with different configuration settings for the feature toggles.
A disadvantage of separate controllers might be the DRYness of the controller code. Although this should be limited, since IMHO you should keep the controllers as thin as possible and extract shared logic to Grails services or helper classes.
I will work with Grails soon, but so far I have little experience with it. In the web apps I have worked on, we always kept web services separated from the controller code. We also separated REST from SOAP; common methods for both lived in the service layer. It did indeed feel cleaner, and we didn't have to insert a lot of ifs in the methods.
I would, for a given resource, use one controller that interfaces with a service layer based on context (the media type received or requested -- SOAP, JSON, XML, etc.) This way, you have a truly uniform resource identifier that can accept and return various media types and the controller won't need to know anything but what method the user wants to perform on what resource and what media type is involved.
For instance, maybe the service layer returns objects that have methods such as 'toXml', 'toSoap', or 'toJson'. Then you can just ask the service layer to do whatever and use a switch statement on the requested media type to either return the requested information, or by default throw a 406 Not Acceptable status code. For unsafe or idempotent transactions, the object may have constructor or factory methods for a given media type and then you just ask the service layer to do whatever with that object.
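As a sketch of that idea (the Widget and WidgetService names, the toJson/toXml methods and the servlet-style controller are assumptions for illustration, not any particular framework's API):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Hypothetical domain object that can render itself in several media types.
class Widget {
    private final String name;
    Widget(String name) { this.name = name; }
    String toJson() { return "{\"name\":\"" + name + "\"}"; }
    String toXml()  { return "<widget><name>" + name + "</name></widget>"; }
}

// Hypothetical service layer the controller delegates to.
class WidgetService {
    Widget find(String id) { return new Widget("widget-" + id); }
}

public class WidgetController {

    private final WidgetService service = new WidgetService();

    // The controller only knows which resource, which method and which media type.
    public void get(String id, HttpServletRequest request, HttpServletResponse response) throws IOException {
        Widget widget = service.find(id);

        switch (String.valueOf(request.getHeader("Accept"))) {
            case "application/json":
                response.setContentType("application/json");
                response.getWriter().write(widget.toJson());
                break;
            case "application/xml":
                response.setContentType("application/xml");
                response.getWriter().write(widget.toXml());
                break;
            default:
                response.sendError(HttpServletResponse.SC_NOT_ACCEPTABLE);  // 406 for unsupported types
        }
    }
}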
I’ve been trying to wrap my head around how to expose my domain objects to the client. Whether I’m using a rich client or I’m using the web, I want to use the MVP and repository patterns.
What I’m trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
Some other constraints are that I don’t want to have to keep loading the complex domain object from the database or passing it all over the wire every time I want to do an operation. Some complex logic might be that certain areas of the screen might be disabled or invisible based on the user’s permissions in combination with the state of the object. Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if it were running on the same machine.
With the web, you have free rein. You don’t have to expose your objects across service boundaries, so you can make them as rich as you would like. I’m trying to create an N-tier architecture that is rich and works when the client calling the model is on a different machine.
You can expose your domain objects like any other object through REST or web services. I think the key is to understand that you will have to expose services that provide business value in a single call, and these do not necessarily map 1:1 to your repositories. So while on the server a single service call may use multiple repositories and perform various aggregations, the things you expose over any kind of web service should be more or less complete results. The operations you expose on the service should not expose individual repositories but rather focus on meaningful operations that provide a given business value.
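As a sketch of that shape (all of the names here - the repositories, OrderService and placeOrder - are made up for illustration):

// Hypothetical types, for illustration only.
class Customer { String id; }
class Product  { String id; double price; }
class Order    { Customer customer; Product product; int quantity; }
class OrderConfirmation { String orderId; double total; }

// Fine-grained repositories stay behind the service boundary.
interface CustomerRepository { Customer findById(String id); }
interface ProductRepository  { Product findById(String id); }
interface OrderRepository    { String save(Order order); }

// The only thing exposed over the web service: a coarse-grained operation that
// internally uses several repositories and returns a complete result in one call.
class OrderService {
    private final CustomerRepository customers;
    private final ProductRepository products;
    private final OrderRepository orders;

    OrderService(CustomerRepository c, ProductRepository p, OrderRepository o) {
        customers = c; products = p; orders = o;
    }

    OrderConfirmation placeOrder(String customerId, String productId, int quantity) {
        Order order = new Order();
        order.customer = customers.findById(customerId);
        order.product  = products.findById(productId);
        order.quantity = quantity;

        OrderConfirmation result = new OrderConfirmation();
        result.orderId = orders.save(order);                 // aggregation happens server-side
        result.total   = order.product.price * quantity;
        return result;
    }
}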
I hope this helps somewhat.
You can use a SOAP formatter for .NET Remoting, but the resulting service will probably be hard to consume as a service, and it will surely be very chatty.
If you want your domain model to be consumed as a service, it should be designed as a service.
As stated in domain driven design, a service is stateless, so it won't expose your objects directly. Your service should expose methods that provides meaningful business operations that will be executed as a single unit.
You should usually consider the model in your client to be in a different bounded context, because its concerns will be a bit different from those of the model on the server.
What I’m trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
A good domain model is going to be highly behavioral and designed around the problem domain (and your discussions with domain experts), I'd thus argue against designing it to be exposed to remote consumers (in the same way that designing it from the database or GUI first is a bad idea).
Instead I'd look at using a style like REST or messaging and decide on the interface you want to expose, and then map to/from the domain. So if you went with REST you'd design your resources and API (URLs, representations, etc.) and then you'd need to fulfil it from the domain model.
If this becomes unnatural then you can always have multiple models; for example, mapping a separate read-only presentation-specific model to the same data source (or one which wraps the complex behavioural domain model) is an approach I've used several times.
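A small sketch of that mapping idea, with invented names: a read-only, presentation-specific model assembled from the behavioural domain object purely for display:

// Hypothetical behavioural domain object (simplified).
class Account {
    private final String id;
    private long balanceInCents;
    Account(String id, long balanceInCents) { this.id = id; this.balanceInCents = balanceInCents; }
    String id() { return id; }
    long balanceInCents() { return balanceInCents; }
    boolean canWithdraw(long amountInCents) { return amountInCents <= balanceInCents; }
}

// Read-only, presentation-specific model exposed to the client; no behaviour,
// just the fields a particular screen or representation needs.
class AccountSummary {
    final String id;
    final String displayBalance;
    AccountSummary(String id, String displayBalance) {
        this.id = id;
        this.displayBalance = displayBalance;
    }

    static AccountSummary from(Account account) {
        return new AccountSummary(account.id(),
                String.format("%.2f", account.balanceInCents() / 100.0));
    }
}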
Some other constraints are that I don’t want to have to keep loading the complex domain object from the database or passing it all over the wire every time I want to do an operation
Look at caching in HTTP and supporting multiple representations for a resource, also look at caching within your data-access solution.
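For the HTTP side, here is a sketch of conditional requests and cache headers on a resource (JAX-RS style; the resource, the ETag scheme and the max-age value are illustrative choices):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@Path("/customers/{id}")                          // hypothetical resource
public class CustomerResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@PathParam("id") String id, @Context Request request) {
        String json = loadCustomerJson(id);       // stand-in for the real data access
        EntityTag etag = new EntityTag(Integer.toHexString(json.hashCode()));

        // If the client already holds this version, answer 304 with no body.
        Response.ResponseBuilder notModified = request.evaluatePreconditions(etag);
        if (notModified != null) {
            return notModified.build();
        }

        CacheControl cacheControl = new CacheControl();
        cacheControl.setMaxAge(60);               // clients and proxies may reuse it for a minute
        return Response.ok(json).tag(etag).cacheControl(cacheControl).build();
    }

    private String loadCustomerJson(String id) {
        return "{\"id\":\"" + id + "\"}";
    }
}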
Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if it were running on the same machine.
You can either represent this as a resource or more likely look at HTTP status codes and the response bodies you'd want to use in those situations.
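For example, a sketch of returning validation failures as a status code plus a structured body (the /users resource and the error format are just one possible choice, not a prescribed one):

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/users")                                   // hypothetical resource
public class UserResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response create(String body) {
        // Imagine the domain layer validated the incoming representation and
        // produced a message; here it's hard-coded to keep the sketch short.
        boolean valid = body != null && body.contains("\"name\"");
        if (!valid) {
            String error = "{\"field\":\"name\",\"message\":\"name is required\"}";
            return Response.status(Response.Status.BAD_REQUEST)   // 400 with a descriptive body
                           .entity(error)
                           .build();
        }
        return Response.status(Response.Status.CREATED).build();  // 201 on success
    }
}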
We are developing a web application. We want to possibly reuse the work we do here for a different application that will use the same database, and use the same business rules for reading and writing to said database.
Which design would be more correct?
1. Having the UI call web services, which would use business objects containing the business logic, which would talk to the data access layer.
2. Having the UI use business objects containing the business logic, which would call web services, which would then talk to the data access layer.
3. Having the UI use business objects containing the business logic, which would talk to the data access layer.
Don't mix logical design with physical design. Logical design operates over layers, physical design over tiers. A web service is not a layer; it is simply a tier.
In logical design there is a standard approach: UI layer -> BL layer -> DAL.
In physical design, all layers can reside within one client-side application connecting to a local database, or they can be distributed over remote tiers. For distributed applications, one more layer is usually added: an application layer, which hides the over-the-wire communication from the BL layer.
I would say the 3rd one. I tend to think of web services as another presentation layer.
Think of it this way: you have a web UI, which calls your business layer code to do things like create a new user (User.Add), find all products that match a given description (Products.FindByDescription), etc.
You can now re-use that same business layer code to build a set of public-facing web services for 3rd parties to make use of. There can be a method which adds a user - that calls your internal User.Add() method - another one to find products, etc.
What you get is a parallel set of presentations/interfaces to the same underlying data and business logic.
Behind the scenes (totally out of the scope of web services or UI layers), the business layer calls a data access layer that takes care of physically querying the database. If you were to change to a different DBMS, you should ideally (and in theory) be able to rebuild the data layer for the new database and have everything simply work.
Your business layer contains the rules like a username has to be 4 to 15 characters long; users are only allowed to search for and load products that are at a store they have access to; etc.
If you decide to change a business rule - like a user is allowed to search for products in any store in their state - then you change it in once place, and don't have to touch the web service or UI to make it work.
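Here is a compact sketch of that layering, with made-up names (User.Add and the 4-to-15-character rule come from the answer above; everything else is illustrative):

// Business layer: owns the rules, knows nothing about UI or web services.
class UserBusinessLayer {
    private final UserDataAccess dataAccess;

    UserBusinessLayer(UserDataAccess dataAccess) { this.dataAccess = dataAccess; }

    void add(String username) {
        if (username == null || username.length() < 4 || username.length() > 15) {
            throw new IllegalArgumentException("username must be 4 to 15 characters");
        }
        dataAccess.insertUser(username);       // the business layer is the only caller of the data layer
    }
}

// Hypothetical data access layer.
interface UserDataAccess { void insertUser(String username); }

// Two parallel presentations of the same underlying logic:
class WebUiController {
    private final UserBusinessLayer users;
    WebUiController(UserBusinessLayer users) { this.users = users; }
    void handleSignUpForm(String username) { users.add(username); }
}

class PublicUserWebService {
    private final UserBusinessLayer users;
    PublicUserWebService(UserBusinessLayer users) { this.users = users; }
    void addUser(String username) { users.add(username); }   // 3rd parties reuse the same rule
}

If the username rule changes, it changes in UserBusinessLayer only; neither presentation needs to be touched.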
From your description, you haven't provided a reason why you would need the use of a web service layer. Assuming your database is reachable by your UI system, i.e. within the same network behind your firewall, a basic business-object layer that your website UI code (server-side, I'm assuming) will employ meets your requirements.
Bring in a web service tier when the distance between your UI system and your data layer starts to cross boundaries where a data access layer or business logic layer would begin to encounter difficulties.
In terms of the design being "correct" or not, it's not really possible to give a 100% answer to the correctness of a design without the full context. What are the requirements (functional and non-functional)? What design goals do you want to fulfill? How important is each goal?
The only goal your question mentions is that you want to reuse the business logic with another application. When I want to reuse the business logic of an application in a standard way, I choose web services. So based solely on your one requirement I would say that option 1 (UI -> Web Service -> Business Layer -> Data Layer) is a good choice.
Logically, web services belong in the UI layer. Think of the "User" as being not only a human but possibly another system, and it becomes clear. Maintaining strict separation of concerns between these logical layers will allow you to easily implement and maintain your application.
Check Out: http://www.icemanind.com/layergen.aspx
The way it should go is, you have your UI layer on top, your data layer on the bottom and your business layer in between the two. Each layer can only communicate with the layer below it. So the UI talks to the business layer only...the business layer talks to the data layer only. Your UI should never talk with the data layer and your data layer should never interact with your UI.
Unless you have a reason to use a web service, then I wouldn't.
Have you heard anything about a service layer? You can use a service layer for your transactions and operations, and using a facade layer helps you isolate and manage access from the UI to the data access layer, either directly or indirectly after passing through the business layer. It depends on your requirements.
We have a website, where transactions are entered in and put through a workflow. We are going to follow the standard BLL(Business Logic Layer), DTO(Data Transfer Object), DAL(Data Access Layer) etc. for a tiered application. We have the need to separate everything out because some transactions will cross multiple applications with different business logic.
We also have a backend processor. It handles our transactions once the workflow has been completed. It works with various third party systems, some of which are unstable, or the interface to them is unstable, and then reports the status of the transaction. Each website will have its own version of the backend processor.
Now the question, with N-Tier, they suggest a new BLL for each application. With the layout of the application above, it can be argued that the backend processor and website is one application acting in unison, or two applications with different business logic. What would be the ideal way to handle this? Have it act like one system, or two?
One thing that I picked up on while learning MVC over the last couple years is the difference between what I call application logic and domain logic. I don't like the term business logic anymore, because it has too much baggage from all the conflicting theories and practices that have used that term too loosely.
Domain logic is the "traditional" business logic: how things are supposed to act, what they require (validation), etc. Application logic is anything that is specific to a given presentation of your domain, i.e. when the user clicks this submit button in your web app, they are directed to this web page over here (note that this has nothing to do with how a WinForms app or a background processor would work). Application logic should live in your application. Domain logic should live in your BLL and lower, and be reusable across the different applications that may use your common "business logic".
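A tiny illustration of the split, with invented names: the domain rule lives in the BLL, the redirect decision lives only in the web app:

// Domain logic: how the thing behaves, regardless of which application uses it.
class Transaction {
    private final double amount;
    Transaction(double amount) {
        if (amount <= 0) {                       // validation belongs to the domain
            throw new IllegalArgumentException("amount must be positive");
        }
        this.amount = amount;
    }
    double amount() { return amount; }
}

// Application logic: specific to this presentation of the domain.
class SubmitTransactionController {
    String onSubmit(double amount) {
        new Transaction(amount);                 // reuse the domain rule
        return "redirect:/transactions/confirmation";   // only the web app cares about this page
    }
}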
Kind of a general answer, but I hope that helps.
You might consider partitioning the functionality to reflect the organization of the stakeholders. Usually, if you have two distinct organizational groups, then development and administration requirements are easier to manage if the functionality is similarly partitioned. And vice versa.
Most of us don't spend that much time writing applications that explore the outer boundaries of hardware and software capabilities.
If you separate your concerns well, then I think you will be able to view them as the same application with a single business logic layer; there is no point writing the same code twice. The trick will be enforcing the separation of concerns between the user interface portions of the website and the business logic in your BLL library.
Performance is going to be an issue as well: you have to ensure that your batch processing doesn't block your website from performing the tasks it needs to perform because of contention for resources. This may be an argument to keep them more separate; however, as they're likely sharing a database anyway (or some other file-based resource), that may be an issue regardless.
I would keep a common business logic library programmed to interfaces and fully separated from your other concerns.
The "Ideal" way to do this depends on the project at hand and the various requirements of the system.
My default design is to have it act as one app. But if there are more heavyweight processes taking place, I like to create a batching process where the parameters of the requested job are stored and acted upon by a separate process.