Last week I was talking about 3-tier architecture with one of my seniors. I said that it has a UI tier, a Business Logic tier and a Data Access tier. After I finished, he told me that I was describing a 3-layered architecture, not a 3-tier architecture. When I asked him what the difference was, he assigned me the task of writing up documentation about it. So here I am. So far, I have come to the point that
a 3-tier architecture is one where:
1. The client is on one machine,
2. The application server is hosted on another machine, and
3. The database server is hosted on a third machine,
whereas a 3-layer architecture (UI, BLL and DAL) can work on the same machine.
My question to you: am I correct? What is the difference according to your knowledge? Can anyone please explain?
Your explanation is right: an n-tier architecture is a physical structuring mechanism, while an n-layer architecture is a logical structuring mechanism.
While it is true, for example, that a 3-tier application is (at least) a 3-layer application, a 3-layer application could have only 1 or 2 tiers.
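To make that concrete, here is a minimal Java sketch (all class and method names are made up for illustration) of a 3-layer design deployed as a single tier: three logical layers, one process.

```java
// Three logical layers compiled into one application: 3 layers, 1 tier.
// All names here are illustrative, not taken from any framework.

// Data Access Layer: the only code that knows how data is stored.
class CustomerDao {
    String findName(int id) {
        return "customer-" + id; // stand-in for a real database query
    }
}

// Business Logic Layer: rules and orchestration, no UI or storage details.
class CustomerService {
    private final CustomerDao dao = new CustomerDao();

    String greetingFor(int id) {
        return "Hello, " + dao.findName(id);
    }
}

// Presentation Layer: rendering only; delegates everything downward.
class ConsoleUi {
    public static void main(String[] args) {
        System.out.println(new CustomerService().greetingFor(42));
    }
}
```

Run all three classes in one JVM and you have 3 layers on 1 tier; host the DAO behind a remote service on another machine and the same logical design becomes a 2- or 3-tier deployment.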
You can also look at these articles:
http://davidhayden.com/blog/dave/archive/2005/07/22/2401.aspx
http://en.wikipedia.org/wiki/Multitier_architecture
From Wikipedia:
In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes
Tiers vs layers is both a software- and a hardware-related distinction: a tier implies a physical client-server divide, while a layer is a logical separation. The boundaries for either concept depend on the responsibilities of each conceptual component of the architecture. For the best-known example of layering, see the OSI model.
Layers are conceptual entities, used to separate the functionality of a software system from a logical point of view. When you implement the system, you organize these layers using different deployment methods; in that condition we refer to them not as layers but as tiers.
Related
An SOA is an IT architecture composed of software that has been exposed as "services", i.e. invoked on demand using a standard communication protocol. So, regarding loose coupling and how to use SOA, can someone give a good example?
There are three major types of methods, or approaches, that have emerged for bringing together disparate information and systems in a business. As different service providers and businesses race toward providing solutions to customers and consumers, these approaches help to meet the requirements for coarse-grained, loosely coupled and asynchronous services.
1. The Enterprise Service Bus
The first approach that helps to build and implement an optimal SOA is the enterprise service bus, or ESB. This approach helps to coordinate and arrange the different elements, which take the form of distributed services on a network. It treats the systems as discrete, distributed services that connect to one another through an asynchronous, message-oriented infrastructure. This kind of message-oriented infrastructure makes it possible to have loosely coupled connections between independent services or modules.
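As a hedged sketch of that message-oriented style (this assumes a JMS 2.0 provider and a pre-configured queue, neither of which is named above), note that the sender and receiver below never reference each other, only the queue:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

// Illustrative only: producer and consumer are coupled solely through
// the queue, so either side can be replaced or taken down independently.
public class OrderMessaging {

    // Sender: fire-and-forget; keeps working even if no consumer is up.
    static void sendOrder(ConnectionFactory cf, Queue orders, String orderJson) {
        try (JMSContext ctx = cf.createContext()) {
            ctx.createProducer().send(orders, orderJson);
        }
    }

    // Receiver: picks up the next message whenever it happens to run.
    static String receiveOrder(ConnectionFactory cf, Queue orders) {
        try (JMSContext ctx = cf.createContext()) {
            // Block for up to 5 seconds waiting for a message.
            return ctx.createConsumer(orders).receiveBody(String.class, 5000);
        }
    }
}
```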
2. Business Process Management
Many companies have, for many years now, tried to solve business process problems by implementing the Business Process Management (BPM) approach. This approach treats IT assets and systems as activities or tasks that participate in well-synchronized and well-orchestrated business procedures. BPM tools are mainly used for modeling and designing procedures rather than for constructing processes that can reach integration objectives; this is the main challenge of BPM. BPM solutions on their own are not enough to meet SOA requirements, because they do not include the runtime environment that is needed for loosely coupled modules.
3. Service Oriented Integration
The third and last approach to a proper implementation of SOA is the service-oriented integration approach. This approach uses architectural guiding principles to build an ecosystem of services that businesses can combine dynamically to create higher-level processes that meet ever-changing and evolving requirements. It moves past tightly coupled and brittle modules by drawing a distinction between the consumer and the producer of a service, and thus enforces the loose coupling that is needed to implement SOA properly to meet business requirements. Even this approach by itself, though, isn't sufficient to guarantee long-running interactions between services.
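A minimal sketch of that consumer/producer distinction (the service and class names here are hypothetical): the consumer depends only on a contract, so producers can be swapped or recombined without touching consumer code.

```java
// The contract is the only thing the consumer and producer share.
interface QuoteService {
    double quoteFor(String productCode);
}

// One producer: a local, in-process implementation.
class LocalQuoteService implements QuoteService {
    public double quoteFor(String productCode) {
        return 9.99; // stand-in for real pricing logic
    }
}

// The consumer never names a concrete producer, so replacing the
// implementation with a remote one changes wiring, not this code.
class CheckoutProcess {
    private final QuoteService quotes;

    CheckoutProcess(QuoteService quotes) {
        this.quotes = quotes;
    }

    double totalFor(String productCode, int qty) {
        return quotes.quoteFor(productCode) * qty;
    }
}
```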
I have an application which is packaged as a single EAR file deployed on WebSphere. Inside the package, the code is organized into UI files, business logic files and database-related files. Now, is this a monolithic application or a 3-tier architecture?
What is the difference?
You are comparing the wrong things. A monolithic application should be compared against microservices. In a monolithic application you deploy all the features/API end-points in a single EAR/WAR file, i.e. a single JVM; in microservices they are deployed across multiple JVMs. Note that in a monolithic architecture you can also have multiple REST end-points exposed.
3-tier, 2-tier or N-tier architecture is a different concept. It describes how many subsystems/modules your application is divided into, such as a database layer, a client layer and an application logic layer. Hence, monolithic as well as microservice applications can both be n-tier applications.
Remember that an EAR, and what is in it, is a packaging choice. The same application can be deployed as multiple EARs in multiple Java EE containers on one or more servers; EJB-JARs and WARs are intended to enable exactly that. With Java EE you choose how to distribute your application across containers and nodes based on what makes sense.
Technically, a tiered application is one where the layers can be independently deployed, distributed and accessed. I.e., my business logic can be on 5 servers in 9 EJB containers and accessed by 3 user interfaces that could be desktop, mobile, web, etc., and possibly be parts of different applications.
The more traditional definition of a monolith was an application that wasn't tiered; specifically, one whose parts cannot be composed into other applications at runtime.
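Since EJB containers come up above, here is a hedged Java EE sketch (the interface name and logic are invented for illustration) of what makes business logic a genuine tier: a remote interface that any number of presentation tiers can call over the network.

```java
import javax.ejb.Remote;
import javax.ejb.Stateless;

// ReportService.java: the remote business interface. Callable across
// JVMs and machines, so web, desktop and mobile front ends can all
// share the same business tier.
@Remote
public interface ReportService {
    String monthlyReport(int month);
}

// ReportServiceBean.java: deployed in an EJB container, possibly on a
// different machine from every one of its callers.
@Stateless
public class ReportServiceBean implements ReportService {
    public String monthlyReport(int month) {
        return "report for month " + month; // stand-in for real logic
    }
}
```

A client in another JVM would obtain the proxy via a JNDI lookup (the exact name is deployment- and vendor-specific), which is what lets one business tier serve several presentation tiers.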
I am taking over a project to replace an ancient legacy system from the ground up. Before I came on, the company hired a consultant who put together a basic sketch of the system and pushed SOA heavily. This resulted in a long list of "entity services", with the intention of them being composed into more complex service combinations. For instance, a user wanting committee info would hit the "Committee" service, which then calls the "Person" service to get its members, and the "Meeting" service to get its meetings, and so on.
I understand the flexibility gains in this, but my concerns are about performance. It seems to me that a system built with such a fine level of granularity to its services spends too many resources on translating service messages, and the performance will be unacceptable. It also seems to me that the flexibility gains can still be made using basic reusable objects, although in that case the benefit of a technology-agnostic interface is lost to gain performance.
For more background: The organization requesting this software does not currently have a stable of third-party software suites that need integrating with. This software will replace all suites. There are also currently no outside consumers who need to access the data outside of the provided website interface -- all service calls will be from other pieces inside our system. The choice of SOA in this case seems entirely based on the concept of "preparation".
So my question -- what level of granularity is acceptable in a stable of services without sacrificing performance? Am I being too skeptical of the performance hits we'll take implementing all our entities as services? Should functionality be available as web services only when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
First off, finding the "sweet spot" in the number of services is difficult for sure. Too many services and your integration costs suffer; too few and your implementation costs suffer. You have to find a good balance.
My advice to you is to follow Juval Lowy's methodology in that you should break down your services by areas of volatility, or areas of change. This will give you your granularity level. You should also read his WCF book if you can.
As for performance, WCF will inherently support many thousands of calls per second, depending on your use cases and hardware. Services calling services is not a problem; the platform will support it if you do your part. For example, use the right binding for the right scenario (named pipes to call services on the same machine, and TCP to call services across machines where possible). You should also implement a vertical slice of the application and do performance testing before building the rest of the application. This will verify your architecture.
When I say "service", I mean a complete vertical component that can perform a complete, independent operation, and I don't prefer going to a finer granularity unless there is an exceptional requirement. In my view of SOA, a service should perform a meaningful business function that can be carried out independently; a service should not require another service to complete its function.
What level of granularity is acceptable in a stable of services without sacrificing performance?
Individual entities. As described by the consultant.
Am I being too skeptical of the performance hits we'll take implementing all our entities as services?
Yes. Way too skeptical.
A decent framework can optimize some of these requests so that they don't involve a lot of network overhead.
As with SQL databases, the problems are largely solved. You'll find that the underlying applications that you're presenting as services are the bottlenecks. The SOA layer is largely plumbing. The bits still need to move through the pipes, the SOA layer just organizes them more intelligently than most of the alternatives.
Should services only be implemented when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
Yes.
That's what "Agile" means.
Find a user story. Build just the services (and entities) for that story.
You will have some significant overhead for the first few stories in getting your SOA framework all squared away and deployable as a simple, repeatable release step.
Never do extensive "preparation" for things you "may" need in some improbable future. Read up on Agile and how to prioritize a backlog.
Sorry for the very involved question, but this is something I've been researching for a while now and it is really frustrating me. I feel like in today's age we have a million and one ways to implement services that are cross-platform (SOAP) and easy to build (thanks to .NET, Java, and other frameworks). However, these technologies have been in the community for 5-10 years, yet we are (or at least I am) constantly plagued with the same issues:
Identification (tracking services) - UDDI; e.g., I had to remind a co-worker 3 times this month where a service is, despite the fact that there is a wiki that discusses the service and a PDF version of the same documentation that lives in a repository where we keep our service docs.
Scalability - Out of the box clustering; As organizations, we spend a lot of money on paying our admins just to watch the utilization of our services and make decisions like, does this service need more RAM, more CPU, more interfaces? How do I load balance this?
Monitoring - error logging, etc; I can't count how many times I have to set up tracing on services in order to see why a bug is happening that only seems to affect one customer, or have to code logic into the service to serialize exceptions, log exceptions to dbs, fail gracefully, etc.
Deployment - easy to deploy; none of this deploying DLLs to 5 load balanced servers
Each one of these problems requires some type of custom solution implemented by the organization. Documentation and UDDIs for #1. Virtualization and load balancing hardware / software for #2. Tracing, writing exceptions to databases / logs, etc for #3. Custom deployment software for #4. I work for a mid-sized organization. I can't even imagine how a company the size of Sun, Google, or Microsoft would tackle these dilemmas.
Maybe my vision is unrealistic, but I dream of having a Framework per se that lives on top of a server cluster that manages all of the above. I was ecstatic to read about Microsoft's AppFabric since it really seems to extend some of the functionality of BizTalk to WCF service implementors: Caching, Hosting, Monitoring, etc. However, from what I've seen, I still don't feel it lives up to my dream for an all-in-one solution that assists the developer and organization in writing services that are scaled across clusters easily, deployed into the cluster easily, and identifiable, possibly even version-able.
So, I don't mean this post to be about my dream; I do actually have a question. For starters, is my dream/want completely unrealistic? Furthermore, what solutions are available that attempt to solve these problems without confining us to a new and more proprietary way (BizTalk) of developing services? And lastly, concerning a complete SOA/ESB solution, where do we see the most potential in the market right now or in the future?
I think that you are talking about different kinds of problems here.
1). Developers who don't read documentation. This is an endemic problem, not limited to SOA; just look at some questions on StackOverflow. At least the developer is asking you whether there is a service, rather than just duplicating logic in their own code. I don't see any technical solution to these kinds of problems; you've already provided good registries and documentation, but some developers prefer to talk to people. Maybe, even, this is actually a good thing: human interaction has value above the technical content of the interaction. Or maybe you're too nice: "No, I won't answer that question, look it up."
2). Scaling. There are technologies addressing this issue. (Disclaimer: I work for IBM, who sell some, so I'll reference those; I'm not intending to imply that IBM is the only vendor with solutions in this space.) There are products that can provision a new machine, install a software stack and add it to a cluster to address workload changes. Then, at a finer-grained level of control, in the Java EE world the application server can dynamically shape traffic and adjust clusters; see WebSphere Virtual Enterprise.
3). Monitoring. I don't "get" what you expect here. In all likelihood such tricky bugs will require application-level trace. For some problems, such as finding memory leaks and performance bottlenecks, there are very good tools, at least in the Java EE world.
4). I can't speak to the .NET world, but I'd say that Java EE app servers do a reasonable job of deploying apps across clusters smoothly, and in the cases where we use JNI and need DLLs deployed, we can use products such as the Tivoli stack I mentioned to manage this.
So, in summary, I do think that vendors are trying to address these issues. And I don't think your life would be simpler without SOA. Imagine instead the same problems applied to myriad separate, independent applications.
Here's my two cents.
I've been a developer at a company that used SOA incorrectly. The worst solution they implemented was field-level validation of form elements on a desktop app using SOA. To perform acceptably, such calls require very low latency, and a 2-4 second wait to change to a new field gets old fast. The service ran over the network on a BizTalk server. Everyone hated it.
If you're going to do this you really need to spend a lot of time dealing with network latency, service failure, timing, and timeout issues.
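As one hedged example of that defensive work (the validation service here is a made-up stand-in): wrap every remote call in an explicit timeout with a local fallback, so a slow or dead service degrades the feature instead of freezing the UI.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class ValidationClient {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Never waits more than 500 ms for the remote validator.
    boolean validateField(String value) {
        Future<Boolean> call = pool.submit(() -> remoteValidate(value));
        try {
            return call.get(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            call.cancel(true);
            return true; // fail open: accept now, re-validate on submit
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return true; // same fallback if we are interrupted
        } catch (ExecutionException e) {
            return true; // service failed: same fallback
        }
    }

    private boolean remoteValidate(String value) {
        // A real network call to the validation service would go here.
        return !value.isEmpty();
    }
}
```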
Don't get carried away and think SOA is the solution to every problem. Used at a high level it's great, used at a low level it makes your applications fragile, slow, and impossible to debug.
If you talk to IBM or one of the big SOA vendors, they have products that cover each scenario.
Identification (tracking services) - UDDI; e.g., I had to remind a co-worker 3 times this month where a service is, despite the fact that there is a wiki that discusses the service and a PDF version of the same documentation that lives in a repository where we keep our service docs.
A registry and repository server. The nice thing is that it handles governance (promotion, demotion, versioning, approval), and your ESB typically does a "lookup" for the latest and greatest against the registry server.
Scalability - Out of the box clustering; As organizations, we spend a lot of money on paying our admins just to watch the utilization of our services and make decisions like, does this service need more RAM, more CPU, more interfaces? How do I load balance this?
Transaction-monitoring software like IBM Tivoli Composite Application Manager for SOA. Basically, it tracks things from a horizontal point of view to see if there is a service disruption from an end-user/end-app point of view.
As far as your clustering goes, you have to pick good middleware and architecture. Personally speaking, get stuff that is "cloud"-ready: app servers with NoSQL, connected by MOM.
Monitoring - error logging, etc; I can't count how many times I have to set up tracing on services in order to see why a bug is happening that only seems to affect one customer, or have to code logic into the service to serialize exceptions, log exceptions to dbs, fail gracefully, etc.
Enterprise standards for your developers and for your vendors, plus integration of all business and system events into a single dashboard (most companies split them). This is already done at most enterprise shops.
Deployment - easy to deploy; none of this deploying DLLs to 5 load balanced servers
Ahh... the Microsoft IIS Web Deployment Tool 2.0. You can sync hundreds of MS servers just by updating the master. It's really easy.
Is N-tier architecture only the physical separation of code, or is there something more to it?
What sort of code do we put in the Presentation Layer, Application Layer, Business Logic Layer, User Interface Logic, Data Access Layer and Data Access Objects?
Can all the layers mentioned above together give a fully functional N-tier architecture?
One case:
Whenever a user clicks a button to load content via AJAX, we write code to fetch a particular HTML output and then update the element. So does this JavaScript code also lie on a different tier? Because if N-tier architecture is really about the physical separation of code, then I think it's better to separate out the JavaScript code as well.
Tiers and Layers are completely different things.
N-tier is less about code separation and more about scalability. The idea being that you can easily scale out different "tiers" as necessary. Most websites are 2 tier: web server and database server. Sometimes they go to 3, with a Service Oriented Architecture.
Layering is all about code separation and reusability. MVC is a good example of layered code.
Beyond all of that, and before you go happily putting up barriers between different portions of your code, you really need to ask yourself two questions: what are you separating, and why are you concerned about it? If you can't give rock-solid answers to both of those (particularly the second), then I'd say stop what you're doing and back away from the keyboard. And by a rock-solid answer, I don't mean "because Bob said this is the way it should work."
Take the time to really understand what is going on, how both the code and your sanity will be impacted by putting up these barriers. If they help, great; if they don't, well..
The way I've heard the difference between layers and tiers described is that layers can be on different machines, whereas tiers may not (I have also heard the terms used in reverse). Thus, n-layer implies n-tier. The database server, for example, represents a separate layer, but simply having a library of business logic that cannot be run on a machine separate from the presentation code is merely a separation of tiers. If the business logic is accessed through a service layer that could be hosted on a different machine than the presentation code, then it would, as the name implies, represent a separate layer.
N-tier would simply mean a separation of code into libraries that might still be required to run on the same physical machine. N-layer would represent a separation of code into services that probably run on separate machines, even if it is possible to run them all on the same machine.
Addition
The presentation tier would contain anything related to displaying information to the user. In MVC, for example, that tier is subdivided into a controller, which retrieves information from the business logic tier (model), and a view, which displays the information itself. The data access tier would contain code used to retrieve information from the data layer. The business logic tier(s) would contain code that retrieves information from the data tier, processes that information into an object model that accounts for business rules, and makes it available to the tiers above it, like the presentation tier.
The reasoning behind tiers is that they provide a degree of separation from the tier below. For example, suppose the data layer changed, or more specifically, the database product changed. It would only require updating the data tier and not the entire code base.
Using the definitions I've provided, a standard database-driven website with no calls to services really only has a presentation layer and a data layer (the database), even if it has multiple presentation tiers, middle tiers, and data tiers. If you have service calls, depending on where they are called, they would represent another layer.
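A brief sketch of the isolation described above, with invented names: everything else depends on a data-access interface, so changing the database product means writing one new implementation rather than touching the whole code base.

```java
// The contract the business code depends on.
interface OrderRepository {
    String findStatus(long orderId);
}

// Implementation for the current database product.
class JdbcOrderRepository implements OrderRepository {
    public String findStatus(long orderId) {
        // A JDBC query against the existing database would go here.
        return "SHIPPED";
    }
}

// Switching databases later means adding a class like this one;
// nothing that depends on OrderRepository has to change.
class DocumentStoreOrderRepository implements OrderRepository {
    public String findStatus(long orderId) {
        // A query against the replacement document store would go here.
        return "SHIPPED";
    }
}
```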