Practical Usage of N-Tier Architecture [closed]

I'm a .NET web developer for a small organization. We have some skilled developers here, but what we don't have is anyone who's worked for larger, more organized, software shops. We do all right, but I find myself wanting to structure my code better with little place to turn for advice.
It comes to this. At some point someone in our organization decided we were going to use webservices whenever we had to do any data access at all no matter the case. Thus, our hardware architecture is organized so that is the only way we can access our databases. This sounds fine in theory, but the problem is most of our apps turn out like this:
Spaghetti Mess Of Code In The aspx.cs -> Web Service That Does Nothing But Call a Stored Procedure
Beyond that there's not much separation. Whenever I start trying to research better structural practices I wind up reading about things like dependency injection, dirty properties, and class factories; my head starts to swim, and I move on to something else in frustration.
Here's a basic example of what I'm wondering about. Let's say I have to make a page to select employees from a list, edit them, and update the database. Is it better to have the web service return an Employee object on a get, and accept an Employee object on an update? Or is it better to have the Employee object call the web service to populate itself?
In other words: Employee emp = svc.GetEmployee(42); vs Employee emp = new Employee(42);
The second example seems like it would be better organization for updates (update the relevant properties and call emp.Update()), but the problem is what if I need to get a list of Employees? It would be inconsistent to do Employee emp = new Employee(id); for a singular employee, but do svc.GetAllEmployees() for a list.
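Spelled out a little more (Java-flavored pseudocode here, even though we're a .NET shop; every name is hypothetical), the two shapes I'm weighing look like this:

    // Option 1: the service owns all data access; Employee is a plain data holder.
    EmployeeService svc = new EmployeeService();
    Employee emp = svc.GetEmployee(42);
    emp.lastName = "Smith";                  // hypothetical field
    svc.UpdateEmployee(emp);
    List<Employee> all = svc.GetAllEmployees();   // lists fit naturally here

    // Option 2: active-record style; the object loads and saves itself.
    Employee emp2 = new Employee(42);        // constructor calls the service internally
    emp2.lastName = "Smith";
    emp2.Update();
    List<Employee> all2 = Employee.GetAll(); // a static finder keeps list access consistent

A static finder like Employee.GetAll() is the usual way active-record designs keep list retrieval consistent with single-object retrieval, for what it's worth.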
I feel like I'm rambling a bit so I'm going to cease trying to explain and hope someone understands my confusion. I appreciate any advice that anyone can offer. Thanks!

As with anything, there are a number of different approaches you can take. (So hopefully there will be a number of good and different answers here, because this is definitely an important question.)
One question you should probably ask yourself about your design is "how much logic will need to be shared between applications?" Going with the small GetEmployee example you gave, it sounds like you want to know where to put the models in your domain. Are these models used by multiple applications? Is business logic shared across applications?
If so, then you may want your domain models behind the web service. Maybe build up a rich domain behind those services, with its data access and external dependencies (remember that dependency injection thing? the most important design decisions belong in the domain behind the service layer, since that's the core of the whole system).
Then, of course, how do you access this logic? Again, there are a lot of options. My personal favorite design is to have a kind of request/response system that abstracts the service layer. Something as cool as NServiceBus for a really disconnected asynchronous system, something as simple as Agatha for just abstracting out the actual service and putting the request/response logic in code, or maybe play around with ServiceStack (something I've been meaning to do) or another project, etc. Hell, you could just roll a plain old WCF or even SOAP service layer and nothing more. It's up to you.
At that point you're looking at a fairly procedural system at the service layer. This isn't a terrible thing. Think of the service layer like an MVC site. You send it a request, populated with some kind of incoming viewmodel, it does its domain stuff in all its object-oriented goodness, and returns a view in the form of some XML representation of an outgoing viewmodel. Now you have a repeating pattern. Your client-side applications are just great big views for your domain. The dumber they are, the more interchangeable and replaceable they are, the better.
This allows you to encapsulate various "business actions" in a unit of work at the service boundary. Given a request from a client application, either the whole thing succeeds or the whole thing fails. Wrap it up in good error handling and an application-level error/exception logger to give you all the details of the failed requests. (Imagine that every request can be serialized to a string and included in an error message. Now you have everything you need to recreate the error in a simple string, as opposed to asking users "what did you click on?" to try to recreate errors.)
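A minimal sketch of such a boundary (hypothetical types throughout; this is the general shape that libraries like Agatha give you, not their actual API):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    interface Request {}
    interface Response {}

    final class ErrorResponse implements Response {
        final String message;
        ErrorResponse(String message) { this.message = message; }
    }

    final class ServiceBoundary {
        // One handler per request type; each call is one unit of work.
        private final Map<Class<?>, Function<Request, Response>> handlers = new HashMap<>();

        <T extends Request> void register(Class<T> type, Function<T, Response> handler) {
            handlers.put(type, req -> handler.apply(type.cast(req)));
        }

        Response process(Request request) {
            try {
                return handlers.get(request.getClass()).apply(request);
            } catch (Exception e) {
                // Log the (serializable) request so the failure can be replayed later,
                // instead of asking users "what did you click on?"
                System.err.println("Failed request " + request + ": " + e);
                return new ErrorResponse(e.getMessage());
            }
        }
    }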
Suppose, instead, that the back-end doesn't really share anything between applications and each application is entirely its own distinct entity. In that case you don't really need to share all that logic behind the service layer, and it's entirely possible that you shouldn't try to create any kind of overlap. Is data access the only thing that's behind the service? What about things like filesystem access or external web service access?
If the only thing behind the service is the data access, then you can keep your models and data access repositories in your client applications like you seem to be accustomed to and just swap out your repository implementations with implementations that internally reference and access the service layer. (This would be the second option in your GetEmployee example.) Properly abstracted, direct access vs. service access repositories can be swapped out trivially depending on where the application needs to live.
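For instance (a sketch; Employee and EmployeeService are hypothetical stand-ins for your own types):

    class Employee {}

    interface EmployeeService {             // hypothetical web-service client
        Employee getEmployee(int id);
        void updateEmployee(Employee e);
    }

    interface EmployeeRepository {          // what the application codes against
        Employee get(int id);
        void update(Employee employee);
    }

    // For apps that are allowed to reach the database directly.
    class SqlEmployeeRepository implements EmployeeRepository {
        public Employee get(int id) {
            // ...run the query or stored procedure here...
            throw new UnsupportedOperationException("sketch only");
        }
        public void update(Employee e) { /* ...write back... */ }
    }

    // For apps that must go through the service layer instead.
    class ServiceEmployeeRepository implements EmployeeRepository {
        private final EmployeeService svc;
        ServiceEmployeeRepository(EmployeeService svc) { this.svc = svc; }
        public Employee get(int id) { return svc.getEmployee(id); }
        public void update(Employee e) { svc.updateEmployee(e); }
    }

Which implementation gets wired in is then purely a deployment decision, which is exactly what makes the direct-access vs. service-access swap trivial.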
Of course, this leans a little towards a true persistence-ignorance approach, which can be dangerous. Performance implications need to be considered. Some piece of logic or unit of work on the back-end may hit the database several times to do several things. If this is happening across a service then that adds service overhead to each database call. So you'll want to address this on a case-by-case basis.
I guess I may be rambling at this point, so to get back to something concrete: it really comes down to asking yourself some questions about your domain. How persistence-ignorant can you afford to be? Do your applications share business domain logic? Do you need to access non-database external dependencies behind the service only? There's no universal design that's always the right answer. You'll probably end up experimenting with various designs and converging on one that's right for your developers and your environment.

Related

Would this be a bad use case of Apache Camel? [closed]

We are implementing a web service using Apache Camel that has many (20-50) "direct:" routes calling Java methods. Basically every method has a route to it, whether it's for business rule processing or DAO access. All the routes go from("direct:...") to("direct:..."), never to any other component.
While this may seem to decouple the system from the standard Controller->BO->DAO layers, it adds unnecessary bookkeeping of the Camel routes.
A better alternative would simply be to define Java interfaces for the business object and DAO layers, with an additional interface for any other service request (external to the system, like file:// or http://) that would be a dependency inside the business objects or controllers. The implementation of this additional interface would use Apache Camel to talk to those external services.
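To make the contrast concrete (CustomerDao, RuleProcessor and the endpoint URI are made-up names; RouteBuilder, the bean DSL and ProducerTemplate are standard Camel API):

    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.builder.RouteBuilder;

    class CustomerDao { public String findById(String id) { /* DB lookup */ return id; } }
    class RuleProcessor { public String apply(String body) { /* business rules */ return body; } }

    // The pattern being questioned: a direct: route wrapping every internal call.
    class InternalRoutes extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:findCustomer").bean(CustomerDao.class, "findById");
            from("direct:applyRules").bean(RuleProcessor.class, "apply");
        }
    }

    // The proposed alternative: internals stay plain Java; Camel hides behind
    // an interface that the business objects depend on.
    interface ExternalOrders {
        void submit(String orderXml);
    }

    class CamelExternalOrders implements ExternalOrders {
        private final ProducerTemplate template;   // standard Camel API
        CamelExternalOrders(ProducerTemplate template) { this.template = template; }
        public void submit(String orderXml) {
            template.sendBody("http://partner.example.com/orders", orderXml);
        }
    }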
As a side note, I'm thinking about how to convince my current colleagues to see my point.
Thoughts?
tldr;
Should Apache Camel be used where there is only 1 or 2 applications present?
I have worked on integrations where 20 systems were involved, with varying complexity, protocols and patterns. I know of other places where 50+ systems are involved. The only limits are your design, performance, etc.
Apache Camel is a middleware framework. Essentially, your business logic should not know how the data got to it or where it is to be delivered, only what it should deliver; Camel should take care of the rest.
By the way, does your middleware not talk to the external world? Why use only the direct component and not the others?
You can also hide the middleware by using bean integration, which gives you even more decoupling. See here: http://camel.apache.org/bean-integration.html
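For example (a sketch following the pattern in those docs; the queue name is made up):

    import org.apache.camel.Consume;

    // With bean integration the POJO never touches Camel's Exchange API;
    // the annotation binds a plain method to an endpoint.
    public class OrderService {
        @Consume(uri = "activemq:queue:orders")
        public void onOrder(String body) {
            // plain business logic here, fully testable without Camel
        }
    }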
It really depends on what it is you want to accomplish and what your requirements are.
Well, for your specific use case I would say that you would probably have been better off with just a regular Java implementation, with Camel used as the glue between your system and the outside world. Like Souciance explains in his answer, Camel really shines when you have to integrate with multiple systems, so you should at least keep it for communicating with the outside world.
However, having already implemented the system's internals using Camel, I would say that it doesn't make much sense to put additional effort into replacing it with pure Java. The current implementation gives you the ability to make the system more robust, for instance by using a high-performance MQ as a replacement for the direct routes, which would make your system more resistant to failures and more easily decoupled later on. Not to mention that having routes around your DAO objects makes it much easier to implement batching of DB updates when your system's load grows.

static and dynamic evolution of services

I am reading about the challenges of concurrent and networked software in Pattern-Oriented Software Architecture, Vol. 2.
Service access often involves invoking remote operations on reusable components like the OMG Event Service, etc. Supporting the static and dynamic evolution of services and applications is another key challenge in networked software systems.
Evolution can occur in the following ways:
Interfaces to and connectivity between component service roles can change, often at run-time, and new service roles can be implemented and installed into existing components.
It is even more challenging to determine how to access services that are configured into a system 'on-demand' and whose implementations were unknown when the system was originally designed. Here the design challenges are two-fold.
First, an application must export new services, even though it may not know their detailed interfaces.
Second, an application must integrate these services into its own control flow and processing sequence transparently and robustly, even at run-time.
I need your help understanding the above text, by answering the following questions.
What does the author mean by "Interfaces to and connectivity between component service roles can change, often at run-time"? Please explain with an easy-to-understand example.
What does the author mean by the two points about on-demand challenges mentioned above? Please elaborate on those two points.
Thanks for your time and help.
1. What does the author mean by "Interfaces to and connectivity between component service roles can change, often at run-time"?
I'm not sure exactly. Interfaces change over time because:
New technology standards can be adopted - say, moving from SOAP to REST, or from XML to JSON - but that would happen slowly over time through deployment. Whereas, for me, "run-time" is a memory space in which things run, and I don't see interfaces changing themselves that fast - otherwise how could anyone integrate with them?
The API or interface contract itself changes to fulfill a business need.
2. What does the author mean by the two points about on-demand challenges mentioned above?
Hmmm, good design patterns tend to survive time well (they never change, because they are never broken - like SOLID). The book you are referring to was written in 2000, I think - a lot has changed since then, so whilst the pattern may survive, maybe the way we'd now describe it has changed (i.e. what he means by "export new services" is open to interpretation)...
1. First, an application must export new services, even though it may not know their detailed interfaces.
Separation of Concerns (basic OO stuff): the parts of your app don't (shouldn't) inherently know what the other parts are doing internally; likewise, as long as someone (including an external system) satisfies the interface, who cares how it does so internally?
2. Second, an application must integrate these services into its own control flow and processing sequence transparently and robustly, even at run-time.
I take this to mean that the program should never break: it should always compile, and if the application is dynamically creating and executing code (say, based on user input) then there need to be checks in place so that the dynamic code doesn't break the app either.
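The book's wording is abstract, but Java's built-in ServiceLoader is one small, concrete instance of the "integrate services unknown at design time" idea (ReportExporter is a made-up service role):

    import java.util.ServiceLoader;

    // The application defines only the role; concrete providers can be
    // dropped onto the classpath long after the system was designed.
    interface ReportExporter {
        String format();
        byte[] export(String report);
    }

    class Exporters {
        static void exportAll(String report) {
            // Discovers every registered implementation at run-time, without
            // the application knowing any of their concrete classes.
            for (ReportExporter e : ServiceLoader.load(ReportExporter.class)) {
                System.out.println("exporting as " + e.format());
                e.export(report);
            }
        }
    }

(Providers register themselves via META-INF/services entries, so "installing a new service role" really is just dropping a jar on the classpath.)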

Ember.js or Backbone.js for Restful backend [closed]

I already know that ember.js is a heavier-weight approach in contrast to backbone.js. I have read a lot of articles about both.
I am asking myself which framework works more easily as a frontend for a Rails REST backend. For backbone.js I have seen different approaches to calling a REST backend. For Ember it seems that I have to include additional libraries like 'data' or 'resources'. Why are there two libraries for this?
So what's the better choice? There aren't a lot of examples of connecting the frontend with the backend either. What's a good working example of a backend REST call to this:
URI: ../restapi/topics
GET
auth credentials: admin/secret
format: json
Contrary to popular opinion, Ember.js isn't a 'more heavyweight approach' than Backbone.js. They're different kinds of tools that target totally different end products. Ember's sweet spot is applications where the user will keep the application open for long periods of time, perhaps all day, and interactions with the application's views or underlying data trigger deep changes in the view hierarchy. Ember is larger than Backbone, but thanks to Expires and Cache-Control headers this only matters on the first load. After two days of daily use that extra 30k will be overshadowed by data transfers - sooner if your content involves images.
Backbone is ideal for applications with a small number of states where the view hierarchy remains relatively flat and where the user tends to access the app infrequently or for shorter periods of time. Backbone's code gets to remain short and sweet because it makes the assumption that the data backing the DOM will get thrown away and both items will be garbage-collected: https://github.com/documentcloud/backbone/issues/231#issuecomment-4452400 Backbone's smaller size also makes it better suited to brief interactions.
The apps people write in both frameworks reflect these uses: Ember.js apps include Square's web dashboard, Zendesk (at least the agent/ticketing interface), and Groupon's scheduler: all applications a user might spend all day working in.
Backbone apps focus more on brief or casual interactions, that are often just small sections of a larger static page: airbnb, Khan Academy, Foursquare's map and lists.
You can use Backbone to make the kinds of applications that Ember targets (e.g. Rdio) by a) increasing the amount of application code you're responsible for, to avoid problems like memory leaks or zombie events (I don't personally recommend this approach), or b) adding 3rd-party libraries like backbone.marionette or Coccyx - there are many of these libraries that all try to provide similar overlapping functionality, and you'll probably end up assembling your own custom framework that is bigger and requires more glue code than if you'd just used Ember.
Ultimately the question of "which to use" has two answers.
First, "Which should I use, generally, in my career": Both, just like you'll end up learning any tools specific to work you'll want to do in the future. You'd never ask "Backbone or D3?"; "Backbone or Ember" is an equally silly question.
Second, "Which should I use, specifically, on my next project": Depends on the project. Both will communicate with a Rails server with equal ease. If your next project involves a mix of pages generated by the server with so-called "islands of richness" provided by JavaScript use Backbone. If your next project pushes all the interaction into the browser environment, use Ember.
To give a brief, simplified answer: for a RESTful backend, at the moment, you should use Backbone.
To give a more complex answer: It really depends on what you're doing. As others have said, Ember is designed for different things, and will appeal to a different set of people. My short answer is based on your inclusion of the RESTful requirement.
At the moment, Ember-Data (which seems to be the default persistence mechanism within Ember) is far from production ready. What this means is that it has quite a few bugs and, crucially, doesn't support nested URIs (/posts/2/comments/4556 for example). If REST is your requirement, then you'll have to work around this for the time being if you choose Ember (i.e. you'll either have to hack it in, wait, implement something like Ember-Data from scratch yourself, or use not-very-RESTful URIs). Ember-Data is not strictly part of Ember, so this is entirely possible.
The main differences between the two, aside from size, are basically:
Ember tries to do as much as possible for you, so that you don't have to write as much code. It is very hierarchical and, if your app is also very hierarchical, will likely be a good fit. Because it does so much for you, it can be difficult to figure out where bugs are coming from and to reason why unexpected behaviour is happening (there is a lot of "magic"). If you have an app that fits naturally into the type of app that Ember expects you to be building though, this likely won't be a problem.
Backbone tries to do as little as possible for you so that you can reason about what is going on and build an architecture that fits your app (rather than building an app that fits the architecture of the framework you're using). It's a lot easier to get started with but, unless you're careful, you can end up with a mess very quickly. It doesn't do stuff like computed properties, auto-unbinding events, etc and leaves them up to you, so you will need to implement a lot of stuff yourself (or at least pick libraries that do that for you), although that is rather the whole point.
Update: It appears that, as of recently, Ember does now support nested URIs, so I suppose the question comes down to how much magic you like and whether Ember is a good fit, architecturally, for your app.
I think that your question will soon be closed :) There are a few points of contention between the two frameworks.
Basically Backbone does not do a lot of things, and that's why I love it : you will have to code a lot, but you will code at the right place. Ember does a lot of things, so you'd better watch what it is doing.
Server discussion is one of the few things that Backbone does, and it does a great job with it. So I would start with Backbone and then give a try to Ember if you are not totally satisfied.
You can also listen to this podcast, where Jeremy Ashkenas, creator of Backbone, and Yehuda Katz, member of the Ember core team, have a nice discussion.

SOA: Is it preferable to implement a service instead of just writing service-ready code, when no external access is needed?

I'm working on the initial architecture for a solution for which an SOA approach has been recommended by a previous consultant. From reading the Erl book(s) and applying to previous work with services (and good design patterns in general), I can see the benefits of such an approach. However, this particular group does not currently have any traditional needs for implementing web services -- there are no external consumers, and no integration with other applications.
What I'm wondering is, are there any advantages to going with web services strictly to stick to SOA, that we couldn't get from just implementing objects that are "service ready"?
To explain, an example. Let's say you implement the entity "Person" as a service. You have to implement:
1. Business object/logic
2. Translator to service data structure
3. Translator from service data structure
4. WSDL
5. Service data structure (XML/JSON/etc)
6. Assertions
Now, on the other hand, if you don't go with a service, you only have to implement #1, and make sure the other code accesses it through a loose reference (using dependency injection, a wrapper, etc.). Then, if it later becomes apparent that a service is needed, you can just have the reference point instead at the #2/#3 logic above in a wrapper object (so no caller objects need updating), and you implement the same number of objects with no penalty in the amount of development -- no extra objects or code compared to doing it all up front.
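As a sketch of that swap (all names hypothetical):

    class Person {}
    class PersonDto {}
    interface PersonWebService { PersonDto rename(int id, String newName); }

    interface PersonOperations {            // the loose reference callers hold
        Person rename(int id, String newName);
    }

    class PersonLogic implements PersonOperations {        // item #1 only
        public Person rename(int id, String newName) { return new Person(); }
    }

    // Written later, only if a service becomes necessary; callers never change
    // because they were wired to PersonOperations, not to a concrete class.
    class PersonServiceProxy implements PersonOperations {
        private final PersonWebService ws;
        PersonServiceProxy(PersonWebService ws) { this.ws = ws; }
        public Person rename(int id, String newName) {
            PersonDto dto = ws.rename(id, newName);        // items #2/#3 live here
            return new Person();                           // map dto fields in real code
        }
    }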
So, if the amount of work that has to be done is the same whether the service is implemented initially or as-needed, and there is no current need for external access through a service, is there any reason to initially implement it as a service just to stick to SOA?
Generally speaking you'd be better to wait.
You could design and implement a web service which is simply a technical facade exposing the underlying functionality - the question is, would you just do a straight one-for-one 'reflection' of that underlying functionality? If yes - did you design that underlying thing in such a way that it's fit for external callers? Does the API make sense, does it expose members that should be private, etc.?
Another factor to consider is: do you really know what the callers of the service want or need? The risk you run with building a service is that (as you're basically only guessing) you might need to re-write it when the first customers/callers come along. This could result in all sorts of work, including test cases, backwards compatibility if it drives change down to the lower levels, and so on.
Having said that, the advantage of putting something out there is that it might help spark use of the service and get people thinking - a more agile, principled approach.
If your application is an isolated client-type application (a UI that connects to a service just to get data out of the database), implementing an SOA-like architecture is usually overkill.
Nevertheless, there can be security, maintainability, and serviceability aspects where using web services is a must - e.g. some clients need access to the data from outside the firewall, or you prefer to separate your business logic/data access from the UI and put it on one server so that you don't need to re-deploy the app every time some business rule changes.
Enterprise applications require many components interacting with each other and many developers working on them. In this type of scenario an SOA-type architecture is the way to go.
The main reason to adopt SOA is to reduce the dependencies.
Enterprise Applications usually depends on a lot of external components (logic or data) and you don’t want to integrate these components by sharing assemblies.
Imagine that you share a component that implements some specific calculation, would you deploy this component to all the dependent applications? What will happen if you want to change some calculation logic? Would you ask all teams to upgrade their references and recompile and redeploy their app?
I recently posted on my blog a story where the former architect had also chosen not to use web services, thinking that sharing assemblies was fine. The result was chaos. Read more here.
As I mentioned, it depends on your requirements. If it's a monolithic application and you're sure you'll never integrate it or reuse the business logic/data access, a 2-tier application (UI/DB) is good enough.
Nevertheless, this is an architectural decision, and like most architectural decisions it's costly to change. Of course you can still factor in a web service model later on, but it's not as easy as you might think. Refactoring an existing app to add a service layer is usually a difficult task, even with a good interface-based design. Examples of things that can go wrong: data structures that are not serializable, circular references in properties, constructor overloading, dependencies on internal behaviors…

Relational databases application [closed]

When developing an application which mostly interacts with a database, what is a good way to start? The application requires a lot of filtering based on user input, sorting and structuring.
The best way to start is by figuring out "user stories" (or "use cases" -- but the "story" approach tends to work really well and starts dragging stakeholders into the shared storytelling!); on top of that, design the database schema as the best-normalized one you can find to satisfy all the data-layer needs of the user stories.
Thirdly, you may sketch layers such as views on top of the schema; fourthly, and optionally, triggers and stored procedures that might live in the DB to ensure consistency and ease of use for higher layers. (No matter how strongly DBAs push you towards those, don't accept their assurances that they're a MUST: they aren't. If your storage layer is well designed in terms of normalization, with maybe useful views on top, non-storage-layer functionality CAN always reside elsewhere; it's an issue of convenience and performance, NOT logical consistency, completeness, or correctness.)
I think the business layer and user-experience layers should come after. I realize that's a controversial position, but my point is that the user stories (and implied business-rules that come with them) have ALREADY told you a LOT about the business and user layers -- so, "nailing down" (relatively speaking -- agility and "embrace change!" should always rule;-) the data storage layer is the next order of business, and refining ("drilling down") the higher layers can and should come after.
When you get to the database layer you'll want to handle the database access via stored procedures. This will help give you additional protection against SQL Injection attacks, and make it much easier to push logic changes to the database layer.
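For example, in Java/JDBC terms (the procedure name is made up; prepareCall and parameter binding are standard JDBC):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    class EmployeeProcedures {
        // Calling a stored procedure with bound parameters keeps user input out
        // of the SQL text itself, which is where the injection protection comes from.
        static void updateEmployeeName(Connection conn, int id, String name) throws SQLException {
            try (CallableStatement cs = conn.prepareCall("{call update_employee_name(?, ?)}")) {
                cs.setInt(1, id);
                cs.setString(2, name);
                cs.execute();
            }
        }
    }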
If it's mostly users interacting with data, you can design using a form perspective.
What forms are needed for user input?
What forms are needed for output reports?
Once you've determined that, the use of the forms will dictate the business logic needed to be coded behind the scenes. You'll take the inputs, create the set of procedures or methods to deal with them, and output what is necessary. Once you know the inputs and outputs, you will be able to easily design the necessary functions.
The scope of the question is very broad. You are expecting me to tell you what to do; I can only do a good job of telling you how to do things. Do investigate using Hibernate/Spring. Since most of your operations look like querying the DB, Hibernate should help. Make sure the tables are sufficiently indexed so your queries run faster when filtered on indexed fields. The challenging task is designing your DB layer, which will be the glue between your application and the DB. Design your DB layer generically enough that it can build queries based on the params you pass to it. Then move on to develop the presentation layer above it. Developing your application layer by layer helps, since it forces you to decouple the DB logic from the presentation logic. When you develop the DB layer, assume that not just your presentation layer but any client can call it. This will help you design applications that are scalable and adaptable to new requirements.
So, bottom line: start with the DB, then the DB integration layer, then the controller, and last the presentation layer.
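As a sketch of the "build queries based on the params" idea (whether you do this with Hibernate's Criteria API or by hand, the shape is the same; column names are assumed to be validated against a whitelist elsewhere, since only values can be bound as parameters):

    import java.util.List;
    import java.util.Map;

    // A DB layer that builds a filtered query from whatever params the caller
    // passes. Values go through '?' placeholders; column names cannot.
    final class QueryBuilder {
        static String select(String table, Map<String, Object> filters, List<Object> outParams) {
            StringBuilder sql = new StringBuilder("SELECT * FROM ").append(table);
            String sep = " WHERE ";
            for (Map.Entry<String, Object> f : filters.entrySet()) {
                sql.append(sep).append(f.getKey()).append(" = ?");
                outParams.add(f.getValue());   // bound later on a PreparedStatement
                sep = " AND ";
            }
            return sql.toString();
        }
    }

Passing {dept=7, active=true} for table "employees" would yield SELECT * FROM employees WHERE dept = ? AND active = ?, with the values accumulated for PreparedStatement binding.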
For the purpose of discussion, I'm going to assume that you are working with a starting application that doesn't have a pre-existing database. If this is false, I'd probably move the order of steps around quite a bit.
1 - Understand the Universe
First, you've got to get a sense of what's around you so you can really understand the problem that you are trying to solve.
User stories or use cases are often a good starting point. Starting with what tasks the user will try to do, and evaluating how frequently they are likely to do them, is a great starting point. I like to start with screen mockups as well; with or without lots of hands-on time with users, I find that having a screen gives our team something really finite to argue about.
What other tools exist in this sphere? These days, it seems to me that users never use just one tool; they swap around a lot. You need to know two main things about the other tools your users use:
(1) What will they be using as part of the process, alongside your tool? Consider direct input/output needs - what might they want to cut/copy/paste from or to? What tools might you want to offer file upload/download for, with specific formats? What tools are they using alongside yours that you might want to share terminology, layout, color coding, icons or other GUI elements with? Focus especially on the edges of the tools - a real gotcha I hit in a recent project was emulating the databases of previous tools. It turned out that we had massive database shift, and we would likely have been better off starting fresh.
(2) What (if anything) are you replacing or competing with? Steal the good stuff, dump and improve the bad stuff. Asking users is always best. If you can't, at least understanding the management initiative is important - is this tool replacing a horrible legacy tool? It may be legacy, but there may be One True Feature that has kept the tool in business all these years...
At this stage, I find that things are really mushy - there are some screenshots, some writing, some schemas or ICDs - but not a really gelled clue.
2 - Logical Entities
Or at least that's what the OO books call it.
I don't care much for all the writing I see on this task - but I find that in any given system, I have one true diagram that I draw over and over. It's usually about 3-10 boxes, and hopefully fewer than an exponentially large number of lines connecting them.
The earlier you can get that diagram - and get it right - the better.
It doesn't matter to me if it's in UML, a database logical model, something older, or on the back of a napkin (as long as the napkin is shrouded in plastic and hung where everyone can see it).
After the diagram is made, you can start working on the follow-on work that may be more official.
I think it's a chicken-and-egg question whether you start with your data or with your screens and business logic. I know that you certainly want to optimize for database sizing and searchability... but how do you know exactly what your database needs are without screens and interfaces giving you a sense of the data?
In practice, I think this is an ever-churning cycle. You do a little bit everywhere, and then you change it all.
Even if you don't get to do a formal agile lifecycle, I think your best bet is to view design as agile -- it will take many repetitions and arguments before you really feel it's "right".
The most important thing to keep in mind is that your first, and most likely your 2nd and 3rd, attempts at designing the database will be wrong in some way. That might sound negative, maybe even a little rash (it's certainly more towards the 'agile' software design philosophy), but it's an important thing to keep in mind.
You still need to do your analysis thoroughly, of course; try to implement one feature at a time, but try to get all layers working first. That way you won't have to do too much rework when the specs change and you understand the issues better. Once you have a lot of data loaded into a system, changing things becomes increasingly difficult.
The main benefit of this approach is that you find out quickly where your design is broken, where you haven't separated your design layers correctly. One trick I find extremely useful is to build both a SQLite and a MySQL version, so seamless switching between the two is possible. Because the two use different dialects of SQL, this highlights where you have too tight a coupling between the layers.
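A sketch of that setup in plain JDBC (the URLs, file name and credentials are made up, and the sqlite-jdbc and MySQL Connector/J drivers are assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Only this factory knows which engine is in use. Any engine-specific SQL
    // that leaked into the layers above shows up as a failure when you flip the flag.
    public final class ConnectionFactory {
        private final boolean useSqlite;
        public ConnectionFactory(boolean useSqlite) { this.useSqlite = useSqlite; }

        public Connection open() throws SQLException {
            return useSqlite
                ? DriverManager.getConnection("jdbc:sqlite:app.db")
                : DriverManager.getConnection("jdbc:mysql://localhost/app", "user", "pass");
        }
    }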
A good start would be to get familiar with multitier architecture.
Then you design your presentation layer.
In your business logic layer, implement all the logic.
And finally, you implement your data access layer.
Try to set up a prototype with something more productive than C++, for example Ruby, Python, or maybe even PHP.
When the prototype works and you see that your data model is okay but your queries are too slow, then you can start using C++.
But as your question suggests, you have more options than that, and in this case the speed of a scripting language should be enough.