I'm struggling with the decision between a traditional backend (say, a single Django instance managing everything) and a service-oriented architecture for a web app resembling LinkedIn. What I mean by SOA is having a completely independent data access interface - let's say Ruby + Sinatra - that queries the database, an independent chat application - Twisted - used via its API, a Django web server that consumes those APIs to serve the content, and so on.
I can see the advantages of having everything in the project modularized and accessed only via APIs: scalability, testing, etc. However, wouldn't this undermine the site's performance? I imagine all modules would communicate via HTTP requests, so wouldn't this architecture add a lot of latency to basically everything on the site? Is there a better alternative to HTTP?
Secondly, regarding ease of development, would this really add much complexity for our developers? Especially during the first phase, until we get to an MVP.
Edit: We're a small team of two devs and a designer, but we have no deadlines, so we can handle a bit of extra work if it brings more technical value.
Short answer: yes, with SOA you are definitely trading away some performance (latency) in exchange for encapsulation and reusability. Long answer: it depends (as it always does) on how you do it.
How much latency affects your application is directly proportional to how fine-grained your services are. If you make very fine-grained services, you will have to make hundreds of sequential calls to assemble a single user experience. If you make extremely coarse-grained services, you will not get any reusability out of them, as they will be too tightly coupled to your application.
There are alternatives to HTTP, but if you are going to use something customized, you need to ask yourself, why are you using services at all? Why don't you just use libraries, and avoid the network layer completely?
You are definitely adding cost and complexity to your project by starting with an API. This has to be balanced against the flexibility an API gives you. It might be a situation where you would benefit from structuring internal APIs in your code base but invoking them as modules, or from building libraries instead of stand-alone services.
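As a rough sketch of that middle ground (Python here, since the question mentions Django; all names are invented), the "service" can be defined as a plain interface backed by an in-process implementation first, with an HTTP-backed implementation swapped in only if that piece ever actually gets split out:

    # profile_api.py - hypothetical sketch of an internal "service" boundary
    from abc import ABC, abstractmethod

    class ProfileAPI(ABC):
        """The contract callers depend on, regardless of transport."""
        @abstractmethod
        def get_profile(self, user_id: int) -> dict: ...

    class InProcessProfileAPI(ProfileAPI):
        """Direct calls into the code base - no network hop; good enough for the MVP."""
        def __init__(self, repository):
            self.repository = repository  # e.g. a thin wrapper over Django's ORM

        def get_profile(self, user_id: int) -> dict:
            return self.repository.get(user_id)

    class HttpProfileAPI(ProfileAPI):
        """The same contract over HTTP - introduced only once the service is split out."""
        def __init__(self, base_url: str, session):
            self.base_url = base_url
            self.session = session  # e.g. a requests.Session

        def get_profile(self, user_id: int) -> dict:
            response = self.session.get(f"{self.base_url}/profiles/{user_id}")
            response.raise_for_status()
            return response.json()

Callers depend only on the interface, so the latency cost of the network hop is deferred until the day the split genuinely pays for itself.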
A lot of this depends on how big your project is. Are you a team of 1-3 devs cranking to get out your MVP? Or are you an enterprise with 20-100 devs that all need to figure out a way to divide up a project without stepping on each other?
When building a microservice-oriented application, I wonder what the appropriate microservice granularity would be.
Let's imagine an application consisting of:
A set of resource types, where each resource maps to a business model (e.g., in a todo app the resources could be User, TodoList and TodoItem...)
Each of those resources is saved in a NoSQL database that may be replicated.
Each of those resources is exposed through a REST API.
The application manages an internal chat room.
An API gateway that brings together the chat room and REST API interactions.
The application front end: an SPA connected to the API gateway.
The first (and naive) approach when thinking about how microservices could match the needs of this application would be:
One monolithic service managing EVERY resource and all the business logic:
By managing I mean providing the REST API for all of those resources and handling their persistence in the database.
One service for each database replica.
One service providing the internal chat room, using WebSockets or similar.
One service for authentication.
One service for the API gateway.
One service serving the static assets for the SPA front end.
Another approach could be to split service 1 into as many services as there are business models in the system (let's call those services resource services).
I wonder what the benefits of this second approach are.
In fact I see a lot of downsides to this approach:
The need to set up an inter-service communication process.
When requesting a service representing resource X that has a relation to resource Y, a lot more work is needed (i.e., inter-service requests).
More DevOps work.
More difficulty sharing common code between resource services.
Where to put business logic?
When starting a fresh project, this second approach seems to me a bit over-engineered.
I feel like starting with the first approach and THEN splitting the monolithic resource service into several specific services, depending on the observed needs, would minimize the complexity and risks.
What are your opinions on that?
Are there any best practices?
Thanks a lot!
The first approach is not the microservice way, by definition.
And yes, the idea is to split: one service per bounded context - one for Users, one for Inventory, one for Todo things, and so on.
The idea of microservices, put very simply, assumes:
You want to pay extra DevOps work in exchange for modularity and for removing, as completely as possible, the dependencies between different bounded contexts (and between the dev/product/PM teams behind them).
The idea lies around ownership and modularity, allowing separate teams to develop their own piece of code without being required to know the rest of the system. As long as there is a ubiquitous language (a common set of conventions, communication protocols, terminology and documentation), they can work in a completely isolated, autonomous fashion.
Maintaining, managing, testing, and developing become much faster, at the cost of an initial DevOps and architecture engineering investment.
Sharing code should be minimal and, if required, should be limited to representing the ubiquitous language (the common communication interface and set of conventions). Sharing well-documented code that acts as an integration/infrastructure mini-framework, with a dedicated dev/DevOps team attached to it, can work well, as long as it is, as I said, well documented and treated as a separate architecture-related sub-project.
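As a hypothetical illustration (Python, invented names), that shared piece might be nothing more than the message contracts the teams agree on, versioned and documented as its own small project:

    # contracts/orders.py - hypothetical shared "ubiquitous language" package,
    # owned and versioned as its own mini-project
    from dataclasses import dataclass, asdict
    import json

    CONTRACT_VERSION = "1.2"

    @dataclass(frozen=True)
    class OrderPlaced:
        """Event published by the Orders context and consumed by Inventory/Billing."""
        order_id: str
        customer_id: str
        total_cents: int

        def to_json(self) -> str:
            return json.dumps({"version": CONTRACT_VERSION, **asdict(self)})

        @staticmethod
        def from_json(payload: str) -> "OrderPlaced":
            data = json.loads(payload)
            data.pop("version", None)
            return OrderPlaced(**data)

Each context can then serialize and parse the same events without sharing any business logic.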
A properly engineered microservice architecture can cut maintenance and development times by a huge margin, but it requires a quite serious reason to use it (there are lots of reasons, and lots of articles on them, which I won't go into here) and a quite serious engineering investment at the start.
It brings modularity, a concept of ownership, and decoupling of the different contexts of your app.
My personal advice: check whether you really need a microservice architecture. If you cannot invest the engineering thought and DevOps effort at the start, and do not have proper reasons for such a system - why bother?
If you do need microservices, I would really advise against the first method. You will build the wrong things, miss the true challenges of microservices, and could end up with a huge refactor that takes more work than engineering the microservice system properly from the start. It's like building something square and then trying to make it fit into a round bucket later.
Now, answering your question title: granularity (your question body is a bit different from your title).
Attach it to the domain model / bounded context. You can make meaty services at the start in order to avoid complex distributed transactions.
First, just answer the question of whether you need distributed transactions in your design/architecture at all.
If not, you probably have a good design.
Passing reference IDs between models from different microservices should suffice; if not, try to rethink whether more of the complex transactions could be avoided.
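A minimal sketch of what passing reference IDs looks like (Python, with a hypothetical user_client standing in for the User service's REST API):

    # todo_service/models.py - hypothetical sketch
    from dataclasses import dataclass

    @dataclass
    class TodoItem:
        item_id: str
        title: str
        owner_id: str   # a reference to a User held by the User service - no foreign key, no join

    # todo_service/enrichment.py
    def with_owner(item: TodoItem, user_client) -> dict:
        """Resolve the owner only when a caller actually needs the details.

        user_client is assumed to wrap the User service's REST API,
        e.g. user_client.get(owner_id) -> {"id": ..., "name": ...}.
        """
        owner = user_client.get(item.owner_id)
        return {"item_id": item.item_id, "title": item.title, "owner": owner}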
If your system has an unavoidable amount of distributed transactions, perhaps look towards using or building a small CQRS framework as your "shared code infrastructure component" / communication protocol.
This is the key problem of microservices, or of any other SOA approach; it is where the theory meets reality. In general you should not force a microservices architecture for its own sake. It should rather come naturally from functional decomposition (top-down) and from operational, technological, and DevOps needs (bottom-up). The first approach is closer to what you would need to do; however, at this first step, do not focus so much on the technology aspect. Ask yourself why you would need to implement a separate service for a particular business function. Treat it as a micro-application with all its technical resources. Ask yourself whether there is a reason to implement a particular function as a full-stack app.
Some of the functionalities you mentioned in scenario 1 are naturally fine as services; 'authentication' is probably a good candidate.
For decomposing business functions into separate services, focus on the 'dependencies' problem: if there are too many dependencies and you see that you have to implement a bigger chunk of the data model, then naturally it is not a microservice any more.
Try to apply a litmus test: if you can 'turn off' a particular piece of functionality and the system still makes sense, it is a candidate for a service or for further decomposition.
We currently have a monolithic web application built with Scala (Scalatra for the REST APIs) for the backend and AngularJS for the front end. The application is deployed on AWS. We are going to build a new component, which we would like to build as an independent microservice. This component will have its own data repository, which may not be the same type of DB, and it will also be built with Scala, but with Akka for the REST APIs. The current application is built with a DB module, a domain module, a web service API module, and a front end/client module.
What is a good approach for a smooth journey? We may need to set up some microservice infrastructure first, such as an API gateway service, along with others.
Too many ways, too many approaches, too many best practices. It really all depends on the analysis of your application, trying to figure out where the natural breaks are.
One place I start is looking at the data model. Lots of people advocate each microservice having its own database. Well, that's fine and dandy, but it can be really difficult to achieve without breaking things all over the place. If you get lucky and there's a place where the data segregates nicely, then see which services would go with it and try breaking it out.
If you don't adhere to the separate-database mentality, then I start with the low-hanging fruit, often nothing more than simple CRUD operations with just a little business logic mixed in, providing some of the basic support for other larger-grained services to come. Of course, this becomes more iterative; I'm not sure your organization will like it.
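As a hypothetical example of that kind of low-hanging fruit (Python, invented names): a notifications component that is little more than CRUD plus one business rule, which makes it easy to carve out without disturbing the rest of the data model:

    # notification_service.py - hypothetical first service to extract:
    # mostly CRUD, with one small business rule mixed in
    import uuid

    class NotificationService:
        def __init__(self, store=None):
            self.store = store if store is not None else {}   # swap for real persistence later

        def create(self, user_id: str, message: str) -> dict:
            if not message.strip():
                raise ValueError("message must not be empty")   # the "little business logic"
            notification = {
                "id": str(uuid.uuid4()),
                "user_id": user_id,
                "message": message,
                "read": False,
            }
            self.store[notification["id"]] = notification
            return notification

        def mark_read(self, notification_id: str) -> dict:
            notification = self.store[notification_id]
            notification["read"] = True
            return notification

        def list_unread(self, user_id: str) -> list:
            return [n for n in self.store.values()
                    if n["user_id"] == user_id and not n["read"]]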
Which brings me to methodology. Organizations who've created monolithic applications often have methodologies that support them, whereas microservices require a much different approach to application development. Is your organization ready for that?
Needless to say, there's no right answer. I've gone to many conferences where these concepts are high on the interest list and the fact is there's no silver bullet, everyone has different ideas of what is right, and there's exceptions galore. You're just going to have to bite the bullet and cross your fingers, unfortunately.
I am taking over a project to replace an ancient legacy system from the ground up. Before I came on, the company hired a consultant who put together a basic sketch of the system and pushed SOA heavily. This resulted in a long list of "entity services", with the intention of them being composed into more complex service combinations. For instance, a user wanting committee info would hit the "Committee" service, which then calls the "Person" service to get its members, and the "Meeting" service to get its meetings, and so on.
I understand the flexibility gains in this, but my concerns are about performance. It seems to me that a system built with such a fine level of granularity to its services spends too many resources on translating service messages, and the performance will be unacceptable. It also seems to me that the flexibility gains can still be made using basic reusable objects, although in that case the benefit of a technology-agnostic interface is lost to gain performance.
For more background: The organization requesting this software does not currently have a stable of third-party software suites that need integrating with. This software will replace all suites. There are also currently no outside consumers who need to access the data outside of the provided website interface -- all service calls will be from other pieces inside our system. The choice of SOA in this case seems entirely based on the concept of "preparation".
So my question -- what level of granularity is acceptable in a stable of services without sacrificing performance? Am I being too skeptical of the performance hits we'll take implementing all our entities as services? Should functionality be available as web services only when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
First off, finding the "sweet spot" in the number of services is difficult for sure. Too many services and your integration costs suffer; too few and your implementation costs suffer. You have to find a good balance.
My advice to you is to follow Juval Lowy's methodology in that you should break down your services by areas of volatility, or areas of change. This will give you your granularity level. You should also read his WCF book if you can.
As for performance, WCF will inherently support many thousands of calls per second, depending on your use cases and hardware. Services calling services is not a problem; the platform will support it if you do your part. For example, use the right binding for the right scenario (named pipes to call services on the same machine, and TCP to call services across machines where possible). You should also implement a vertical slice of the application and do performance testing before building the rest of the application. This will verify your architecture.
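Independently of WCF, that vertical-slice check can start as something as simple as timing one representative call path end to end; a rough sketch (Python here, with an invented client call):

    # slice_timing.py - hypothetical sketch: time one representative call path
    # through the vertical slice before building out the rest of the system
    import statistics
    import time

    def time_call_path(call, samples: int = 200) -> dict:
        """call is any zero-argument callable that exercises the slice end to end,
        e.g. lambda: committee_client.get_with_members("committee-42")."""
        durations_ms = []
        for _ in range(samples):
            start = time.perf_counter()
            call()
            durations_ms.append((time.perf_counter() - start) * 1000.0)
        durations_ms.sort()
        return {
            "p50_ms": statistics.median(durations_ms),
            "p95_ms": durations_ms[int(samples * 0.95) - 1],
            "max_ms": durations_ms[-1],
        }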
When I say "Service", I mean the complete vertical component that can perform a complete independent operation. And I don't prefer going in more granularity unless there is exceptional requirement. In my view of SOA, A service should perform the meaningful business function that can be independently performed. A service should not require another service to complete its function.
What level of granularity is acceptable in a stable of services without sacrificing performance?
Individual entities. As described by the consultant.
Am I being too skeptical of the performance hits we'll take implementing all our entities as services?
Yes. Way too skeptical.
A decent framework can optimize some of these requests so that they don't involve a lot of network overhead.
As with SQL databases, the problems are largely solved. You'll find that the underlying applications that you're presenting as services are the bottlenecks. The SOA layer is largely plumbing. The bits still need to move through the pipes, the SOA layer just organizes them more intelligently than most of the alternatives.
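One concrete way a composition layer keeps that plumbing cheap is to fan out independent entity calls concurrently rather than serially; a hypothetical sketch (Python, with invented client objects):

    # committee_composition.py - hypothetical sketch of a composite call
    # that fans out to entity services in parallel instead of one by one
    from concurrent.futures import ThreadPoolExecutor

    def get_committee_view(committee_id: str, committee_client, person_client, meeting_client) -> dict:
        committee = committee_client.get(committee_id)          # one round trip
        with ThreadPoolExecutor() as pool:
            # members and meetings are independent, so fetch them concurrently
            members_future = pool.submit(person_client.get_many, committee["member_ids"])
            meetings_future = pool.submit(meeting_client.get_many, committee["meeting_ids"])
            return {
                "committee": committee,
                "members": members_future.result(),
                "meetings": meetings_future.result(),
            }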
Should services only be implemented when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
Yes.
That's what "Agile" means.
Find a user story. Build just the services (and entities) for that story.
You will have some significant overhead for the first few stories in getting your SOA framework all squared away and deployable as a simple, repeatable release step.
Never do extensive "preparation" for things you "may" need in some improbable future. Read up on Agile and how to prioritize a backlog.
This question has been asked a few times on SO from what I found:
When should a web service not be used?
Web Service or DLL?
The answers helped but they were both a little pointed to a particular scenario. I wanted to get a more general thought on this.
When should a Web Service be considered over a Shared Library (DLL) and vice versa?
Library Advantages:
Native code = higher performance
Simplest thing that could possibly work
No risk of centralized service going down and impacting all consumers
Service Advantages:
Everyone gets upgrades immediately and transparently (unless a versioned API is offered)
Consumers cannot decompile the code
Can scale service hardware separately
Technology agnostic. With a shared library, consumers must utilize a compatible technology.
More secure. The UI tier can call the service which sits behind a firewall instead of directly accessing the DB.
My thoughts on this:
A web service was designed for machine interop and to reach an audience easily by using HTTP as the means of transport.
A strong point is that by publishing the service you are also opening its use to an audience that is potentially vast (over the web, or at least throughout the entire company) and/or largely outside of your control/influence/communication channel - and either you don't mind that or it is actually desired. Using the service is much easier for clients: they simply need an internet connection to consume it, unlike a library, which may not be shared so easily (though it can be done). The usage of the service is largely open: you are making it available to whomever feels they could use it, however they feel like using it.
However, a web service is in general slower and is dependent on an internet connection. It's generally harder to test than a code library, and it may be harder to maintain; much of that depends on your maintenance and coding practices.
I would consider a web service if several of the above features are desired, or at least one of them is considered paramount and the downsides are acceptable or a necessary evil.
What about a Shared Library?
What if you are far more in "control" of your environment, or want to be? You know who will be using the code (the interface isn't a problem to maintain), and you don't have to worry about interop. You are in a situation where you can easily achieve sharing without a lot of work or hoops to jump through.
Examples in my mind of when to use:
You have many applications in your control, all hosted on the same server or two, that will use the library. Library.
A less clear-cut example: you have many applications, but they are hosted across a dozen or so servers. A web service may be a better choice.
You are not sure who might use your code or how, but you know it is of good value to many. Web service.
You are writing something used only by a limited set of applications, perhaps some helper functions. Library.
You are writing something highly specialized that is not suited for consumption by many, such as an API for your line-of-business application that no one else will ever use. Library.
All things being equal, it is easier to start with a shared library and turn it into a web service than the other way around.
There are many more but these are some of my thoughts on it...
Based on multiple sources...
Common Shared Library
Should provide a set of well-known operations that perform common tasks (e.g., String parsing, numerical manipulations, builders)
Should encapsulate common reusable code
Have minimal dependencies on other libraries
Provide stable interfaces
Services
Should provide reusable application components
Provide common business services (e.g., rate-of-return calculations, performance reports, or transaction history services)
May be used to connect existing software from disparate systems or exchange data between applications
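A tiny, hypothetical contrast of the two (Python, invented names): the shared library holds pure, dependency-free helpers with a stable signature, while the business calculation lives with the service that owns the data:

    # shared_lib/textutil.py - hypothetical shared library code:
    # pure, dependency-free helpers with a stable interface
    def normalize_whitespace(text: str) -> str:
        return " ".join(text.split())

    # performance_service/returns.py - hypothetical service-side code:
    # a business calculation that belongs behind a service, close to the data it owns
    def rate_of_return(start_value: float, end_value: float) -> float:
        if start_value <= 0:
            raise ValueError("start_value must be positive")
        return (end_value - start_value) / start_value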
Here are 5 options and reasons to use them.
Service
has persistent state
you need to release updates often
solves a major business problem and owns the data related to it
need security: users can't see your code or access your storage
need an agnostic interface like REST (you can easily auto-generate shallow REST clients for client languages)
need to scale separately
Library
you simply need a collection of reusable code
needs to run on the client side
can't tolerate any downtime
can't tolerate even a few milliseconds of latency
simplest solution that could possibly work
need to ship code to the data (high throughput or map-reduce)
First provide a library, then a service if the need arises.
an agile approach: you start with the simplest solution, then expand
needs might evolve and become more like the "Service" cases
Library that starts a local service (see the sketch after this list).
many apps on the host need to connect to it and send some data to it
Neither
you can't seriously justify even the library case
business value is questionable
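For the "library that starts a local service" option, a rough sketch of the pattern (Python standard library only; the port and handler behaviour are made up):

    # localservice.py - hypothetical sketch of a library that lazily starts
    # a local service so multiple callers on the same host can share it
    import http.server
    import json
    import threading
    import urllib.request

    _PORT = 8765                     # assumed free port on localhost
    _lock = threading.Lock()
    _ready = False

    class _Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"status": "ok", "path": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):            # keep the library quiet
            pass

    def _ensure_running():
        """Start the local service in a background thread on first use."""
        global _ready
        with _lock:
            if _ready:
                return
            try:
                server = http.server.ThreadingHTTPServer(("127.0.0.1", _PORT), _Handler)
                threading.Thread(target=server.serve_forever, daemon=True).start()
            except OSError:
                pass                             # another app on this host already started it
            _ready = True

    def call(path: str) -> dict:
        _ensure_running()
        with urllib.request.urlopen(f"http://127.0.0.1:{_PORT}{path}") as resp:
            return json.loads(resp.read())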
Ideally, if I want both sets of advantages, I'd need a portable library with the agnostic interface glue, automatically updated, and either obfuscated (hard to decompile) or run in a secure in-house environment.
Using both a web service and a library together may make that viable.
Not having dealt much with creating web services, either from scratch or by breaking apart an existing application, where does one start? Should a web service encapsulate an entity, much like a class does, or should the service have more/less to it?
I realize that much of this is based on a case-by-case analysis of what the needs are, but are there any general guidelines, best practices, or even small nuggets of information that web service veterans can impart to a relative newbie?
Our web services are built around functional areas. Sometimes this is just for a single entity, sometimes it's more than that.
For example, if you have a CRM, one of your web services might revolve around managing contacts: creating, updating, searching for them, etc. If you do some type of batch processing, a web service might exist to create and submit a job.
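A minimal sketch of such a functional-area service (Python with Flask, purely illustrative; the routes and in-memory store are made up):

    # contacts_service.py - hypothetical sketch of a service built around
    # the "Contacts" functional area
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    _contacts = {}          # stand-in for real persistence
    _next_id = 1

    @app.route("/contacts", methods=["POST"])
    def create_contact():
        global _next_id
        contact = {"id": _next_id, **request.get_json()}
        _contacts[_next_id] = contact
        _next_id += 1
        return jsonify(contact), 201

    @app.route("/contacts/<int:contact_id>", methods=["PUT"])
    def update_contact(contact_id):
        _contacts[contact_id].update(request.get_json())   # assumes the contact exists
        return jsonify(_contacts[contact_id])

    @app.route("/contacts/search")
    def search_contacts():
        name = request.args.get("name", "").lower()
        matches = [c for c in _contacts.values() if name in c.get("name", "").lower()]
        return jsonify(matches)

    if __name__ == "__main__":
        app.run(port=5001)

In a real system the dictionary would be replaced by actual persistence, but the shape is the point: one functional area, a handful of operations.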
As far as best practices go, bear in mind that web services add processing overhead, mainly in serializing/deserializing the data as it goes across the wire. Because of this, the main upside is scalability: you trade an increased per-transaction processing time for the ability to run the service across multiple machines.
The main parts to pull out into a web service are those areas which are common across multiple applications, or which you intend to expose publicly, or which would benefit from greater load balancing.
Of course, you need to analyze your application to see where any bottlenecks really are. In some cases it doesn't make sense. For example, if you have a single application that isn't sharing its code and/or the bottleneck is primarily database related.
Web services are exactly what they sound like: services for the web.
A web service should be built as an API for the service layer of your app.
A service usually encapsulates an entity larger than a single class.
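A hypothetical sketch of that layering (Python, invented names): the web endpoint stays thin and delegates to a service-layer object that coordinates several domain objects:

    # ordering/service.py - hypothetical service layer: coordinates several
    # domain collaborators behind one operation
    class OrderingService:
        def __init__(self, orders, inventory, payments):
            self.orders = orders          # repositories / domain collaborators
            self.inventory = inventory
            self.payments = payments

        def place_order(self, customer_id: str, items: list) -> dict:
            self.inventory.reserve(items)
            order = self.orders.create(customer_id, items)
            self.payments.charge(customer_id, order["total"])
            return order

    # ordering/web.py - the web service is just a thin adapter over the service layer
    def handle_place_order(request_json: dict, ordering_service: OrderingService) -> dict:
        return ordering_service.place_order(
            customer_id=request_json["customer_id"],
            items=request_json["items"],
        )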
To learn more about service layers and refactoring to add a service layer, read about DDD.
Good Luck
The number one question is: to what end are you refactoring your application functionality to be consumed as a bunch of web services?