Orchestrated vs Choreographed Service-Oriented Architecture at large scale? - web-services

I'm an architect at a large-scale financial company, and we are at the beginning of implementing a new business-oriented information system across our different countries.
From very early on, the core idea has been to follow microservice principles as much as possible (and to make sure engineers have read Sam Newman's Building Microservices).
By now I've come to a crossroads. Our services are primarily JSON REST services, using Swagger for automated documentation. In order to use these services in our business processes without writing business logic into services outside their own domain, we've been using Camunda as an orchestration tool. Camunda is fine (though some have considered Corezoid as an alternative), but somewhat clumsy in what is otherwise an elegant set of services.
Now, service orchestration is a concept familiar to most engineers. But I am not entirely happy with it, because there is still a central engine that drives everything. That engine is incredibly expensive to replace later down the road (though still cheaper to replace than a monolith). And even if this central engine is split into multiple engines (which is actually the case today), that does not necessarily make things much better.
In recent years there has been a movement in the microservice world towards choreographed (essentially event-driven) architecture. This is the crossroads where I am looking for advice from engineers and architects who have faced a similar decision point.
I absolutely love the idea of decoupled architecture, and despite feeling good about killing monoliths and having elegant independent services, I still detect a lot of dependencies in the business process as a whole in the current orchestrated solution - in places where they should not actually exist.
And it's not as though we are avoiding events. We have implemented events in our architecture as well, in order to decouple many processes, with a core principle: if you don't need a synchronous response and just need to notify that something has happened in order to initiate another process, an event is published that may be caught by another process, which then starts executing. On the other hand, orchestration is easier to explain and visualize, it is easier for more technically minded business users to tweak and modify, and I think it is easier to test and validate from a business perspective. An orchestrated architecture like this also (usually) demands good service discovery, high-quality automated documentation, and attention to non-functional requirements - all things I value greatly.
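For illustration, that fire-and-forget principle boils down to something like the following minimal sketch (an in-process stand-in for a real message broker; the topic name and handlers are invented, not our actual services):

    from collections import defaultdict

    subscribers = defaultdict(list)      # topic -> list of handler callables

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, event):
        # The publisher neither waits for nor knows about the consumers.
        for handler in subscribers[topic]:
            handler(event)

    # Downstream processes react to the event and start their own work.
    subscribe("loan.approved", lambda e: print("notify customer", e["customer_id"]))
    subscribe("loan.approved", lambda e: print("start payout for", e["loan_id"]))

    # The upstream service only announces the fact; no synchronous response.
    publish("loan.approved", {"loan_id": "L-42", "customer_id": "C-7"})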
All of those orchestration strengths are open questions to me in the choreographed approach, since I don't have first-hand experience running it at large scale - just some local test prototypes.
But I think you see where I am coming from. I'm trying to consider the alternatives without driving the company all the way in the other direction, only to regret it in the end.
Perhaps you can share your own experience with a similar situation or share an interesting link or two? Or am I looking for a silver bullet that doesn't exist yet?

Services need to interact - services that don't interact are not part of the same system. The search needs access to the catalog, the cart doesn't get the price info from the page, the account needs the purchase history, the recommender needs purchase history, the cart needs to verify the currently available coupons, the inventory needs to know something was purchased, etc.
Service boundaries are set to minimize the needed interactions. It can make sense to cut a service into smaller components, but if they share a database (internal structure), they are different aspects of the same service.
When services interact, it creates a level of coupling - at the very least, this coupling is some API (JSON or otherwise) that the service has to "maintain" so that other services can interact with it.
Another type of coupling is temporal coupling, which is what you get in request-reply situations (and which you can eliminate in event-driven systems). However, orchestration vs. choreography is not about these differences (even though orchestration is mostly associated with request/reply) - it is about central control and governance vs. flexibility and serendipity.
Orchestration has risks, like business logic migrating out of services and into the orchestration layer, while choreography runs the risk of chaos. By the way, direct request/reply integration has the worst of both worlds, but wins on simplicity when systems are small enough.
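To make the distinction concrete, here is a deliberately simplified sketch of the two control styles (the order flow and service names are invented; a real engine adds persistence, retries, compensation, and so on):

    # Hypothetical order flow shown in both styles; every service call is a stub.
    def charge_card(order):   print("charging card for order", order["id"])
    def book_shipment(order): print("booking shipment for order", order["id"])
    def confirm(order):       print("confirming order", order["id"])

    # Orchestration: a central engine owns the process definition and invokes
    # each step in turn. The flow is explicit and easy to see, but the risk is
    # that more and more business logic drifts into this one place.
    def orchestrate(order):
        charge_card(order)
        book_shipment(order)
        confirm(order)

    # Choreography: no central process. Each service reacts to an event and
    # emits the next one, so the end-to-end flow exists only implicitly.
    handlers = {}
    def emit(topic, event):
        for handler in handlers.get(topic, []):
            handler(event)

    handlers["order.placed"]    = [lambda e: (charge_card(e), emit("payment.charged", e))]
    handlers["payment.charged"] = [lambda e: (book_shipment(e), emit("shipment.booked", e))]
    handlers["shipment.booked"] = [lambda e: confirm(e)]

    orchestrate({"id": 1})              # flow lives in one function
    emit("order.placed", {"id": 2})     # flow emerges from event reactions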
Choosing between the two is a balancing act (like most architectural decisions). For instance, Netflix built on choreography for a long time, but then found they needed some control back and introduced an orchestration engine. Nothing is a silver bullet :)
Personally, I like choreography better because of the reduced coupling and flexibility, and I favor tools like OpenZipkin to bring some order into the chaos.
You can see a partial example of an orchestration-based architecture in slides 10-22 of a presentation I did about microservices.

I think I understand where you're coming from, having recently redesigned a system to a "microservices" architecture. I like (and use) the approach by these guys: http://scs-architecture.org/
The main point is that you try to avoid cross-dependencies between your "services", which basically makes choreography obsolete. The hard part is decomposing your problem domain into chunks that do not need each other for any of the executed business cases. They may need different kinds of data that may or may not be "shared" (as in present in multiple systems), but they don't need synchronous calls between them for any given business case.
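As a sketch of what "shared data without synchronous calls" can look like (the systems, event, and prices here are all invented): each self-contained system keeps its own local copy of the data it needs, updated asynchronously:

    # Two hypothetical self-contained systems. Checkout never calls Catalog
    # synchronously; it maintains its own read-only copy of the prices it
    # needs, kept up to date by asynchronous replication events.
    catalog_prices = {"sku-1": 100}    # owned by the Catalog system

    checkout_price_cache = {}          # Checkout's local, eventually
                                       # consistent copy of the same data

    def on_price_changed(event):       # replication event handler in Checkout
        checkout_price_cache[event["sku"]] = event["price"]

    def checkout_total(cart):          # a business case served entirely
        return sum(checkout_price_cache[sku] for sku in cart)

    # Catalog announces a change; Checkout updates its copy asynchronously.
    on_price_changed({"sku": "sku-1", "price": 120})
    print(checkout_total(["sku-1"]))   # -> 120, with no call to Catalog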
This approach is quite different from what Netflix is doing, for example. Those guys/gals are doing chain-calling through different layers of services, each adding its own logic to the "process". This model might fit in some cases, and probably fits Netflix's case. But it may not be necessary for you.
The ideal Self-Contained System would be completely independent of other Self-Contained Systems, would cover one or more highly cohesive business functions (in full depth, from the UI to persistence!), and would not call any other system synchronously. The ideal system lets the client "orchestrate", simply by offering links through its Web (HTML) interface.
Think more like Amazon. The "landing page" is a different application than the "search", which is different again from the "checkout". They are completely different applications, which sometimes even look a bit different! They are integrated by links and forms in HTML, not explicitly orchestrated.
This might be what you are looking for.
Some warnings: the first instinct of some people is to have a "Customer" microservice, or a "Product Repository" microservice, and the like. This will not lead to Self-Contained Systems, as you will need synchronous calls to these things, making them essentially "central" components. The key is to split along the business domain - that is, bounded contexts à la Eric Evans.

Related

Microservice granularity: Per domain model or not?

When building a microservice-oriented application, I wonder what the appropriate microservice granularity would be.
Let's imagine an application consisting of:
A set of various resource types, where each resource maps to a given business model (e.g. in a todo app, resources could be User, TodoList and TodoItem...)
Each of those resources is saved in a NoSQL database that may be replicated.
Each of those resources is exposed through a REST API.
The application manages an internal chat room.
An API gateway for bringing together chat room and REST API interaction.
The application front end: an SPA connected to the API gateway.
The first (and naive) approach when thinking about how microservices could match the needs of this application would be:
One monolithic service managing EVERY resource and all business logic:
By managing, I mean providing the REST API for all of those resources and handling their persistence in the database.
One service for each database replica.
One service providing the internal chat room, using WebSockets or similar.
One service for authentication.
One service for the API gateway.
One service serving the static assets for the SPA front end.
Another approach could be to split service 1 into as many services as there are business models in the system (let's call these "resource services").
I wonder what the benefits of this second approach are.
In fact, I see a lot of downsides to it:
The need to set up an inter-service communication mechanism.
When requesting a service representing resource X that has a relation to resource Y, a lot more work is needed (i.e. an inter-service request; see the sketch after this list).
More devops work.
More difficulty sharing common code between resource services.
Where to put business logic?
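To make that inter-service-request downside concrete, a minimal sketch (the service hosts, paths, and payloads are all invented):

    # Hypothetical: resolving a relation that used to be an in-process join
    # now costs an extra network round trip between two resource services.
    def http_get(url):                 # stand-in for a real HTTP client call
        print("GET", url)
        return {"owner_id": "u-7"}     # canned response for illustration

    def get_todo_list(list_id):
        todo = http_get(f"http://todolist-svc/lists/{list_id}")
        # Extra hop (plus timeouts, retries, auth...) just to embed a related
        # resource that used to live one table away in the monolith:
        todo["owner"] = http_get(f"http://user-svc/users/{todo['owner_id']}")
        return todo

    get_todo_list("tl-1")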
When starting a fresh project, this second approach seems to me a bit over-engineered.
I feel like starting with the first approach and THEN splitting the monolithic resource service into several specific services, depending on the observed needs, would minimize the complexity and the risks.
What are your opinions on that?
Are there any best practices?
Thanks a lot!
The first approach is not the microservice way, by definition.
And yes, the idea is to split - one service per bounded context: one for users, one for inventory, one for todo things, etc.
The idea of microservices, at its simplest, assumes:
You are willing to pay extra devops work in exchange for modularity and for removing, as completely as possible, the dependencies between different bounded contexts (and between dev/product/project-management teams).
The idea revolves around ownership and modularity, allowing separate teams to develop their own piece of code without being required to know the rest of the system. As long as there is a ubiquitous language (a common set of conventions, communication protocols, terminology and documentation), they can work in a completely isolated, autonomous fashion.
Maintaining, managing, testing, and developing become much faster - at the cost of an initial devops and architecture-engineering investment.
Shared code should be minimal, and where it is required, it should represent the ubiquitous language (a common communication interface / set of conventions). Sharing well-documented code that acts as an integration/infrastructure mini-framework, with a dedicated dev/devops team attached to it, can be easy business - as long as it is, as I said, well documented and treated as a separate architecture-related sub-project.
A properly engineered microservice architecture can cut maintenance and development times by a huge margin, but it requires quite a serious reason to adopt it (there are lots of reasons, and lots of articles on that; I won't go into them here) and quite a serious engineering investment at the start.
It brings modularity, a concept of ownership, and decoupling of the different contexts of your app.
My personal advice: check whether you really need a microservice architecture. If you cannot invest the engineering thought and devops effort at the start, and do not have proper reasons for such a system - why bother?
If you do need microservices, I would really advise against the first method. You will build the wrong things, miss the true challenges of microservices, and could end up with a huge refactor that takes more work than engineering the microservice system properly from the start. It's like building a square peg and then trying to fit it into a round hole later.
Now, answering your question title: granularity (your question body is a bit different from your title).
Attach it to the domain model / bounded context. You can make meaty services at the start in order to avoid complex distributed transactions.
First, just answer the question: do you need distributed transactions in your design/architecture at all?
If not, you probably have a good design.
Passing reference ids between models from different microservices should suffice; if not, try to rethink whether more of the complex transactions could be avoided.
If your system has an unavoidable amount of distributed transactions, perhaps look towards using or building a CQRS mini-framework as your "shared code infrastructure component" / communication protocol.
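To illustrate the "passing reference ids" point, a minimal sketch (the Order/Customer contexts and fields are invented):

    # The Order context stores only the *id* of a customer, never a copy of
    # the Customer aggregate owned by the other microservice. Creating an
    # order touches only the Order database - no distributed transaction.
    from dataclasses import dataclass

    @dataclass
    class Order:                 # lives in the Order service
        order_id: str
        customer_id: str         # reference id into the Customer context
        total_cents: int

    order = Order(order_id="o-1", customer_id="c-42", total_cents=9900)
    print(order.customer_id)     # the foreign context is referenced, not embedded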
This is the key problem of microservices, or of any other SOA approach. It is where theory meets reality. In general, you should not force a microservices architecture for its own sake; it should come naturally from functional decomposition (top-down) and from operational, technological and devops needs (bottom-up). The first approach is closer to what you would need to do; however, at the first step, do not focus so much on the technology aspect. Ask yourself why you would need to implement a separate service for a particular business function. Treat it as a micro-application with all its technical resources. Ask yourself whether there is a reason to implement a particular function as a full-stack app.
Some of the functionalities you mentioned in scenario 1 are naturally OK; the 'authentication' service, for example, is probably a good candidate.
For decomposing business functions into separate services, focus on the 'dependencies' problem: if there are too many dependencies and you see that you would have to implement a bigger chunk of the data model, then naturally it is not a microservice any more.
Try a litmus test: if you can 'turn off' a particular piece of functionality and the system still makes sense, it is a candidate for a service, or for further decomposition.

Transferring from a monolithic application to a microservice one - approach

We currently have a monolithic web application with a backend built in Scala (Scalatra for the REST APIs) and a front end in AngularJS. The application is deployed on AWS. We are going to build a new component, which we would like to build as an independent microservice, and this component will have its own data repository, which may not be the same type of DB. It will also be built with Scala, but with Akka for the REST APIs. The current application is structured as a DB module, a domain module, a web service API module, and a front end/client module.
What is a good approach for a smooth journey? We possibly need to set up the microservice infrastructure first, such as an API gateway service, along with other pieces.
Too many ways, too many approaches, too many best practices. It really all depends on the analysis of your application, trying to figure out where the natural breaks are.
One place I start is looking at the data model. Lots of people advocate each microservice having its own database. Well, that's fine and dandy, but it can be really difficult to achieve without breaking things all over the place. But if you get lucky and there's a place where the data segregates nicely, then see what services would go with it and try breaking it out.
If you do not adhere to the separate-database mentality, then I start with the low-hanging fruit - oftentimes nothing more than simple CRUD operations with a little business logic mixed in, providing some of the basic support for larger-grained services to come. Of course, this becomes more iterative; I'm not sure your organization will like that.
Which brings me to methodology. Organizations who've created monolithic applications often have methodologies that support them, whereas microservices require a much different approach to application development. Is your organization ready for that?
Needless to say, there's no right answer. I've gone to many conferences where these concepts are high on the interest list and the fact is there's no silver bullet, everyone has different ideas of what is right, and there's exceptions galore. You're just going to have to bite the bullet and cross your fingers, unfortunately.

SOA - How granular should services be to maintain performance?

I am taking over a project to replace an ancient legacy system from the ground up. Before I came on, the company hired a consultant who put together a basic sketch of the system and pushed SOA heavily. This resulted in a long list of "entity services", with the intention of them being composed into more complex service combinations. For instance, a user wanting committee info would hit the "Committee" service, which then calls the "Person" service to get its members, and the "Meeting" service to get its meetings, and so on.
I understand the flexibility gains in this, but my concern is performance. It seems to me that a system built with such a fine level of granularity in its services spends too many resources on translating service messages (sketched below), and that the performance will be unacceptable. It also seems to me that the flexibility gains could still be had using basic reusable objects, although in that case the benefit of a technology-agnostic interface is traded away for performance.
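Roughly, the call fan-out I'm worried about looks like this (the endpoints and canned data are invented for illustration):

    # One "get committee" request fans out into 1 + N + M downstream calls,
    # each paying its own serialization/deserialization cost.
    def fetch(url):                      # stand-in for a real HTTP client call
        print("GET", url)
        if url.startswith("/committees"):
            return {"member_ids": ["p-1", "p-2"], "meeting_ids": ["m-1"]}
        return {}

    def get_committee(committee_id):
        committee = fetch(f"/committees/{committee_id}")
        members  = [fetch(f"/persons/{pid}")  for pid in committee["member_ids"]]
        meetings = [fetch(f"/meetings/{mid}") for mid in committee["meeting_ids"]]
        return {"committee": committee, "members": members, "meetings": meetings}

    get_committee("c-1")   # every relation becomes another network round trip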
For more background: The organization requesting this software does not currently have a stable of third-party software suites that need integrating with. This software will replace all suites. There are also currently no outside consumers who need to access the data outside of the provided website interface -- all service calls will be from other pieces inside our system. The choice of SOA in this case seems entirely based on the concept of "preparation".
So my question -- what level of granularity is acceptable in a stable of services without sacrificing performance? Am I being too skeptical of the performance hits we'll take implementing all our entities as services? Should functionality be available as web services only when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
First off, finding the "sweet spot" in the number of services is difficult, for sure. Too many services and your integration costs suffer; too few and your implementation costs suffer. You have to find a good balance.
My advice to you is to follow Juval Lowy's methodology in that you should break down your services by areas of volatility, or areas of change. This will give you your granularity level. You should also read his WCF book if you can.
As for the performance, WCF will inherently support many thousands of calls per second depending on your use cases and hardware. Services calling services is not a problem. The platform will support it if you do your part. For example, use the right binding for the right scenario (named pipes to call services on the same machine and TCP to call services across machine where possible). You should also implement a vertical slice of the application and do performance testing before building the rest of the application. This will verify your architecture.
When I say "Service", I mean the complete vertical component that can perform a complete independent operation. And I don't prefer going in more granularity unless there is exceptional requirement. In my view of SOA, A service should perform the meaningful business function that can be independently performed. A service should not require another service to complete its function.
What level of granularity is acceptable in a stable of services without sacrificing performance?
Individual entities. As described by the consultant.
Am I being too skeptical of the performance hits we'll take implementing all our entities as services?
Yes. Way too skeptical.
A decent framework can optimize some of these requests so that they don't involve a lot of network overhead.
As with SQL databases, the problems are largely solved. You'll find that the underlying applications that you're presenting as services are the bottlenecks. The SOA layer is largely plumbing. The bits still need to move through the pipes, the SOA layer just organizes them more intelligently than most of the alternatives.
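One such optimization (a sketch; the batch endpoint is invented, not a feature of any particular framework) is to coalesce N entity lookups into a single round trip:

    # Instead of one call per person, the composing service asks the Person
    # service for all members in a single batched request.
    def fetch_persons_batch(person_ids):
        # one round trip: GET /persons?ids=p-1,p-2,p-3 rather than N GETs
        print("GET /persons?ids=" + ",".join(person_ids))
        return [{"id": pid} for pid in person_ids]   # canned response

    members = fetch_persons_batch(["p-1", "p-2", "p-3"])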
Should services only be implemented when they are needed, with the "preparation" focus instead going into designing the business layer for the probability of services later being dropped on top of it?
Yes.
That's what "Agile" means.
Find a user story. Build just the services (and entities) for that story.
You will have some significant overhead for the first few stories in getting your SOA framework all squared away and deployable as a simple, repeatable release step.
Never do extensive "preparation" for things you "may" need in some improbable future. Read up on Agile and how to prioritize a backlog.

How does one go about breaking a monolithic application into web-services?

Not having dealt much with creating web services, either from scratch or by breaking apart an existing application, where does one start? Should a web service encapsulate an entity, much like a class does, or should the service have more or less to it than that?
I realize that much of this comes down to a case-by-case analysis of the needs, but are there any general guidelines, best practices, or even small nuggets of information that web service veterans can impart to a relative newbie?
Our web services are built around functional areas. Sometimes this is just for a single entity, sometimes it's more than that.
For example, if you have a CRM, one of your web services might revolve around managing Contacts: creating, updating, searching for them, etc. If you do some type of batch processing, a web service might exist to create and submit a job.
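A minimal sketch of such a functional-area service (using Flask for brevity; the routes and fields are invented):

    # A "Contacts" service organized around a functional area rather than a
    # bare entity: create, update, and search all live in one service.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    contacts = {}                        # stand-in for the real data store

    @app.post("/contacts")
    def create_contact():
        data = request.get_json()
        contacts[data["id"]] = data
        return jsonify(data), 201

    @app.put("/contacts/<contact_id>")
    def update_contact(contact_id):
        contacts[contact_id] = request.get_json()
        return jsonify(contacts[contact_id])

    @app.get("/contacts/search")
    def search_contacts():
        q = request.args.get("q", "").lower()
        return jsonify([c for c in contacts.values()
                        if q in c.get("name", "").lower()])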
As far as best practices go, bear in mind that web services add processing overhead, mainly in serializing/deserializing the data as it goes across the wire. Because of this, the main upside is scalability: you trade increased per-transaction processing time for the ability to run the service across multiple machines.
The main parts to pull out into a web service are those areas which are common across multiple applications, or which you intend to expose publicly, or which would benefit from greater load balancing.
Of course, you need to analyze your application to see where any bottlenecks really are. In some cases it doesn't make sense. For example, if you have a single application that isn't sharing its code and/or the bottleneck is primarily database related.
Web services are exactly what they sound like: services for the Web.
A web service should be built as an API for the service layer of your app.
A service usually encapsulates an entity larger than a single class.
To learn more about service layers and refactoring to add a service layer, read about DDD.
Good Luck
The number one question is: to what end are you refactoring your application functionality to be consumed as a bunch of web services?

n-tier design with website and backend transaction processor

We have a website where transactions are entered and put through a workflow. We are going to follow the standard tiered-application pattern: BLL (Business Logic Layer), DTO (Data Transfer Object), DAL (Data Access Layer), etc. We need to separate everything out because some transactions will cross multiple applications with different business logic.
We also have a backend processor. It handles our transactions once the workflow has been completed. It works with various third-party systems - some of which are unstable, or have unstable interfaces - and then reports the status of the transaction. Each website will have its own version of the backend processor.
Now the question: with n-tier, the usual advice is a new BLL for each application. With the layout above, it can be argued either that the backend processor and the website are one application acting in unison, or that they are two applications with different business logic. What would be the ideal way to handle this - have them act as one system, or two?
One thing that I picked up on while learning MVC over the last couple years is the difference between what I call application logic and domain logic. I don't like the term business logic anymore, because it has too much baggage from all the conflicting theories and practices that have used that term too loosely.
Domain logic is the "traditional" business logic: how things are supposed to act, what they require (validation), etc. Application logic is anything that is specific to a given presentation of your domain, i.e. when the user clicks this submit button in your web app, they are directed to this web page over here (note that this has nothing to do with how a WinForms app or a background processor would work). Application logic should live in your application. Domain logic should live in your BLL and lower, and be reusable across the different applications that may use your common "business logic".
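A minimal sketch of that split (the transaction rules and the redirect target are invented):

    # Domain logic: reusable by the website, the backend processor, or any
    # other presentation of the domain.
    class Transaction:
        def __init__(self, amount):
            if amount <= 0:                    # validation is a domain rule
                raise ValueError("amount must be positive")
            self.amount = amount
            self.state = "pending"

        def complete(self):                    # how things are supposed to act
            if self.state != "pending":
                raise RuntimeError("already processed")
            self.state = "completed"

    # Application logic: specific to one presentation (the web app). Which
    # page the user sees next is not the domain's business.
    def handle_submit_click(form):
        tx = Transaction(int(form["amount"]))
        tx.complete()
        return "redirect:/transactions/confirmation"

    print(handle_submit_click({"amount": "100"}))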
Kind of a general answer, but I hope that helps.
You might consider partitioning the functionality to reflect the organization of the stakeholders. Usually, if you have two distinct organizational groups, then development and administration requirements are easier to manage if the functionality is similarly partitioned. And vice versa.
Most of us don't spend that much time writing applications that explore the outer boundaries of hardware and software capabilities.
If you separate your concerns well, then I think you will be able to view them as the same application with a single business logic layer; there is no point writing the same code twice. The trick will be enforcing the separation of concerns between the user interface portions of the website and the business logic in your BLL library.
Performance is going to be an issue as well: you have to ensure that your batch processing doesn't starve your website of the resources it needs to perform its tasks. That may be an argument for keeping them more separate; however, as they're likely sharing a database (or some other file-based resource) anyway, it may be an issue regardless.
I would keep a common business logic library programmed to interfaces and fully separated from your other concerns.
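A minimal sketch of "programmed to interfaces" (names invented): both the website and the backend processor depend on the abstraction, not on each other:

    from abc import ABC, abstractmethod

    class TransactionWorkflow(ABC):           # the shared contract
        @abstractmethod
        def submit(self, transaction_id: str) -> None: ...

    class DefaultWorkflow(TransactionWorkflow):
        def submit(self, transaction_id: str) -> None:
            print("running workflow for", transaction_id)

    def website_handler(workflow: TransactionWorkflow):
        workflow.submit("tx-1")               # the UI knows only the interface

    website_handler(DefaultWorkflow())        # wiring happens at the edges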
The "Ideal" way to do this depends on the project at hand and the various requirements of the system.
My default design is to have it act as one app. But if there are heavyweight processes taking place, I like to create a batching process in which the parameters of the requested job are stored and then acted upon by a separate process.
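A minimal sketch of that batching idea (the in-process queue here is a stand-in for a database table or message queue):

    import queue

    jobs = queue.Queue()                      # stand-in for a DB table/queue

    def website_request_job(params):
        jobs.put(params)                      # store parameters, return fast

    def worker_loop_once():                   # runs in its own process/service
        params = jobs.get()
        print("processing heavyweight job with", params)

    website_request_job({"report": "monthly", "format": "pdf"})
    worker_loop_once()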