How to enforce entity dependencies in SOA environment - build / download? - web-services

When establishing several modular and independent services, I am challenged with dependencies / stored relationships between entities. Consider Job Position and Employee. In my system, the Employee's Assignment is linked (URI) to the Job Position.
For our application, the Job Positions would be managed by a separate service than the Employee service, which leads to the challenge of constraints to prevent inadvertent removal of a Job Position if an employee is already matched to that position.
I've designed a custom solution leveraging a Registry (which holds dependency details, etc.) and enforcing a paradigm across the inter-dependent services, but it is complex. In an SOA environment, how could one manage these inter-dependencies?
Many thanks in advance!

In some ways your question could be rephrased as "How do I enforce referential integrity in an SOA environment?". Well, the answer is you can't. That's kind of a by-product of the autonomy tenet of SOA ("services are autonomous").
So almost by definition, the Job Position in the Employee service is not the same thing as the Job Position in the Job Position service. This is actually a good thing. Even though both services define Job Position, they do so from two different capabilities, and are free to develop and evolve their capability as needs arise.
So, hard constraints on the removal of data within one service boundary based on the existence of similar data inside another service boundary are just not possible (or even desirable).
This is all very well, but then how do you avoid the situation where Employees may be "matched" to a Job Position which has changed in some way, either via removal or update?
Well, services can be interested in changes to other services. And in these situations, services can become consumers of each other. It's fairly obvious the Employee capability would be interested in changes to the Job Position capability.
Events are actually a fairly well-used design pattern for this scenario. If a business action results in a change to the data of a service, that service can publish an event message which describes the change. Other services can become consumers of this type of event and can handle it in their own fashion. Because eventing is usually implemented with pub-sub semantics, any service capability which so desires can subscribe to the event.
In your example, the event which could be published if a job position was deleted could be defined as (using C#):
class JobPositionRemoved
{
    public int JobPositionId { get; set; }
    public string JobPositionName { get; set; }
    ...
}
How a consumer of this event actually handles it (what action would be taken by the consumer) is another question and would depend on the capability of the consumer. As an example, your Employee service could gather a list of the Employees with this job position and flag them for review, or add them to a queue for "job position reassignment".
Your event could even include a field called int ReplacedByJobPosition which would enable consumers to automatically update any capability that depended on the removed job position.
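As an illustration only, a consumer in the Employee service might look roughly like this; the entity, repository, and Handle entry point below are assumptions and are not tied to any particular messaging framework:
using System.Collections.Generic;

// Minimal sketch of a consumer inside the Employee service.
class Employee
{
    public int Id { get; set; }
    public int? JobPositionId { get; set; }
    public bool NeedsPositionReview { get; set; }
}

interface IEmployeeRepository
{
    IEnumerable<Employee> FindByJobPosition(int jobPositionId);
    void Save(Employee employee);
}

class JobPositionRemovedHandler
{
    private readonly IEmployeeRepository _employees;

    public JobPositionRemovedHandler(IEmployeeRepository employees)
    {
        _employees = employees;
    }

    // Invoked by the messaging infrastructure whenever a JobPositionRemoved
    // event arrives from the Job Position service.
    public void Handle(JobPositionRemoved message)
    {
        foreach (var employee in _employees.FindByJobPosition(message.JobPositionId))
        {
            // Flag the employee so a person (or another process) can reassign them.
            employee.NeedsPositionReview = true;
            _employees.Save(employee);
        }
    }
}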
As long as your event is delivered across a fault-tolerant transport (such as message queuing), you can be fairly confident that while you won't have referential integrity between your service capabilities, your system as a whole should become consistent eventually.
By using events in this way, you also avoid the need for a centralized registry of inter-dependencies (which sounds like a nasty idea). Each service is responsible for publishing events about changes to its own data, and dependencies are defined by services consuming events from each other.
Hope this is helpful.
EDIT
In answer to your comment - while I can see the benefit of having another service take care of the position-reassignment problem, and I don't see any massive problems with this, there are a few considerations.
One of the reasons why service boundaries and business capability boundaries are a natural fit is that when you change a business capability (eg a change in Billing procedure) it does not generally impact other business capabilities (CRM/Finance/etc). By introducing shared services you become coupled to more than one capability; such a service doesn't have well-defined boundaries and, as a result, has a higher cost of ownership because it will need to be changed a lot.
Additionally you could argue that the consumer of a business event (eg, JobPositionRemoved) should take responsibility for the entire handling of that event.
The handling of the event may well trigger a subsequent event to be published (such as ReviewTaskCreatedForEmployeeChange) which can then be handled by another consumer (eg a workflow tool) if desired.
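For illustration, that follow-on event could be just another small message class (the fields shown here are assumptions):
class ReviewTaskCreatedForEmployeeChange
{
    public int EmployeeId { get; set; }
    public int RemovedJobPositionId { get; set; }
}
The Employee service's handler would publish it after flagging the affected employee, and a workflow tool could subscribe to it in exactly the same way the Employee service subscribes to JobPositionRemoved.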

Related

What would be the Pattern to display all existing Actors

I built an Akka application that implements device management. Every device is an Akka actor, and I implemented an Akka finite state machine to control the lifecycle of each device (FUNCTIONAL, BROKEN, IN_REPAIRS, RETIRED, etc.), and I persist the devices with Akka Persistence to Cassandra.
Everything works like a dream, but I have a dilemma and would like to ask what the pattern would be to deal with it in Akka.
I will have nearly 1,000,000 devices. Akka is ideal for managing those single instances, but how do I handle it when a user wants to see all devices in the system, select one, and change its state?
I can't show this from the Akka journal table; I would not be able to show anything other than the persistenceId.
So how would you handle this dilemma?
My current plan: since all events come into my system from Kafka, also consume these messages from the topic and redirect them to Solr/Elasticsearch, so I can index some metadata along with the persistenceId and the user can select a device to process with its Akka actor.
Do you have a better idea, or how would you solve this problem?
Another option is to save this information in Cassandra in another keyspace, but for some reason I don't fancy that...
Thanks for any answers...
Akka Persistence is for managing actor state so that it can be resilient to application failures (https://www.reactivemanifesto.org/). It may not be optimal for business cases like this. I understand that your requirement is to be able to browse the actors in the system. I see a couple of options:
Option1:
Akka supports a feature called named actors (https://doc.akka.io/docs/akka/current/general/addressing.html). In your case you have a one-to-one mapping between devices and actors, so you can take advantage of the named-actors feature: when actors are created in the actor system, name them with their device ids. Now you can browse all your device ids (and, as this is part of your use case, you can have a searchable module using Solr/Elasticsearch as you mentioned). Browsing devices then effectively means browsing the actors in your system, and you can use a named actor's path to retrieve it from the system and perform actions on it.
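A minimal sketch of that idea, written here with Akka.NET syntax for consistency with the C# used earlier on this page (the question is about Akka on the JVM, but the addressing pattern is the same; DeviceActor and ChangeState are made up):
using Akka.Actor;

// ChangeState and DeviceActor are stand-ins for the question's own messages
// and persistent FSM actors.
class ChangeState
{
    public string NewState { get; }
    public ChangeState(string newState) => NewState = newState;
}

class DeviceActor : ReceiveActor
{
    public DeviceActor()
    {
        Receive<ChangeState>(msg =>
        {
            // The device's FSM / persistence logic would live here.
        });
    }
}

class NamedActorExample
{
    static void Main()
    {
        var system = ActorSystem.Create("devices");

        // Name each actor after its device id when it is created...
        var deviceId = "device-42";
        system.ActorOf(Props.Create(() => new DeviceActor()), deviceId);

        // ...so that a device id found via your Solr/Elasticsearch index is
        // enough to look the actor up again and send it a command.
        system.ActorSelection("/user/" + deviceId).Tell(new ChangeState("IN_REPAIRS"));
    }
}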
Option2:
You can use monitoring tools to trace and browse actors in the application. Beyond this need, they provide several other useful metrics.
https://www.lightbend.com/blog/akka-monitoring-telemetry
https://kamon.io/solutions/monitoring-for-akka/
Akka Persistence is heavily oriented to the Command-Query Responsibility Segregation style of implementing systems. There are plenty of great outlines describing this pattern if you want more depth, but the broad idea is that you divide responsibility for changing data (the intent to change data being modeled through commands) from responsibility for querying data. In some cases this responsibility carries through to separately deployed services, but it doesn't have to (the more separated, in terms of deployment/operations or development, the less coupled they are, so there's a cost/benefit tradeoff for where you want to be on the level-of-segregation spectrum).
The portion of the system which handles commands and decides how (or even if) a given command updates state is typically called the "write-side". In your application, the FSM actors modeling the state of a device and persisting changes would be the write-side, and you seem to have that part down pat.
The portion handling the queries is, correspondingly, often called the "read-side", and one key benefit is that it can use a different data model than the write-side, up to and including using a different data store (e.g. Solr/Elasticsearch).
Since you're using Akka Persistence and event-sourcing (judging from mentioning the journal table), Akka Projections provides a good opinionated wrapper for publishing events from the write-side to Kafka for another service to update a Solr/Elasticsearch read-side with. It does require (at least at this time) that your write-side tag events; with some effort you can do something similar by combining the persistenceIds and eventsByPersistenceId query streams to feed events from the write-side to Kafka without having to tag.
Note that when going down the CQRS path, you are generally committing to some level of eventual consistency between the write-side and the read-side.
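As a very rough sketch of what such a read-side updater could look like: the event type, the ISearchIndex abstraction, and the document shape below are all assumptions; in practice this might be an Akka Projections handler, a Kafka consumer, or both.
// DeviceStateChanged, ISearchIndex, and the document shape are assumptions.
class DeviceStateChanged
{
    public string PersistenceId { get; set; }
    public string DeviceId { get; set; }
    public string State { get; set; }   // FUNCTIONAL, BROKEN, IN_REPAIRS, RETIRED...
}

interface ISearchIndex
{
    void Upsert(string id, object document);   // e.g. backed by Solr/Elasticsearch
}

class DeviceProjection
{
    private readonly ISearchIndex _index;

    public DeviceProjection(ISearchIndex index)
    {
        _index = index;
    }

    // Called for every event coming off the write-side (via Kafka, a
    // projection handler, or an eventsByPersistenceId stream).
    public void Handle(DeviceStateChanged evt)
    {
        // The read-side model only needs what the UI lists and searches on;
        // it can look nothing like the write-side (actor) state.
        _index.Upsert(evt.DeviceId, new { evt.DeviceId, evt.PersistenceId, evt.State });
    }
}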

AWS Event-Sourcing implementation

I'm quite a newbie to microservices and Event-Sourcing, and I was trying to figure out a way to deploy a whole system on AWS.
As far as I know there are two ways to implement an Event-Driven architecture:
Using AWS Kinesis Data Stream
Using AWS SNS + SQS
So my base strategy is that every command is converted to an event which is stored in DynamoDB and exploit DynamoDB Streams to notify other microservices about a new event. But how? Which of the previous two solutions should I use?
The first one has the advantages of:
Message ordering
At-least-once delivery
But the disadvantages are quite problematic:
No built-in autoscaling (you can achieve it using triggers)
No message visibility functionality (apparently, asking to confirm that)
No topic subscription
Very strict limits on read transactions: you can improve this by using multiple shards, but from what I read here, you must then have a not-well-defined number of Lambdas with different invocation priorities and a not-well-defined strategy to avoid duplicate processing across multiple instances of the same microservice.
The second one has the advantages of:
Is completely managed
Very high TPS
Topic subscriptions
Message visibility functionality
Drawbacks:
SQS messages have best-effort ordering; I still have no idea what that means.
It says "A standard queue makes a best effort to preserve the order of messages, but more than one copy of a message might be delivered out of order".
Does it mean that, given n copies of a message, the first copy is delivered in order while the others are delivered out of order relative to the other messages' copies? Or could "more than one" be "all"?
A very big thanks for every kind of advice!
I'm quite a newbie to microservices and Event-Sourcing
Review Greg Young's talk Polyglot Data for more insight into what follows.
Sharing events across service boundaries has two basic approaches - a push model and a pull model. For subscribers that care about the ordering of events, a pull model is "simpler" to maintain.
The basic idea being that each subscriber tracks its own high water mark for how many events in a stream it has processed, and queries an ordered representation of the event list to get updates.
In AWS, you would normally get this representation by querying the authoritative service for the updated event list (the implementation of which could include paging). The service might provide the list of events by querying DynamoDB directly, or by getting the most recent key from DynamoDB and then looking up cached representations of the events in S3.
In this approach, the "events" that are being pushed out of the system are really just notifications, allowing the subscribers to reduce the latency between the write into Dynamo and their own read.
I would normally reach for SNS (fan-out) for broadcasting notifications. Consumers that need bookkeeping support for which notifications they have handled would use SQS. But the primary channel for communicating the ordered events is pull.
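A bare-bones sketch of that pull model follows; the feed client and event record are assumed abstractions (the feed could be an HTTP endpoint backed by DynamoDB, with older pages cached in S3):
using System.Collections.Generic;

// EventRecord and IEventFeedClient are hypothetical abstractions.
class EventRecord
{
    public long SequenceNumber { get; set; }
    public string Payload { get; set; }
}

interface IEventFeedClient
{
    // Returns events strictly after the given sequence number, in order.
    IReadOnlyList<EventRecord> GetEventsAfter(long sequenceNumber, int pageSize);
}

class Subscriber
{
    private readonly IEventFeedClient _feed;
    private long _highWaterMark;   // persisted by the subscriber itself

    public Subscriber(IEventFeedClient feed, long lastProcessedSequenceNumber)
    {
        _feed = feed;
        _highWaterMark = lastProcessedSequenceNumber;
    }

    // Called on a schedule, or whenever an SNS/SQS notification hints that
    // new events are available (the notification reduces latency, the pull
    // preserves ordering).
    public void Poll()
    {
        foreach (var evt in _feed.GetEventsAfter(_highWaterMark, pageSize: 100))
        {
            Apply(evt);                          // subscriber-specific handling
            _highWaterMark = evt.SequenceNumber; // advance only after handling
        }
    }

    private void Apply(EventRecord evt) { /* consumer-specific logic */ }
}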
I myself haven't looked hard at Kinesis - there's some general discussion in earlier questions -- but I think Kevin Sookocheff is onto something when he writes
...if you dig a little deeper you will find that Kinesis is well suited for a very particular use case, and if your application doesn’t fit this use case, Kinesis may be a lot more trouble than it’s worth.
Kinesis’ primary use case is collecting, storing and processing real-time continuous data streams. Data streams are data that are generated continuously by thousands of data sources, which typically send in the data records simultaneously, and in small sizes (order of Kilobytes).
Another thing: the fact that I'm accessing data from another microservice stream is an anti-pattern, isn't it?
Well, part of the point of dividing a system into microservices is to reduce the coupling between the capabilities of the system. Accessing data across the microservice boundaries increases the coupling. So there's some tension there.
But basically if I'm using a pull model I need to read data from other microservices' streams. Is it avoidable?
If you query the service you need for the information, rather than digging it out of the stream yourself, you reduce the coupling -- much like asking a service for data rather than reaching into an RDBMS and querying the tables yourself.
If you can avoid sharing the information between services at all, then you get even less coupling.
(Naive example: order fulfillment needs to know when an order has been paid for; so it needs a correlation id when the payment is made, but it doesn't need any of the other billing details.)
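In code, the event that fulfillment consumes might carry little more than that correlation id (the names here are made up):
// Fulfillment only needs to know that the order was paid, and which order;
// card numbers, amounts, and other billing details stay inside Billing.
class OrderPaid
{
    public string OrderId { get; set; }   // the correlation id
}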

Microservice granularity: Per domain model or not?

When building a microservice-oriented application, I wonder what the appropriate microservice granularity would be.
Let's imagine an application consisting of:
A set of various resource types, where each resource maps to a given business model (e.g., in a todo app the resources could be User, TodoList and TodoItem...).
Each of those resources is saved within a NoSQL database that could be replicated.
Each of those resources is exposed through a REST API.
The application manages an internal chat room.
An API gateway for bringing together the chat room and REST API interactions.
The application front end: an SPA application connected to the API Gateway
The first (and naive) approach when thinking about how microservices could match the need of this application would be:
One monolith service for managing EVERY resource and all business logic:
By managing I mean providing the REST API for all of those resources and handling the persistence of those resources within the database.
One service for each Database replica.
One service providing the internal chat room using websocket or whatever.
One service for Authentication.
One service for the api gateway.
One service serving the static assets for the SPA front end.
Another approach could be to split service 1 into as many services as there are business models in the system (let's call those services resource services).
I wonder what the benefits of this second approach are.
In fact I see a lot of downsides to this approach:
Need to set up an inter-service communication process.
When requesting a service representing resource X that has a relation with resource Y, a lot more work is needed (i.e. an inter-service request).
More devops work.
More difficulty to share common code between resource services.
Where to put business logic?
When starting a fresh project, this second approach seems to me a bit over-engineered.
I feel like starting with the first approach and THEN splitting the monolithic resource service into several specific services depending on the observed needs would minimize the complexity and risks.
What are your opinions on that?
Are there any best practices?
Thanks a lot!
The first approach is not the microservice way, by definition.
And yes, the idea is to split - one service per Bounded Context - one for Users, one for Inventory, one for Todo things, etc.
The idea of microservices, at its simplest, assumes:
You want to pay extra dev-ops work for modularity and for removing, as completely as possible, the dependencies between different bounded contexts (see dev/product/PM teams).
The idea lies around ownership and modularity, allowing separate teams to develop their own piece of code without requiring them to know the rest of the system. As long as there is a Ubiquitous Language (a common set of conventions/communication protocols/terminology/documentation), they can work in a completely isolated, autonomous fashion.
Maintaining, managing, testing, and developing become much faster - at the cost of an initial dev-ops and sophisticated architecture-engineering investment.
Shared code should be minimal, and if required, it should represent the Ubiquitous Language (a common communication interface/set of conventions). Sharing well-documented code that acts as an integration/infrastructure mini-framework, with a dedicated dev/dev-ops team attached to it, can be easy business, as long as it is, as I said, well documented and treated as a separate architecture-related sub-project.
A properly engineered microservice architecture can reduce maintenance and development times by a huge margin, but it requires quite a serious reason to use it (there are lots of reasons, and lots of articles on them; I won't start here) and quite a serious engineering investment at the start.
It brings modularity, the concept of ownership, and de-coupling of the different contexts of your app.
My personal advice: check whether you really need an MS architecture. If you cannot invest the engineering thought and dev-ops effort at the start and do not have proper reasons for such a system - why bother?
If you do need MS, I would really advise against the first method. You will develop the wrong things, miss the true challenges of MS, and could end up with a huge refactor that takes more work than engineering the MS system properly from the start. It's like making a square peg and then trying to fit it into a round hole later.
Now answering your question title: granularity (your question body is a bit different from your title).
Attach it to the Domain Model / Bounded Context. You can make meaty services at the start in order to avoid complex distributed transactions.
First, just answer the question of whether you need them in your design/architecture at all.
If not, you probably did a good design.
Passing reference ids between models from different microservices should suffice, and if not, try to rethink whether more of the complex transactions could be avoided.
If your system has an unavoidable amount of distributed transactions, perhaps look towards using/making a small CQRS mini-framework as your "shared code infrastructure component" / communication protocol.
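To illustrate the earlier point about passing reference ids between models: in the todo example, a TodoList would hold only the id of its owning User, never the User itself (the types below are hypothetical):
using System.Collections.Generic;

// Inside the TodoList service: the list references its owner only by id.
// The full User lives in (and is owned by) the User service.
class TodoList
{
    public string Id { get; set; }
    public string OwnerUserId { get; set; }   // a reference, not a copy of User
    public List<string> ItemIds { get; set; } = new List<string>();
}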
This is the key problem of microservices, or of any other SOA approach; it is where the theory meets reality. In general you should not force the microservices architecture for its own sake. It should rather come naturally from functional decomposition (top-down) and from operational, technological, and dev-ops needs (bottom-up). The first approach is closer to what you would need to do, but at the first step do not focus so much on the technology aspect. Ask yourself why you would need to implement a separate service for a particular business function. Treat it as a micro-application with all its technical resources. Ask yourself whether there is a reason to implement a particular function as a full-stack app.
Some of the functionalities you have mentioned in scenario 1) are naturally OK, such as the 'authentication' service - this is probably a good candidate.
For the decomposition of business functions into separate services, focus on the 'dependencies' problem: if there are too many dependencies and you see that you have to implement a bigger chunk of the data model, it is naturally not a microservice any more.
Try to apply a litmus test: if you can 'turn off' a particular functionality and the system still makes sense, it is a candidate for a service or for further decomposition.

Microservice Composition Approaches

I have a question for the microservices community. I'll give an example from the educational field but it applies to every microservices architecture.
Let's say I have student-service and licensing-service with a business requirement that the number of students is limited by a license. So every time a student is created a licensing check has to be made. There are multiple types of licenses so the type of the license would have to be included in the operation.
My question is which approach have you found is better in practice:
Build a composite service that calls the 2 services
Coupling student-service to licensing-service so that when createStudent is called the student-service makes a call to licensing-service and only when that completes will the student be created
Use an event-based architecture
People talk about microservice architectures being more like a graph than a hierarchy, and option 1 kind of turns this into a hierarchy where you get increasingly coarse composites. Other downsides are that it creates confusion about which service clients should actually use, and there's some duplication going on because the composite's API would have to include all of the parameters needed to call the downstream services.
It does have a big benefit, though, because it gives you a natural place to do failure handling and choreography, and to handle consistency.
Option 2 seems like it has disadvantages too:
the API of licensing would have to leak into the student API so that you can specify licensing restrictions.
it puts a lot of burden on the student-service because it has to handle consistency across all of the dependent services
as more services need to react when a student is created I could see the dependency graph quickly getting out of control and the service would have to handle that complexity in addition to the one from its own logic for managing students.
Option 3, while being decoupling heaven, I don't really think would work, because this is all triggered from a UI and people aren't really used to the "go do something else until this new student shows up" approach.
Thank you
Options 1 and 2 create tight coupling, which should be avoided as much as possible because you want your services to be independent. So the question becomes:
How do we do this with an event-based architecture?
Use events to keep track of licensing information from the license service inside the student service - practically a data duplication. The drawback here is that you only have eventual consistency, as the data duplication is asynchronous.
Use asynchronous events to trigger an event chain which ultimately triggers the student creation. From your question, it looks like you already got the idea, but have an issue dealing with the UI. You have two possible options here: wait for the student-creation (or failure) event with a small timeout, or (even better) make your system completely reactive (use a server-to-client push mechanism for the UI).
Application licensing and creating students are orthogonal so option 2 doesn't make sense.
Option 1 is more sensible but I would try not to build another service. Instead I would try to "filter" calls to student service through licensing middleware.
This way you could use this middleware for other service calls (e.g. a classes service), and changes to the APIs of both licensing and students can be made independently, as those things really are independent. It just happens that licensing uses the number of students, but this could easily change.
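A rough sketch of that "filter" idea, written here as ASP.NET Core-style middleware sitting in front of the student endpoints (the ILicensingClient, the route check, and the query parameter are all assumptions):
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

interface ILicensingClient
{
    Task<bool> CanAddStudentAsync(string licenseType);
}

// Sits in front of the student service's API; the student service's own
// business logic stays unaware of licensing.
class LicensingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILicensingClient _licensing;

    public LicensingMiddleware(RequestDelegate next, ILicensingClient licensing)
    {
        _next = next;
        _licensing = licensing;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var isCreateStudent = context.Request.Method == "POST"
            && context.Request.Path.StartsWithSegments("/students");

        if (isCreateStudent)
        {
            string licenseType = context.Request.Query["licenseType"];
            if (!await _licensing.CanAddStudentAsync(licenseType))
            {
                // Reject before the student service ever sees the call.
                context.Response.StatusCode = StatusCodes.Status403Forbidden;
                return;
            }
        }

        await _next(context);
    }
}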
I'm not sure how option 3, an event-based approach can help here. It can solve other problems though.
IMHO, I would go with option 2. A couple of things to consider: if you are buying completely into SOA, and furthermore microservices, you can't flinch every time a service needs to contact another service. Get comfortable with that.... remember, that's the point. What I really like about option 2 is that a successful student-service response is not sent until the license-service request succeeds. Treat the license-service as any other external service, where you might wrap the license-service in a client object that can be published by the license-service JAR.
the API of licensing would have to leak into the student API so that you can specify licensing restrictions.
Yes the license-service API will be used. You can call it leakage (someone has to use it) or encapsulation so that the client requesting the student-service need not worry about licensing.
it puts a lot of burden on the student-service because it has to handle consistency across all of the dependent services
Some service has to take on this burden. But I would manage it organically. We are talking about one service needing another one. If this grows and becomes concretely troublesome, then a refactoring can be done. If the number of services that student-service requires grows, I think it can be elegantly refactored: maybe the student-service becomes the composite service, and groups of independently used services may be consolidated into new services if required. But if the list of dependency services that student-service uses is only used by student-service, then I do not know if it's worth grouping them off into their own service. I think instead of burden and leakage you can look at it as encapsulation and ownership.... where student-service is the owner of that burden, so it need not leak to other clients/services.
as more services need to react when a student is created I could see the dependency graph quickly getting out of control and the service would have to handle that complexity in addition to the one from its own logic for managing students.
The alternative would be various composite services. Like my response for the previous bullet point, this can be tackled elegantly if it surfaces as a real problem.
If forced, each of your options could be turned into a viable solution; I am just making an opinionated case for option 2.
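For completeness, option 2 might look roughly like this in code, with ILicenseServiceClient standing in for the published client object (everything here is illustrative):
using System;

interface ILicenseServiceClient
{
    // Returns true if the license allows one more student of this type.
    bool TryReserveSeat(string licenseType);
}

class StudentService
{
    private readonly ILicenseServiceClient _licenses;

    public StudentService(ILicenseServiceClient licenses)
    {
        _licenses = licenses;
    }

    public void CreateStudent(string name, string licenseType)
    {
        // The student is only created once the license-service call succeeds,
        // so a successful response implies a valid license reservation.
        if (!_licenses.TryReserveSeat(licenseType))
            throw new InvalidOperationException("No license available for " + licenseType);

        // ...persist the new student...
    }
}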
I recommend option 3. You have to choose between availability and consistency - and availability is most often desired in microservices architecture.
Your 'Student' aggregate should have a 'LicenseStatus' attribute. When a student is created, its license status is set to 'Unverified' and a 'StudentCreated' event is published. The LicenseService should then react to this event and attempt to reserve a license for this student. It would then publish a 'Reserved' or 'Rejected' event accordingly. The student service would update the student's status by subscribing to these events.
When the UI calls your API gateway to create a student, the gateway would simply call the Student service for creation and return a 202 Accepted or 200 OK response without having to wait for the student to be properly licensed. The UI can notify the user when the student is licensed through asynchronous communication (e.g. via long-polling or web sockets).
In case the license service is down or slow, only licensing would be affected. The student service would still be available and would continue to handle requests successfully. Once the license service is healthy again, the service bus will push any pending 'StudentCreated' events from the queue (Eventual consistency).
This approach also encourages expansion. A new microservice added in the future can subscribe to these events without having to make any changes to the student or license microservices (Decoupling).
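Sketched in code, the status and events described above might look like this (the names follow the answer; the publishing and subscription mechanism is left out):
enum LicenseStatus { Unverified, Reserved, Rejected }

class Student
{
    public string Id { get; set; }
    public string Name { get; set; }
    public LicenseStatus LicenseStatus { get; set; } = LicenseStatus.Unverified;
}

// Published by the student service immediately after the student is stored.
class StudentCreated
{
    public string StudentId { get; set; }
    public string LicenseType { get; set; }
}

// Published by the license service after it reacts to StudentCreated;
// the student service subscribes and updates the student's LicenseStatus.
class LicenseReserved
{
    public string StudentId { get; set; }
}

class LicenseRejected
{
    public string StudentId { get; set; }
    public string Reason { get; set; }
}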
With option 1 or option 2, you do not get any of these benefits and many of your microservices would stop working due to one unhealthy microservice.
I know the question has been asked a while ago, but I think I have something to say that might be of value here.
First of all, your approach will depend on the overall size of your final product. I tend to go with a rule of thumb: if I would have too many dependencies between individual microservices, I tend to use something that would simplify and possibly remove these dependencies. I don't want to end up with a spider web of services! A good thing to look at here is message queues, like RabbitMQ for example.
However, if I have just a few services that talk to each other, I will just make them call each other directly, as any alternative solutions whilst simplifying the architecture, add some computing and infrastructure overhead.
Whatever approach you decide to go with, design your services with Hexagonal architecture in mind! This will save you trouble when you decide to migrate from one solution to another. What I tend to do is design my DAOs as "adapters", so a DAO that calls Service A will either call it directly or via a message queue, independently of the business logic. When I need to change it, I can just swap this DAO for another one without having to touch any of the business logic (at the end of the day, business logic doesn't care how it gets the data). Hexagonal architecture fits really well with microservices, TDD and black-box testing.
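A small sketch of that DAO-as-adapter idea (the port interface, both adapters, and the URL are illustrative):
using System.Net.Http;
using System.Threading.Tasks;

// The "port": business logic only ever sees this interface.
interface IServiceAGateway
{
    Task<string> GetDataAsync(string id);
}

// Adapter 1: call Service A directly over HTTP.
class HttpServiceAGateway : IServiceAGateway
{
    private readonly HttpClient _http = new HttpClient();

    public Task<string> GetDataAsync(string id) =>
        _http.GetStringAsync("http://service-a/data/" + id);   // illustrative URL
}

// Adapter 2: obtain the same data via a message queue (details omitted).
class QueueServiceAGateway : IServiceAGateway
{
    public Task<string> GetDataAsync(string id)
    {
        // publish a request message, await the correlated reply...
        throw new System.NotImplementedException();
    }
}

// Business logic depends only on the port; swapping adapters never touches it.
class SomeBusinessLogic
{
    private readonly IServiceAGateway _serviceA;

    public SomeBusinessLogic(IServiceAGateway serviceA)
    {
        _serviceA = serviceA;
    }

    public Task<string> DoWorkAsync(string id) => _serviceA.GetDataAsync(id);
}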

Web Service Implementation Changes

To what degree should web service providers limit implementation changes without creating a new service version? One view is that as long as the contract is upheld, the service owner should be free to update the implementation as needed. Schemas are not always air tight and it is foreseeable that changes within the service implementation affect the service output while still upholding the contract.
To what degree should consumers be notified of implementation changes? Its one thing to notify consumers of updates to your own web service implementation. How feasible is it to track implementation changes to all downstream dependencies? Should service owners create a new version when they know that a change may affect consumers? And try to be a good citizen and notify consumers of all other changes?
Lots of questions and I doubt there is one size fits all answer. It could just depend on the situation. Maybe this is what SLAs are for.
Good questions, and I think you've already answered them. Yes, these details would be in an SLA, and I think that if the contract/WSDL is the same, then why would the service need to notify its consumers? Unless of course changes to the service impact response times and performance. Maybe the service would notify consumers when another contract is introduced (in addition to the original). Consumers become aware of any new capabilities and can adjust their clients accordingly if desired.
I'm in an environment where SLAs don't exist for internal clients, so absent an SLA, the following are some common sense guidelines
Attempt to limit number of modifications to services
Communicate service implementation releases so consumers can plan test cycles
Provide consumers with the list of direct downstream dependencies and location to find their schedules and release notes
Consider a new version if an implementation change will semantically affect consumers
A lot depends on your specific circumstances. Speaking generally, here are a few top considerations.
The service contract and schema are all that a service and client share in common. A service implementation change that does not change the contract or schema (e.g., fixing a bug in the implementation logic) should not necessitate notifying the clients, nor should it be considered a new version.
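As a simple illustration (the contract below is hypothetical): if this interface and its schema are all that the provider and clients share, the implementation behind it can be rewritten freely without a new version, as long as its observable behaviour still honours the contract.
// The contract: the only artifact the provider and its clients share.
public interface IQuoteService
{
    decimal GetQuote(string productCode, int quantity);
}

// The implementation behind the contract can be rewritten (a bug fix, a new
// pricing engine, a different database) without publishing a new service
// version, as long as GetQuote still behaves as the contract promises.
public class QuoteService : IQuoteService
{
    public decimal GetQuote(string productCode, int quantity)
    {
        // internal details here are free to change
        return 0m;
    }
}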
OTOH, if you have a poorly constructed, overly-loose contract, such as passing all of the data as one big string, where the client had to do extensive interpretation to consume the service, and now you're looking to exploit that overly-loose contract in a way that would likely break the client, you owe it to all parties to change the contract (and improve it!) and publish that as a new version of the service.
Since services are often used to enable loose coupling between systems, it is sometimes not practical or even possible to identify all of the clients of a service. Producing a new version of a service in these situations often entails maintaining multiple versions of a service for some period of time, often as directed by some governance body.
Providing details about service implementations, implementation dependencies, etc., encourages creating tight coupling by disclosing non-contract related details that the client may then take a dependency on. That can limit the ability of the service to change independently of the client.
The book Web Service Contract Design and Versioning for SOA by Thomas Erl is a good resource on the topic, and details several common scenarios.