Consider the following microservices for an online store project:
The Users Service keeps account data about the store's users (first name, last name, email address, etc.).
The Purchase Service keeps track of details about users' purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of purchasing user, purchased item title and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to get back data which the Purchase Service does not hold - for example: a user's full name.
The problem gets worse when trying to do more complicated things like search purchases by purchasing user name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service which consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given IDs. That means that on every page load in the Purchase Service, I'll have to make a call to the Users Service in order to get the right user names. Not ideal, but I can live with it.
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names in the Users Service, and then returns all user details which match the criteria. The Purchase Service then maps the relevant IDs back to the right names and shows them on the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is a sort of code smell? I'd love to hear other solutions.
This seems to be a very common and central question when moving into microservices. I wish there was a good answer for that :-)
Regarding the pattern already suggested here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you have data duplication, and you usually need some kind of event bus to share data across services.
There's another option, which is a sort of take on the first: making the search itself a separate service.
So in your example, you have the Users service for managing users and the Purchases service for managing purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
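To make that concrete, here's a rough sketch of what such a Search Service's read model could look like. This is a minimal in-memory illustration on my part; the event names and fields (user_id, full_name, item_title, and so on) are assumptions, and in practice the projections would live in a real search store:

```python
from collections import defaultdict

class SearchService:
    """Consumes events from other services and builds a search view."""

    def __init__(self):
        self.user_names = {}                        # user_id -> full name
        self.purchases_by_user = defaultdict(list)  # user_id -> purchase events

    def on_user_created(self, event):
        # Keep only what the search view needs: the display name.
        self.user_names[event["user_id"]] = event["full_name"]

    def on_purchase_created(self, event):
        self.purchases_by_user[event["user_id"]].append(event)

    def search_by_user_name(self, query):
        # Combine both projections into the rows the index page needs.
        results = []
        for user_id, name in self.user_names.items():
            if query.lower() in name.lower():
                for p in self.purchases_by_user[user_id]:
                    results.append({
                        "id": p["purchase_id"],
                        "full_name": name,
                        "item_title": p["item_title"],
                        "price": p["price"],
                    })
        return results

# Example wiring:
svc = SearchService()
svc.on_user_created({"user_id": 1, "full_name": "Bob Smith"})
svc.on_purchase_created({"user_id": 1, "purchase_id": 10,
                         "item_title": "Car", "price": 100.0})
print(svc.search_by_user_name("bob"))
```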
It's totally fine to keep appropriate data in different databases; it's called Polyglot Persistence. Yes, you would want to keep user data and purchase data separate and use a message queue for syncing. Millions of users seems fine to me; that's a scalability issue, not a design issue ;-)
In the case of search, you probably want to search by more than just the user name, right? So, if you use a message queue to sync data between services, you can also easily route this data to Elasticsearch, for example. And from Elasticsearch's perspective it doesn't really matter which field it indexes - user name or product title.
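For illustration, a queue consumer feeding Elasticsearch might look roughly like this. It assumes the 8.x Python client, a "purchases" index, and events that already carry the denormalized user name - all assumptions on my part:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def on_purchase_event(event):
    # The event already carries the user name, so the index never has
    # to call back into the Users Service at query time.
    es.index(index="purchases", id=event["purchase_id"], document={
        "user_name": event["user_name"],
        "item_title": event["item_title"],
        "price": event["price"],
    })

def search_purchases(term):
    # To Elasticsearch it makes no difference which field we match on.
    return es.search(index="purchases",
                     query={"match": {"user_name": term}})
```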
I usually use both approaches. Sometimes I have another service sitting on top of x other services which combines their data. I don't really like this approach because it causes dependencies and coupling between services. So in general, on my last projects we tried to stick to polyglot persistence.
Also consider that if you need x sub-requests over HTTP to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything possible through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of each module to the others. In contrast, the modules in a manufacturing process have access to change data without possessing or controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what they need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files or database tables. My architecture shows how to accomplish this in memory via reference properties.
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", you need to be aware that this creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product users.
Our shop has recently started taking on an SOA approach to application development. We are seeing some great benefits with the separation of concerns, reusability, and other benefits of SOA/microservices.
However, one big item we're stuck on is aggregating, filtering, and paginating results across services. Let me describe the issue with a scenario.
Say we have 3 services:
PersonService - Stores information on people (names, addresses, etc.)
ItemService - Stores information on items that are purchasable.
PaymentService - Stores information regarding payments that people have made for different items.
Now, say we want to build a reporting/admin tool that can display / report on multiple services in aggregate. For instance, we want to display a paginated list of Payments, along with the Person and Item that each payment was for. This is pretty straightforward: Grab the list of payments, then query PersonService and ItemService for the respective Person and Item records.
However, the issue comes into play when we want to then filter down that data: For instance, displaying a paginated list of payments made by people with the first name 'Bob', who have purchased the item 'Car'. This makes things much more complicated, because we need to filter results from 3 different services without knowing how many results each service is going to return.
From a performance perspective, querying all of the services over and over again to narrow down the results would be costly, so I've been researching better solutions. However, I cannot find concrete solutions to this problem (or at least a "best practice"). In a monolithic application, we'd simply use SQL joins across the different tables. I'm having a ton of trouble figuring out how/if something similar is possible across services.
My question to the community is: What would your approach be? Things I've considered:
Using some sort of search index (Elasticsearch, Solr) that contains all data for all services (updated via events pushed out by services), and then querying the search index for results.
Attempting to understand how projects like GraphQL and Neo4j may assist us with these issues.
I stick with Sam Newman, who says something like this in Chapter 4, "The Shared Database", of his book:
Remember when we talked about the core principles behind good microservices? Strong cohesion and loose coupling - with database integration, we lose both things. Database integration makes it very easy for services to share data, but does nothing about sharing behaviour. Our internal representation is exposed over the wire to our consumers, and it can be very difficult to avoid making breaking changes, which inevitably leads to fear of any changes at all. Avoid at (nearly) all costs.
This is the point I make when I curse at Content-Management-Systems.
In my view a microservice is autonomous, which it cannot be if it shares things or consumes shared things. The only exception I make here is Domain Objects, which represent the shared understanding of the business model and must be used solely in communication between microservices.
It depends on the microservice itself whether an ER or an aggregation-oriented database (document-based or graph-based) better suits its needs.
The funny thing is, by being loosely coupled and autonomous you are able to do just that!
If a PaymentService exposes the behaviour "how many payments for Person A",
it needs to know Person A in order to fulfill this. But everything it knows about Person A must originate from the PersonService - either at runtime (the PaymentService may just store an ID) or event-based (the PaymentService stores the data it needs, up to the user Domain Object, which gets updated via events triggered and supplied by the PersonService). The PaymentService does not share users itself.
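A tiny sketch of the event-based variant (the event names and fields are made up for illustration):

```python
# The PaymentService's locally owned slice of user data, kept up to
# date by events from the PersonService. It is never shared back out.
local_users = {}   # user_id -> {"name": ...}

def on_person_updated(event):
    # Triggered and supplied by the PersonService.
    local_users[event["user_id"]] = {"name": event["name"]}

def payment_summary(user_id, payments):
    # Fulfills "how many payments for Person A" from local data alone.
    user = local_users.get(user_id, {"name": "<unknown>"})
    count = sum(1 for p in payments if p["user_id"] == user_id)
    return {"user": user["name"], "payments": count}
```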
The answer to this question is that you need a separate Read Database or Materialized View that aggregates data from multiple databases, and makes it ready for fast retrieval. See the CQRS pattern: https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs
The data in the Materialized View might not be "the most up to date", meaning there might be a small delay between when a change is made by the respective microservice and when the Materialized View is updated. But this is fine, as retrieving the data fast is more important than the data being stale for a few seconds or even minutes (there are systems where the Materialized View can take 2-5 minutes to be updated, and that might still be acceptable).
The best pattern for implementing this Read Database or Materialized View from CQRS is typically the Event Sourcing pattern, where we listen to a queue for new updates and update the Read Database immediately. See the Event Sourcing pattern: https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
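As a rough sketch of the read side, with a queue of denormalized events and SQLite standing in for the Read Database (the table and event shape are illustrative assumptions):

```python
import queue
import sqlite3

events = queue.Queue()
read_db = sqlite3.connect(":memory:")
read_db.execute("""CREATE TABLE payment_view (
    payment_id  INTEGER PRIMARY KEY,
    person_name TEXT,
    item_name   TEXT,
    amount      REAL)""")

def apply_next_event():
    # Each event already contains the joined data, so a page load is a
    # single indexed read; the view may lag the source by seconds.
    e = events.get()
    read_db.execute(
        "INSERT OR REPLACE INTO payment_view VALUES (?, ?, ?, ?)",
        (e["payment_id"], e["person_name"], e["item_name"], e["amount"]))
    read_db.commit()

# Example: a service publishes an update, the worker applies it.
events.put({"payment_id": 1, "person_name": "Bob",
            "item_name": "Car", "amount": 20000.0})
apply_next_event()
```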
Storing this data in an Elasticsearch/Solr/Cognitive Search type of service, in addition to SQL, could help solve some of these problems.
In your given example, the person object in the search index (Elasticsearch/Solr/Cognitive Search) will have a property called "items" containing the list of items that person has paid for.
That way, you can filter across objects and get a paginated list sorted by any property of the person. You can add similar information to other documents to better suit your business needs.
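For example, the person document and a filtered, paginated query could look roughly like this (assuming the Elasticsearch 8.x Python client; the index name, mapping, and the .keyword subfield for sorting rely on default dynamic mapping):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Denormalized person document carrying the items they have paid for.
es.index(index="people", id=42, document={
    "first_name": "Bob",
    "items": [{"name": "Car", "price": 20000}],
})

# "Payments made by people named Bob who purchased a Car",
# filtered, sorted and paginated in one query, 20 results per page.
hits = es.search(index="people", from_=0, size=20,
                 query={"bool": {"must": [
                     {"match": {"first_name": "Bob"}},
                     {"match": {"items.name": "Car"}},
                 ]}},
                 sort=[{"first_name.keyword": "asc"}])
```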
Using a graph database would seem to solve your problem from 10,000 feet, but you will run into pagination problems when you operate at scale. Graph databases do not do pagination well (they have to visit all the nodes anyway, even when you only need one page) and will cause timeouts and performance issues.
You can use replicated tables.
Most databases have a replication feature.
If you have a PersonService with a person table and a PaymentService with a payment table, then create a ReportService with its own person and payment tables, which are filled by the replication feature.
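Once replication has filled the ReportService's local copies, the cross-service filter collapses back into an ordinary join. A sketch with SQLite standing in for the replicated tables (schema and names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person  (id INTEGER PRIMARY KEY, first_name TEXT);
CREATE TABLE payment (id INTEGER PRIMARY KEY, person_id INTEGER,
                      item_name TEXT, amount REAL);
""")
# ...the replication feature keeps both tables filled...

# Paginated "payments by people named Bob who bought a Car".
rows = db.execute("""
    SELECT payment.id, person.first_name, payment.item_name
    FROM payment
    JOIN person ON person.id = payment.person_id
    WHERE person.first_name = 'Bob' AND payment.item_name = 'Car'
    ORDER BY payment.id
    LIMIT 20 OFFSET 0
""").fetchall()
```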
Background
I have a backoffice that manages information from various sources. Part of the information is in a database that the backoffice can access directly, and part of it is managed by accessing web services. Such services usually provide CRUD operations plus paged searches.
There is an access control system that determines what actions a user is allowed to perform. The decision of whether the user can perform some action is defined by authorization rules that depend on the underlying data model. E.g. there is a rule that allows a user to edit a resource if she is the owner of that resource, where the owner is a column in the resources table. There are other rules such as "a user can edit a resource if that resource belongs to an organization and the user is a member of that organization".
This approach works well when the domain model is directly available to the access control system. Its main advantage is that it avoids replicating information that is already present in the domain model.
When the data to be manipulated comes from a Web service, this approach starts causing problems. I can see various approaches that I will discuss below.
Implementing the access control in the service
This approach seems natural, because otherwise someone could bypass access control by calling the service directly. The problem is that the backoffice has no way to know what actions are available to the user on a particular entity. Because of that, it is not possible to disable options that are unavailable to the user, such as an "edit" button.
One could add additional operations to the service to retrieve the authorized actions on a particular entity, but it seems that we would be handing multiple responsibilities to the service.
Implementing the access control in the backoffice
Assuming that the service trusts the backoffice application, one could decide to implement the access control in the backoffice. This seems to solve the issue of knowing which actions are available to the user. The main issue with this approach is that it is no longer possible to perform paged searches because the service will now return every entity that matches, instead of entities that match and that the user is also authorized to see.
Implementing a centralized access control service
If access control was centralized in a single service, everybody would be able to use it to consult access rights on specific entities. However, we would lose the ability to use the domain model to implement the access control rules. There is also a performance issue with this approach, because in order to return lists of search results that contain only the authorized results, there is no way to filter the database query with the access control rules. One has to perform the filtering in memory after retrieving all of the search results.
Conclusion
I am now stuck because none of the above solutions is satisfactory. What other approaches can be used to solve this problem? Are there ways to work around the limitations of the approaches I proposed?
One could add additional operations to the service to retrieve the authorized actions on a particular entity, but it seems that we would be handing multiple responsibilities to the service.
Not really. Return a flags field/property from the web service for each record/object, which can then be used to pretty up the UI based on what the user can do. The flags are based on the same information that is used for access control, which the service is accessing anyway. This also makes the service able to support a browser-based AJAX access method and skip the backoffice part entirely in the future, for added flexibility.
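A minimal sketch of the idea, with made-up flag names and rules - the point is that the flags are derived from the same rules the service already enforces:

```python
# Bit flags the UI can use to enable/disable actions per record.
CAN_VIEW, CAN_EDIT, CAN_DELETE = 1, 2, 4

def to_dto(resource, user):
    flags = CAN_VIEW
    if resource["owner_id"] == user["id"]:
        flags |= CAN_EDIT | CAN_DELETE      # owners may edit and delete
    if resource["org_id"] in user["org_ids"]:
        flags |= CAN_EDIT                   # org members may edit
    return {"id": resource["id"], "name": resource["name"],
            "flags": flags}

# The backoffice disables the "edit" button unless
# dto["flags"] & CAN_EDIT is set - without duplicating the rules.
dto = to_dto({"id": 7, "name": "report.pdf", "owner_id": 1, "org_id": 9},
             {"id": 1, "org_ids": [9]})
print(bool(dto["flags"] & CAN_EDIT))   # True
```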
Distinguish between the components of your access control system and implement each where it makes sense.
Access to specific search results in a list should be implemented by the service that reads the results, and the user interface never needs to know about the results the user doesn't have access to. If the user may or may not edit or otherwise interact with data the user is allowed to see, the service should return that data with flags indicating what the user may do, and the user interface should reflect those flags. Services implementing those interactions should not trust the user interface; they should validate that the user has access when the service is called. You may have to implement the access control logic in multiple database queries.
Access to general functionality the user may or may not have access to, independent of data, should again be controlled by the service implementing that functionality. That service should compute access through a module that is also exposed as a service, so that the UI can respect the access rules and not try to call services the user does not have access to.
I understand my response is very late - 3 years late - but it's worth shedding some new light on an age-old problem. Back in 2011, access control was not as mature as it is today. In particular, there is a newer model, ABAC, along with a standard implementation, XACML, which make centralized authorization possible.
In the question, the OP writes the following regarding centralized access control:
Implementing a centralized access control service
If access control was centralized in a single service, everybody would be able to use it to consult access rights on specific entities. However, we would lose the ability to use the domain model to implement the access control rules. There is also a performance issue with this approach, because in order to return lists of search results that contain only the authorized results, there is no way to filter the database query with the access control rules. One has to perform the filtering in memory after retrieving all of the search results.
The drawbacks that the OP mentions may have been true in a home-grown access control system, in RBAC, or in ACLs. But they are no longer true with ABAC and XACML. Let's take them one by one.
The ability to use the domain model to implement the access control rules
With attribute-based access control (ABAC) and the eXtensible Access Control Markup Language (XACML), it is possible to use the domain model and its properties (or attributes) to write access control policies. For instance, if the use case is that of a doctor wishing to view medical records, the domain model would define the Doctor entity with its properties (location, unit, and so on) as well as the Medical Record entity. Rules in XACML could look as follows:
A user with role==doctor can do the action==view on an object of type==medical record if and only if doctor.location==medicalRecord.location.
A user with role==doctor can do the action==edit on an object of type==medical record if and only if doctor.id==medicalRecord.assignedDoctor.id.
One of the key benefits of XACML is precisely to mirror closely the business logic and the domain model of your applications.
Performance issue - the ability to filter items from a db
In the past, it was indeed impossible to create filter expressions. This meant that, as the OP points out, one would have to retrieve all the data first and then filter it, which would be an expensive task. Now, with XACML, it is possible to do reverse querying: instead of the traditional binary question "Can Alice view medical record #123?", a reverse query asks "Which medical records can Alice view?".
The response to a reverse query is a filter condition which can be converted into a SQL statement - in this scenario, for instance, SELECT id FROM medicalRecords WHERE location = 'Chicago', assuming of course that the doctor is based in Chicago.
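To illustrate the flow (and only to illustrate - there is no standard Python XACML client, so authorize_reverse below is a hypothetical stand-in for a vendor reverse-query API):

```python
import sqlite3

def authorize_reverse(subject, action, resource_type):
    # Hypothetical stand-in: a real policy engine would derive this
    # filter condition from the XACML policy set.
    return {"column": "location", "op": "=", "value": subject["location"]}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE medicalRecords (id INTEGER PRIMARY KEY, location TEXT)")
db.execute("INSERT INTO medicalRecords VALUES (123, 'Chicago')")

def records_visible_to(doctor):
    cond = authorize_reverse(doctor, "view", "medical record")
    # The answer is a WHERE clause, not a per-record permit/deny,
    # so the database does the filtering instead of application memory.
    sql = f"SELECT id FROM medicalRecords WHERE {cond['column']} {cond['op']} ?"
    return [row[0] for row in db.execute(sql, (cond["value"],))]

print(records_visible_to({"location": "Chicago"}))   # -> [123]
```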
What does the architecture look like?
One of the key benefits of a centralized access control service (also known as externalized authorization) is that you can apply the same consistent authorization logic to your presentation tier, business tier, APIs, web services, and even databases.
My question is: what are the best practices, when it comes to data, for integrating against a separate system such as a web service? In my case the separate system is a web service (but it could conceivably be anything).
Example: Web-service provides a list of products. Products are grouped using categories. You can get all products in a sub-category. You can get a specific product by its id (an integer) or its name (a unique value).
In my application:
I display the list of categories and products - and the user can choose the product and specify an order quantity.
1. Should I store the name of the category or the ID of the category?
2. Should I store the name of the product or the ID of the product?
3. How should I name the field in the database that stores the data from the web service (CategoryId or WsCategoryId, so that by convention one knows where the value comes from)?
4. Any other best practices?
5. Any other references?
From your question I understand that the web service's interface looks something like this:
/product/
/product/{ProductId}
/product/{ProductName}
/product/category/{CategoryId}
Since you are asking if you should store CategoryName, I assume that it is unique (same as ProductName).
I also assume that the web service handles cases where products or categories are renamed transparently (i.e. by providing a redirect or any other means which allow you to detect this and handle it accordingly). If it doesn't, do not consider storing names as references to products or categories - always use IDs.
I would provide the same answer to your questions #1 and #2. Even though uniqueness of ProductName and CategoryName will technically allow you to store them in your application as unique identifiers of products and categories, I would opt for storing their IDs instead. The main decision point would be your storage medium. Since you are using a database, and the web service allows you to access objects by unique numerical IDs, database normalization rules should apply - hence you should store IDs.
The above however assumes that you are using a relational database - if you are using a NoSQL database, I assume that storing names instead of IDs would be a viable option as well (at least as far as I can tell with my current understanding of NoSQL solutions, unfortunately I don't have any practical experience with any of them yet).
Regarding question #3 - I would stick with the naming conventions that you already use in your database. There are many different conventions for naming tables and columns out there, so I really doubt that there are any standardized conventions on how to name columns referencing web service objects. I would name them according to your existing naming conventions and in a way that purpose of the columns is clear to everybody who is using the system. Note that if there is a chance that you will be using other web services in the future, you should consider keeping the name of the service in the column name rather than using a generic ws prefix - e.g. AmazonProductId or AmazonCategoryId.
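To make that concrete, here is a sketch of the resulting table, using the Amazon example above for the prefix (the OrderLine table and its columns are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE OrderLine (
    Id               INTEGER PRIMARY KEY,
    AmazonProductId  INTEGER NOT NULL,  -- numeric ID from the web service
    AmazonCategoryId INTEGER NOT NULL,  -- IDs only; names are not stored
    Quantity         INTEGER NOT NULL
)""")
```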
I'll try to point out a few items from my experience, but I would not label them as best practices - just topics to think about.
In my experience, I have found it useful to treat data from web services in the same fashion as data from a database - at least from the application's perspective, where your storage layer is abstracted from the application logic. By this I mean that you should think about and prepare for similar scenarios regardless of whether your storage medium is a database or a web service. Like databases, web services can go down; both can have their data or integrity corrupted, and both will require you to sanitize or otherwise process data on input.
Caching of data should be an item high on your list - apart from the obvious performance reasons, it can allow you to deal with outages of the web service (to an extent limited by which data you cache).
An example would be that your application displays a list of the most frequently purchased products. If your application stores only the IDs of products, you will have to make one or more requests to the web service in order to retrieve the names of all the products in the list. If you cache product names locally or in your database, you will achieve better performance, conserve your resources, and also have a failsafe in case the web service goes down.
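A rough sketch of such a cache (the TTL, the exception type, and the fetch callback are all assumptions - adapt them to your HTTP client):

```python
import time

CACHE_TTL = 3600        # seconds before a cached name is considered stale
_cache = {}             # product_id -> (name, fetched_at)

def product_name(product_id, fetch_from_ws):
    name, fetched_at = _cache.get(product_id, (None, 0.0))
    if name is not None and time.time() - fetched_at < CACHE_TTL:
        return name
    try:
        name = fetch_from_ws(product_id)        # one request per cache miss
        _cache[product_id] = (name, time.time())
        return name
    except OSError:
        # Web service outage: serve the stale cached name as a failsafe.
        if name is not None:
            return name
        raise

# Usage with a stand-in fetcher:
print(product_name(7, lambda pid: "Product %d" % pid))
```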
Referential integrity is one other important aspect to think about when working with web services. As the web service is completely separate from your database, you do not have the option to create foreign keys as you would do in a database-only solution. This means that data changes in the web service (i.e. product updates or deletions) can break the integrity of data in your database.
Regarding references, these depend mostly on the type of web service that you are about to use (you didn't specify which service you will be using). If the service is based on REST principles, I can recommend Restful Web Services by Leonard Richardson and Sam Ruby. Even though it isn't focused on application/service integration as such, it's a great introduction into REST.
I've been tasked with setting up a society's website. I'm a full-time Django (et al.) web developer, so I was happy to take on the task.
Going through the specs, they want to control memberships so that all applications need a "second" (read: sponsor, referee, etc) and then they need to pay a subscription fee to be part of the club.
This club has a number of events with variable ticket prices for lunches and talks to name two. Only members are allowed to see the price per ticket and therefore only members are allowed to buy the tickets.
I had originally planned on farming the event management off to EventBrite and pulling the upcoming events back to the website through EB's API but this members-only constraint looks like something EventBrite can't do.
Then there's processing members subscriptions. I had hoped to allow anybody to register a django.contrib.auth account but leave subscription payment offline but the client would be happier if they could mark accounts as "members", store the subscription data in the database and let the members pay online.
Like with EventBrite, I was hoping I could store rough membership data (whether or not they're allowed to subscribe, a unique token for the user on the API service, their level of membership and their membership's expiry) and there'd be something I could post users off to to process their subscription payment.
I basically don't want to touch any payment systems. Even something as simple as Paypal+IPN is something I'd rather not do (I can and have in the past on other projects) but it's the layer of management that I'd have to build around it (messaging members, creating recurring events, etc) that I'd like to farm out to a third party... Even if they do want an additional percent of the payments processed.
Do any of you know any suitable APIs that cover membership or events or both?
Or is this so complex that I should give up hoping for external help and just knuckle down and do it myself?
I think the google search you are looking for is online membership management. I don't know if any of them play particularly nicely with Django/python, but some of them do include APIs. Almost all of these are companies that charge, either for the system, or on a per-user basis.
If you don't mind installing something yourself, CiviCRM is a free, open-source solution that I found with a bit of googling. It integrates with either Joomla or Drupal (so it's probably PHP-based). You'd have to put the payment processing in yourself, but it does support payments using PayPal, which would take handling payments mostly out of the equation. If you can, choose PayPal Express rather than PayPal Website Payments Pro, since you may need to be PCI-DSS compliant to use the latter.
I am developing a Django web application with a suite of steel design tools for structural engineers. There will be a database table of inputs for each design tool, and each row of each table will correspond to a particular design condition to be "solved." The users may work solely or in groups. Each user needs to have ongoing access to his own work so that designs can be refined, copied and adapted, and so that reports can be created whenever convenient, usually at the end of a project when hard copy documentation will be needed. The database contents must then be available over any number of sessions occurring over periods measured in months or even years for a given design project.
When there is a group of users, typically all associated with a given design office, it will probably be acceptable for them all to have joint and mutual access to each other's work. The application supports routine engineering production activities, not innovative intellectual property work, and in-house privacy is not the norm in the industry anyway. However, the work absolutely must be shielded from prying eyes outside of the group. Ideally, each group would have one or more superusers authorized to police the membership of the group. Probably the main tool they would need would be the ability to remove a member from the group, discontinuing his access privileges. This would be a user group superuser and would not be the same as a superuser on the site side.
For convenient access, each row of each database table will be associated with a project number/project name pair that will be unique for a given company deploying a user or user group. A different company could easily choose to use a duplicate project number, and even could choose a duplicate project name, so discriminating exactly which database rows belong to a given user (or group) will probably have to be tracked in a separate related "ownership list" table for each user (or group).
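Concretely, I imagine the ownership bookkeeping looking something like this as Django models (the model and field names here are just placeholders for the idea):

```python
from django.contrib.auth.models import User
from django.db import models

class DesignGroup(models.Model):
    name = models.CharField(max_length=100)
    members = models.ManyToManyField(User, related_name="design_groups")
    # Group-side "superusers" who may remove members from the group.
    managers = models.ManyToManyField(User, related_name="managed_groups")

class ProjectOwnership(models.Model):
    group = models.ForeignKey(DesignGroup, on_delete=models.CASCADE)
    project_number = models.CharField(max_length=30)
    project_name = models.CharField(max_length=100)

    class Meta:
        # Unique per owning group, not globally, since two companies
        # may legitimately reuse the same project number and name.
        unique_together = [("group", "project_number", "project_name")]
```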
It is anticipated (hoped) that, eventually, several hundred users (or user groups) associated with different (and often competing) companies will solve tens of thousands of design conditions for thousands of projects using these tools.
So, here are my questions:
First, is there any point in trying to salvage much of anything from the Django contrib.auth code? As I perceive it, contrib.auth is designed for authentication and access control that is suitable for the blogosphere and web journalism, but that doesn't support fine-grained control of access to "content."
Second, is there any available template, pattern, example, strategy or design advice I could apply to this problem?
Take a look at django-authority - its documentation and code are on GitHub.