Access control in Datomic - Clojure

When writing an application based on Datomic and Clojure, it seems the peers have unrestricted access to the data. How do I build a multi-user system where user A cannot access data that is private to user B?
I know I can write the queries in Clojure such that only user A's private data is returned... but what prevents a malicious user from hacking the binaries to see user B's private data?
UPDATE
It seems Clojure/Datomic applications are in fact lacking in security, based on the answer from @Thumbnail and the link to John P. Hackworth's blog.
Let me state more clearly the problem I see, because I don't see any solution to this and it is the original problem that prompted this question.
Datomic has a data store, a transactor, and peers. The peers are on the user's computer and run the queries against data from the data store. My question is:
how can access to data in the data store be restricted? Since the data store is dumb and in fact just stores the data, I'm not sure how to provide access controls. When AWS S3 is used as a data store, the client (a peer) has to authenticate before accessing S3, but once it is authenticated, doesn't the peer have access to all the data, limited only by the queries that it runs? If a user wants to get another user's data, they can change the code (the binary) in the client so that the queries run with a different user name, right? To be clear: isn't the access control just a condition on the query? Or are there user-specific connections that the data store recognizes, so that the data store limits what data is visible?
What am I missing?
In a traditional web framework like Rails, the server-side code restricts all access to data and authenticates and authorizes the user. The user can change the URLs or the client-side code, but the server will not allow access to data unless the user has provided the correct credentials.
Since the data store in Datomic is dumb, it seems it lacks the ability to restrict access on a per-user basis, and the application (peer) must do this. I do not want to trust the user to behave and not try to acquire other users' information.
A simple example is a banking system. Of course the user will be authenticated... but after that, what prevents them from modifying the client-side code/binary to change the data queries and get other users' account information from the data store?
UPDATE - MODELS
Here are two possible models that I have of how Datomic and Clojure work... the first one is my current model (in my head).
1. The user's computer runs the client/peer, which holds the queries and has complete access to the data store; the user was authenticated before the client started, so use is restricted to users we have credentials for.
2. The user's computer runs an interface (webapp) that interacts with a peer residing on a server. The queries live on the server and cannot be modified by the user, so the access controls are themselves protected by the security of the server running the peer.
If the second model is the correct one, then my question is answered: the user cannot modify the server code and the server code contains the access controls... therefore, the "peers" which I thought resided on the user's computer actually reside on the application server.

Your second model is the correct one. Datomic is designed so that peers, transactor and storage all run within a trusted network boundary in software and on hardware you control. Your application servers run the peer library, and users interact with your application servers via some protocol like HTTP. Within your application, you should provide some level of user authentication and authorization. This is consistent with the security model of most applications written in frameworks like Rails (i.e. the end user doesn't require database permissions, but rather application permissions).
Datomic provides a number of very powerful abstractions to help you write your application-level auth(n|z) code. In particular, since transactions are first-class entities, Datomic provides the ability to annotate your transactions at write-time (http://docs.datomic.com/transactions.html) with arbitrary facts (e.g. the username of the person responsible for a given transaction, a set of groups, etc.). At read-time, you can filter a database value (http://docs.datomic.com/clojure/index.html#datomic.api/filter) so that only facts matching a given predicate will be returned from queries and other read operations on that database value. This allows you to keep authz concerns out of your query logic, and to layer your security in consistently.
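As a concrete sketch, given an open peer connection conn: the :audit/user and :account/* attributes below are invented for illustration, while d/transact, d/filter, d/entity and d/q are the actual peer API.

(require '[datomic.api :as d])

;; Write time: annotate the transaction entity itself with the acting user.
@(d/transact conn
   [{:db/id (d/tempid :db.part/tx) :audit/user "alice"}
    {:db/id (d/tempid :db.part/user) :account/owner "alice" :account/balance 100}])

;; Read time: filter the database value so only facts asserted in
;; transactions annotated with the given user are visible.
(defn db-for-user [db username]
  (d/filter db (fn [db* datom]
                 (= username (:audit/user (d/entity db* (:tx datom)))))))

(d/q '[:find ?balance :where [_ :account/balance ?balance]]
     (db-for-user (d/db conn) "alice"))

Queries against the filtered value can stay free of authz clauses; the filter predicate is applied to every datom the query touches.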

As I understand it ... and that's far from completely ... please correct me if I'm wrong ...
The distinguishing feature of Datomic is that the query engine, or large parts of it, reside in the database client, not in the database server. Thus, as you surmise, any 'user' obtaining programmatic access to a database client can do what they like with any of the contents of the database.
On the other hand, the account system in the likes of Oracle constrains client access, so that a malicious user can only, so to speak, destroy their own data.
However, ...
Your application (the database client) need not (and b****y well better not!) provide open access to any client user. You need to authenticate and authorize your users. You can show the client user the code, but provided your application is secure, no malicious use can be made of this knowledge.
A further consideration is that Datomic can sit in front of a SQL database, to which access can itself be constrained.
A web search turned up Chas Emerick's Friend library for user authentication and authorization in Clojure. It also found John P. Hackworth's level-headed assessment that Clojure web security is worse than you think.

You can use transaction functions to enforce access restrictions for your peers/users. You can put data that describes your policies into the db and use the transaction function(s) to enforce them. This moves the mechanism and policy into the transactor. Transactions that do not meet the criteria can either fail or simply result in no data being transacted.
Obviously you'll need some way to protect the policy data and transaction functions themselves.
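A hedged sketch of that approach; the :authz/assert-if-owner function and the :account/owner attribute are invented for illustration, but d/function and the [:fn-name args...] call shape are the real mechanism (the transactor passes the in-transaction database value as the first argument).

(require '[datomic.api :as d])

;; Install the transaction function; it runs inside the transactor.
@(d/transact conn
   [{:db/id (d/tempid :db.part/user)
     :db/ident :authz/assert-if-owner
     :db/fn (d/function
              '{:lang :clojure
                :params [db username eid attr value]
                :code (if (= username (:account/owner (datomic.api/entity db eid)))
                        [[:db/add eid attr value]]
                        (throw (ex-info "not authorized" {:user username :eid eid})))})}])

;; Peers write through the function instead of asserting datoms directly;
;; account-id is an entity id obtained elsewhere.
@(d/transact conn [[:authz/assert-if-owner "alice" account-id :account/balance 150]])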

Related

Microservices Architecture: Cross Service data sharing

Consider the following micro services for an online store project:
Users Service keeps account data about the store's users (including first name, last name, email address, etc.)
Purchase Service keeps track of details about user's purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of purchasing user, purchased item title and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to get back data which the Purchase Service does not hold - for example: a user's full name.
The problem gets worse when trying to do more complicated things like search purchases by purchasing user name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service that consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given ids. That means that on every page load in the Purchase Service, I'll have to make a call to the Users Service in order to get the right user names. Not ideal, but I can live with it.
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names in the Users Service, and then returns all user details which match the criteria. At the Purchase Service end, map the relevant ids back to the right names and show them in the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is sort of a code smell? Would love to hear other solutions.
This seems to be a very common and central question when moving into microservices. I wish there was a good answer for that :-)
About the suggested pattern already mentioned here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you have data duplication, and you usually need some kind of event bus to share data across services.
There's another option, which is sort of a take on the first: making the search itself a separate service.
So in your example, you have the Users service for managing users. The Purchases service manages purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
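To make that concrete, here is a minimal sketch of a Search Service consuming events from the other two services; the event shapes and the in-memory atoms are assumptions (a real service would persist its view, e.g. to a search index).

(require '[clojure.string :as str])

(def users (atom {}))      ;; user-id -> full name, built from Users Service events
(def purchases (atom {}))  ;; purchase-id -> purchase, built from Purchase Service events

(defmulti handle-event :type)

(defmethod handle-event :user/created [{:keys [id full-name]}]
  (swap! users assoc id full-name))

(defmethod handle-event :purchase/created [{:keys [id user-id title price]}]
  (swap! purchases assoc id {:id id :user-id user-id :title title :price price}))

;; The search "view": purchases joined with user names, filterable by name.
(defn search-by-user-name [q]
  (let [q (str/lower-case q)
        matching (set (keep (fn [[id full-name]]
                              (when (str/includes? (str/lower-case full-name) q) id))
                            @users))]
    (for [p (vals @purchases)
          :when (matching (:user-id p))]
      (assoc p :full-name (@users (:user-id p))))))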
It's totally fine to keep appropriate data in different databases; it's called Polyglot Persistence. Yes, you would want to keep user data and data about purchases separately and use a message queue for sync. Millions of users seems fine to me: that's a scalability concern, not a design issue ;-)
In the case of search, you probably want to search on more than just the username, right? So, if you use a message queue to update data between services, you can also easily route this data to ElasticSearch, for example. And from ElasticSearch's perspective, it doesn't really matter which field to index - username or product title.
I usually use both approaches. Sometimes I have another service which sits on top of x other services and combines the data. I don't really like this approach because it causes dependencies and coupling between services. So in general, in my last projects we tried to stick to polyglot persistence.
Also consider that if you need x sub-requests over HTTP to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything we can through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of one module to others. In contrast, the modules in a manufacturing process have access to change data without possessing and controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what you need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files or database tables. My architecture shows how to accomplish this in memory via reference properties.
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", you need to be aware that creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product-users.

multi user desktop application with privilege separation

I am writing a C++ application with a PostgreSQL 9.2 database backend. It is accounting software. It is a multi-user application with privilege separation features.
I need help in implementing the user account system. The privileges for users need not be mutually exclusive. Should I implement it at the application level, or at the database level?
The company is not very large at present. Assume about 15-20 offices with an average of 10 program users per office.
Can I make use of the roles in postgres to implement this? Will it become too tedious, unmanageable or are there some flaws in such an approach?
If I go via the application route, how do I store the set of privileges a user has? Will a binary string suffice? What if additional privileges are added later; how can I incorporate them? What do I need to do to ensure that there are no security issues? And in such an approach I am assuming the application connects with the privileges required for the most privileged user.
Some combination of the two methods? Or something entirely different?
All suggestions and arguments are welcome.
Never provide authorization from a client application that runs in an uncontrolled environment, and every device that a user has physical access to is an uncontrolled environment. That is security through obscurity: a user can simply use a debugger to pull the database access credentials out of the client program's memory and then use psql to do anything.
Use roles.
When I was developing a C++/PostgreSQL desktop application, I chose to disallow all users from modifying any tables and created an API using PL/pgSQL functions with VOLATILE SECURITY DEFINER options. But I don't think it was the best approach, as it's unnatural and error-prone to use, for example:
select add_person(?,?,?,?,?,?,?,?,?,?,?,?);
I think a better way would be to allow modifications to the tables a user needs to modify and, where needed, enforce authorization using BEFORE triggers, which throw an error when current_user does not belong to the proper role.
But remember to use set search_path=... option in all functions that have anything to do with security.
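A sketch of the trigger approach with the search_path pinned as advised. The role and table names are invented, and the DDL is driven from Clojure via next.jdbc purely to stay consistent with the rest of this page; any client, including C++, would issue the same SQL.

(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "accounting"}))

;; The guard raises unless the connected role is a member of the
;; (hypothetical) person_editor role.
(jdbc/execute! ds ["
CREATE OR REPLACE FUNCTION person_write_guard() RETURNS trigger
LANGUAGE plpgsql SET search_path = public AS $fn$
BEGIN
  IF NOT pg_has_role(current_user, 'person_editor', 'MEMBER') THEN
    RAISE EXCEPTION 'role % may not modify person', current_user;
  END IF;
  RETURN NEW;
END $fn$"])

(jdbc/execute! ds ["
CREATE TRIGGER person_write_check
BEFORE INSERT OR UPDATE ON person
FOR EACH ROW EXECUTE PROCEDURE person_write_guard()"])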
If you want to authorize read-only access to some tables, then it gets even more complicated. Either you'd need to revoke the select privilege on these tables and create an API of SECURITY DEFINER functions for accessing all data, which would be a monster-size API, extremely ugly and extremely fragile. Or you'd need to revoke the select privilege on these tables and create views for them using create view with (security_barrier). Also not pretty.
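For the read-only case, the security_barrier variant might look like this sketch; visible_person, the owner column and app_user are invented names.

(jdbc/execute! ds ["
CREATE VIEW visible_person WITH (security_barrier) AS
  SELECT * FROM person WHERE owner = current_user"])

(jdbc/execute! ds ["GRANT SELECT ON visible_person TO app_user"])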

How to enforce authorization policies across multiple applications?

Background
I have a backoffice that manages information from various sources. Part of the information is in a database that the backoffice can access directly, and part of it is managed by accessing web services. Such services usually provide CRUD operations plus paged searches.
There is an access control system that determines what actions a user is allowed to perform. The decision of whether the user can perform some action is defined by authorization rules that depend on the underlying data model. E.g. there is a rule that allows a user to edit a resource if she is the owner of that resource, where the owner is a column in the resources table. There are other rules such as "a user can edit a resource if that resource belongs to an organization and the user is a member of that organization".
This approach works well when the domain model is directly available to the access control system. Its main advantage is that it avoids replicating information that is already present in the domain model.
When the data to be manipulated comes from a Web service, this approach starts causing problems. I can see various approaches that I will discuss below.
Implementing the access control in the service
This approach seems natural, because otherwise someone could bypass access control by calling the service directly. The problem is that the backoffice has no way to know what actions are available to the user on a particular entity. Because of that, it is not possible to disable options that are unavailable to the user, such as an "edit" button.
One could add additional operations to the service to retrieve the authorized actions on a particular entity, but it seems that we would be handing the service multiple responsibilities.
Implementing the access control in the backoffice
Assuming that the service trusts the backoffice application, one could decide to implement the access control in the backoffice. This seems to solve the issue of knowing which actions are available to the user. The main issue with this approach is that it is no longer possible to perform paged searches because the service will now return every entity that matches, instead of entities that match and that the user is also authorized to see.
Implementing a centralized access control service
If access control was centralized in a single service, everybody would be able to use it to consult access rights on specific entities. However, we would lose the ability to use the domain model to implement the access control rules. There is also a performance issue with this approach, because in order to return lists of search results that contain only the authorized results, there is no way to filter the database query with the access control rules. One has to perform the filtering in memory after retrieving all of the search results.
Conclusion
I am now stuck because none of the above solutions is satisfactory. What other approaches can be used to solve this problem? Are there ways to work around the limitations of the approaches I proposed?
One could add additional operations to the service to retrieve the authorized actions on a particular entity, but it seems that we would be handing the service multiple responsibilities.
Not really. Return a flags field/property from the web service for each record/object; it can then be used to pretty up the UI based on what the user can do. The flags are based on the same information that is used for access control, which the service consults anyway. This also makes the service able to support a browser-based AJAX access method and skip the backoffice part in the future, for added flexibility.
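In other words, each record in the response carries its own capability flags, computed from the same rules the service enforces on writes. A rough sketch; the record shape and the admin? helper are invented:

(defn admin? [user]
  (contains? (:roles user) :admin))

;; Attach per-record flags the UI can use to enable/disable buttons; the
;; service still re-checks the same rules when an action is actually invoked.
(defn present-record [record user]
  (assoc (select-keys record [:id :title])
         :can-edit   (= (:owner record) (:id user))
         :can-delete (admin? user)))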
Distinguish between the components of your access control system and implement each where it makes sense.
Access to specific search results in a list should be implemented by the service that reads the results, and the user interface never needs to know about the results the user doesn't have access to. If the user may or may not edit or interact in other ways with data the user is allowed to see, the service should return that data with flags indicating what the user may do, and the user interface should reflect those flags. Services implementing those interactions should not trust the user interface; they should validate that the user has access when the service is called. You may have to implement the access control logic in multiple database queries.
Access to general functionality the user may or may not have access to, independent of data, should again be controlled by the service implementing that functionality. That service should compute access through a module that is also exposed as a service, so that the UI can respect the access rules and not try to call services the user does not have access to.
I understand my response is very late - 3 years late. But it's worth shedding some new light on an age-old problem. Back in 2011, access control was not as mature as it is today. In particular, there is a newer model, ABAC, along with a standard implementation, XACML, which together make centralized authorization possible.
In the OP's question, the OP writes the following regarding centralized access control:
Implementing a centralized access control service
If access control was centralized in a single service, everybody would be able to use it to consult access rights on specific entities. However, we would lose the ability to use the domain model to implement the access control rules. There is also a performance issue with this approach, because in order to return lists of search results that contain only the authorized results, there is no way to filter the database query with the access control rules. One has to perform the filtering in memory after retrieving all of the search results.
The drawbacks that the OP mentions may have been true of a home-grown access control system, of RBAC, or of ACLs. But they are no longer true with ABAC and XACML. Let's take them one by one.
The ability to use the domain model to implement the access control rules
With attribute-based access control (ABAC) and the eXtensible Access Control Markup Language (XACML), it is possible to use the domain model and its properties (or attributes) to write access control policies. For instance, if the use case is that of a doctor wishing to view medical records, the domain model would define the Doctor entity with its properties (location, unit, and so on) as well as the Medical Record entity. A rule in XACML could look as follows:
A user with the role==doctor can do the action==view on an object of type==medical record if and only if the doctor.location==medicalRecord.location.
A user with the role==doctor can do the action==edit on an object of type==medical record if and only if the doctor.id==medicalRecord.assignedDoctor.id
One of the key benefits of XACML is precisely to mirror closely the business logic and the domain model of your applications.
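Outside of a XACML engine, the two rules above can be read as plain attribute predicates. A hedged Clojure rendering, with all attribute names assumed:

(defn permit? [user action resource]
  (and (= :doctor (:role user))
       (= :medical-record (:type resource))
       (case action
         :view (= (:location user) (:location resource))
         :edit (= (:id user) (get-in resource [:assigned-doctor :id]))
         false)))

;; (permit? {:role :doctor :id 7 :location "Chicago"}
;;          :view
;;          {:type :medical-record :location "Chicago"}) ;=> true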
Performance issue - the ability to filter items from a db
In the past, it was indeed impossible to create filter expressions. This meant that, as the OP points out, one would have to retrieve all the data first and then filter it, which would be an expensive task. Now, with XACML, it is possible to achieve reverse querying. A reverse query asks a question of the type "Which medical records can Alice view?" instead of the traditional binary question "Can Alice view medical record #123?".
The response to a reverse query is a filter condition which can be converted into a SQL statement, for instance in this scenario SELECT id FROM medicalRecords WHERE location='Chicago', assuming of course that the doctor is based in Chicago.
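Conceptually, the reverse query's answer is data that the application renders into a parameterized WHERE clause. A simplified sketch; the condition shape is invented and is not a real XACML API:

;; The authorization service returns a filter condition as data...
(def condition {:attribute "location" :op "=" :value "Chicago"})

;; ...which the application turns into a parameterized query.
(defn condition->sql [{:keys [attribute op value]}]
  [(str "SELECT id FROM medicalRecords WHERE " attribute " " op " ?") value])

(condition->sql condition)
;=> ["SELECT id FROM medicalRecords WHERE location = ?" "Chicago"]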
What does the architecture look like?
One of the key benefits of a centralized access control service (also known as externalized authorization) is that you can apply the same consistent authorization logic to your presentation tier, business tier, APIs, web services, and even databases.

role-based methods for one web service?

I am trying to set up a (for now) really simple web service. By simple, I mean it only has a small amount of actual work to do on the code-side. It only really has one method/function: the client sends a user login, and the service responds with an otherwise very secure detail about the user (for the purposes of this question, let's say the user's birthday).
I have a lot of questions, but for now I'm wondering:
I am considering having two versions of this method. In version one, the client can only make a generic request with no variable information. The service will respond with the birthday of whoever is authenticated in the client's session. In version two, the client is allowed to query any user name (so really, anything they want) and get back either the birthday or "Nothing found", etc.
The point of offering both would be that most developers just want the birthdate of the current user so that it can be applied to that session. To extend my example: a user logs in, and the developer wants to show "Happy Birthday" if it is applicable. The owners of the service/data don't want the developer's client to have any access, real or conceptual, to anything about the user, not even their login; they just want to accommodate the developer's goal, as it is really nice. The developer doesn't want to be responsible for potentially having access to anything; he just wants to be nice.
Version two is available to some user-support groups. They actually need to look up birthdates of users who call in so that they can confirm the users are old enough to, let's say, rent a car. They may even have to look up multiple users to see who is most eligible out of the group to get the best deal.
So I guess the big question, finally, is whether or not these two methods can exist in the same service?
The protocol, at this point, is more likely to be SOAP-based than RESTful, so simply having URLs that both resolve to the same service but offer different methods is probably not an option.
What I need, ideally, is a way to reveal operations in the WSDL based on role. Obviously the documentation given to either group would reflect only the operations appropriate for that role, but ideally the developer/client would a) not see any operations they shouldn't, b) receive the same type of response for trying to use a forbidden operation as they would for a non-existent one, and c) most ideally, receive the aforementioned error because for their role the operation really DOESN'T exist, not because the service took extra precautions in case the client tried anyway (which it will, FYI, but I don't want that to be the first and only level of obfuscation).
Am I dreaming the impossible dream?
Quick Addendum
I should have been more specific about this, I realize. When I say "role-based" I am referring to service accounts, not user accounts. So in my hypothetical situation above, the user-support app that would allow querying any user ID would be using one service account with the privileges to do so, not checking the role of the agent logged in to the session (which would be done to get into the app, obviously, but not to call the service).
Why not have two methods:
GetMyBirthday();
GetBirthday(string userName);
Any user can call the first method; only privileged users can call the second method. You use role-based authorization and reject calls to the second method from unauthorized users.
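Rendered in Clojure to stay with this page's examples, the guard on the second method can be as small as the following sketch; the :user-support role and lookup-birthday are placeholders:

(defn lookup-birthday [username]
  ;; placeholder for the real data access
  (get {"alice" "1990-04-01"} username))

(defn get-my-birthday [session]
  (lookup-birthday (:username session)))

(defn get-birthday [session username]
  (if (contains? (:roles session) :user-support)
    (lookup-birthday username)
    (throw (ex-info "Unauthorized" {:status 403}))))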
I don't see why you'd want to hide methods in the WSDL based on roles. In many cases you'll be accessing the WSDL only to build a proxy in a development environment, and won't need it at runtime.

How can I uniquely identify a desktop application making a request to my API?

I'm fleshing out an idea for a web service that will only allow requests from desktop applications (and desktop applications only) that have been registered with it. I can't really use a "secret key" for authentication because it would be really easy to discover and the applications that use the API would be deployed to many different machines that aren't controlled by the account holder.
How can I uniquely identify an application in a cross-platform way that doesn't make it incredibly easy for anyone to impersonate it?
You can't. As long as you put information in an uncontrolled place, you have to assume that information will be disseminated. Encryption doesn't really apply, because the only encryption-based approaches involve keeping a key on the client side.
The only real solution is to put the value of the service in the service itself, and make the desktop client be a low-value way to access that service. MMORPGs do this: you can download the games for free, but you need to sign up to play. The value is in the service, and the ability to connect to the service is controlled by the service (it authenticates players when they first connect).
Or, you just make it too much of a pain to break the security: for example, by putting a credential check at the start and end of every single method, and, because eventually someone will create a binary that patches out all of those checks, by loading pieces of the application from the server, with credential and timestamp checks in place and a different memory layout for each download.
Your comment proposes a much simpler scenario. Companies have a much stronger incentive to protect access to the service, and there will be legal agreements in effect regarding their liability if they fail to protect access.
The simplest approach is what Amazon does: provide a secret key, and require all clients to sign their requests with that secret key. Yes, rogue employees within those companies can walk away with the secret. So you give the company the option (or maybe require them) to change the key on a regular basis, perhaps daily.
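The signing in that scheme is ordinary HMAC. A minimal Clojure sketch of the client side; the header layout is an assumption, the HMAC-SHA256 primitive is standard:

(import '(javax.crypto Mac)
        '(javax.crypto.spec SecretKeySpec))

;; Sign the request payload with the shared secret; the server recomputes
;; the HMAC with its copy of the key and rejects mismatches.
(defn sign [^String secret ^String payload]
  (let [mac (Mac/getInstance "HmacSHA256")]
    (.init mac (SecretKeySpec. (.getBytes secret "UTF-8") "HmacSHA256"))
    (format "%064x" (BigInteger. 1 (.doFinal mac (.getBytes payload "UTF-8"))))))

;; e.g. send (sign daily-key request-body) in an X-Signature header.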
You can enhance that with an IP check on all accesses: each customer will provide you with a set of valid IP addresses. If someone walks out with the desktop software, they still can't use it.
Or, you can require that your service be proxied by the company. This is particularly useful if the service is only accessed from inside the corporate firewall.
Encrypt it (the secret key), hard-code it, and then obfuscate the program. Use HTTPS for the web service so that the key is not caught by network sniffers.
Generate the key using hardware-specific IDs - processor ID, MAC address, etc. Think of it as a deterministic GUID.
You can then encrypt it and send it over the wire.
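A rough JVM sketch of deriving such a deterministic ID; note that getHardwareAddress can return nil (in VMs, for instance), so this is illustrative only:

(import '(java.net InetAddress NetworkInterface)
        '(java.util UUID))

;; Derive a stable, machine-specific identifier from the primary MAC address.
(defn machine-id []
  (let [nic (NetworkInterface/getByInetAddress (InetAddress/getLocalHost))
        mac (some-> nic .getHardwareAddress)]
    (when mac
      (str (UUID/nameUUIDFromBytes mac)))))  ;; deterministic (type 3) UUID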