Fine-grained access with DynamoDB and user groups - amazon-web-services

I am trying to set up a web application using AWS Amplify. It is backed by a DynamoDB table containing some data, with "user group" as primary key. So whenever a user is logged in, the app should only display data connected to their group. I read some AWS docs about fine-grained access, but it seems that the only way is to use "user_id" as the primary key (hopefully I misunderstood that). If any of you could give me some tips I would be very thankful :)

If I understand correctly, you are trying to apply some sort of row-isolation strategy on your DynamoDB table.
Your intuition about the primary key is right, as stated here, with the caveat that your partition key should contain your group_id (not necessarily the user_id), since you want to isolate access by group.
Using composite partition keys is common practice in a database like DynamoDB.
This structures the table so that it can be restricted by an IAM policy that enables the fine-grained access control on groups that you need.
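As a sketch of what such a policy document could look like, here it is built as a plain Python dict. The table ARN and group id are placeholders, and this assumes one IAM role per group (the `dynamodb:LeadingKeys` condition key restricts access to items whose partition key matches):

```python
import json

def group_scoped_policy(table_arn: str, group_id: str) -> str:
    """Build an IAM policy allowing item access only where the
    partition key (dynamodb:LeadingKeys) equals the group id."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": [group_id]
                }
            },
        }],
    }
    return json.dumps(policy, indent=2)

# Hypothetical table name and group id:
print(group_scoped_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/GroupData",
    "group-42",
))
```

With a role per group, each role gets this policy rendered with its own group id, and a user assuming the role can only touch that group's items.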

Related

Control access to Amazon DynamoDB entries using an entitlements table for federated users

I want to build an AWS architecture for a serverless application which stores files in DynamoDB.
This database stores data related to a given perimeter. On the other hand, I have data (M:N links) which links users of my application to some perimeters.
I want to make sure that my users (authenticated on Amazon Cognito via a federated OIDC provider) only access the data related to one of their perimeters.
What is the best practice for implementing this kind of access control logic with AWS building blocks?
Is it possible to accomplish such access control logic with IAM policies at the DynamoDB level?
You can add a table:
UserPerimeter
---
id (hash key)
userId (GSI hash key)
perimeterId
And as part of the validation in your Lambda, you do a query on the index with the user id from the JWT/Cognito. This checks whether the user has access to the requested perimeter. So basically, protect your DB from your code (which is the only point of access).
You can achieve this from IAM (check this article), but it adds too much complexity for my taste. It would only be useful if the DB is used by multiple products/components/companies (which isn't good practice anyway).
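The in-Lambda check described above can be sketched as a small pure function; `query_user_perimeters` is a stand-in for the real DynamoDB Query against the userId index of the hypothetical UserPerimeter table:

```python
def user_can_access(perimeter_id, user_id, query_user_perimeters):
    """Return True if the user is linked to the requested perimeter.

    query_user_perimeters(user_id) stands in for a DynamoDB Query on
    the userId index of the UserPerimeter table."""
    allowed = {row["perimeterId"] for row in query_user_perimeters(user_id)}
    return perimeter_id in allowed

# Stub standing in for the real index query, for illustration only:
def fake_query(user_id):
    links = {"alice": [{"perimeterId": "p1"}, {"perimeterId": "p2"}]}
    return links.get(user_id, [])

print(user_can_access("p1", "alice", fake_query))  # True
print(user_can_access("p3", "alice", fake_query))  # False
```

Injecting the query function keeps the authorization decision testable without touching DynamoDB; the Lambda would pass in a closure over the real table client.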

AWS DynamoDB permissions based on other data

Is there any way of controlling access to DynamoDB data based on data in other tables? By way of comparison, Firebase Realtime Database rules have access to a snapshot of the entire database when being evaluated, so rules like this are possible:
".write": "root.child('allow_writes').val() === true"
But all my reading of the AWS permissions structure hasn't given me any clue how to achieve the same thing. There are variables that can be tested based on the current authenticated user, and some variables based on the current request, but no way I can see of referencing other data within the database.
AWS doesn't support this case; your only option would be to put the access control in your application.
You can control table-, item-, or attribute-level data access in DynamoDB using IAM policy variables. Frustratingly, AWS doesn't even seem to publish a list of available policy variables. Typically it boils down to using the Cognito sub or AWS userid, which the majority of people don't want to use as partition keys in their tables.
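For reference, this is roughly what the Cognito-based variant looks like: the documented `${cognito-identity.amazonaws.com:sub}` policy variable is substituted at evaluation time, which is exactly why the caller's sub must double as the partition key. Table name and actions here are placeholders:

```python
import json

# Documented policy variable, substituted per-caller by IAM:
COGNITO_SUB = "${cognito-identity.amazonaws.com:sub}"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": [COGNITO_SUB]
            }
        },
    }],
}

print(json.dumps(policy, indent=2))
```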

AWS DynamoDB restrict access to an attribute-value in IAM

I have an AWS DynamoDB table which consists of one key and some attributes.
AWS IAM allows you to restrict access to a specific key: Docs.
But is it possible to "filter" by the value of an attribute?
For example:
"Allow access to all rows, where attribute_A = 1"
I didn't find anything like that after searching for hours. Thank you in advance!
Unfortunately you only have two options when restricting access via IAM:
Control access via Partition Key
Control access to attributes returned
It is not possible to restrict access via an attribute which isn't the Partition Key.
A workaround you could look into:
Leverage a GSI and use the Partition Key as your desired attribute you wish to filter by. You can restrict a user to only query the given Index and filter by the Partition Key of the GSI.
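A sketch of such an index-scoped policy, with hypothetical table and index names (the GSI is assumed to be keyed on attribute_A, and access is granted on the index ARN rather than the base table):

```python
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:Query"],
        # Grant Query on the GSI only, not the base table, so all
        # reads must go through the index keyed on attribute_A:
        "Resource": TABLE_ARN + "/index/attribute_A-index",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Only rows whose attribute_A (the GSI partition key)
                # equals "1" are reachable:
                "dynamodb:LeadingKeys": ["1"]
            }
        },
    }],
}
```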
Read more here: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
HTH
I don't know the details of your exact access case, but IAM policies for DynamoDB can be quite fine-grained.
Please refer to the detailed explanation here:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html

DynamoDB with Cognito limitations

I am trying to implement a public file sharing system for my application using AWS Cognito & DynamoDB. Basically users can create and sign into an account using Cognito and use this account to upload their files. Public meta data that needs to be accessed frequently goes to DynamoDB (such as ratings, download count, upload date, etc.) and the files itself to an S3 bucket.
To ensure that only the Cognito user who shared the file is allowed to delete the DynamoDB item and modify certain private attributes, I am using the Cognito identity id as the primary key for my items in DynamoDB, coupled with a policy rule as described in the docs. As far as I know, there is no other solution.
So far so good, but this obviously means that a user cannot upload more than 1 item to the database since the primary key attributes of DynamoDB items need to be unique, which is not possible since I am using the Cognito identity id for them.
I could of course create one item for each user and store the meta data for each file he owns inside maps, but this wouldn't allow me to query the items by date, rating, etc.
I'm honestly stuck and cannot think of a way to structure my database items any other way to make this work. Is this even feasible with DynamoDB?
You can add a range key with a unique id for each file, while keeping the Cognito identity id as the partition key, which preserves DynamoDB's fine-grained authorization.
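The resulting item shape might look like this (attribute names are illustrative; the identity id as hash key keeps the fine-grained policy working, while the per-file sort key makes each item unique):

```python
def file_item(identity_id, file_id, **meta):
    """DynamoDB item with a composite primary key: the Cognito
    identity id as partition (hash) key and a per-file id as sort
    (range) key, so one user can own many items."""
    item = {"userId": {"S": identity_id}, "fileId": {"S": file_id}}
    # Public metadata (ratings, download count, ...) as extra attributes:
    item.update({k: {"S": str(v)} for k, v in meta.items()})
    return item

item = file_item("us-east-1:1a2b3c", "file-001", rating="4.5")
```

Queries by date or rating would then go through GSIs on those metadata attributes, independent of the access-control key.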

Multiple Access Keys for the Same User

I have discovered a particular user in my company's AWS account that is enabled for programmatic access. I have been tasked with recreating an access key and secret key for one of my colleagues, despite the user already having one, and I want to deactivate the original. From a security standpoint, I feel it is better to have only one access key/secret pair rather than several.
Can anyone tell me if this is a good choice? One of my colleagues asked why I would want to do that, and when I told him my reasoning, I don't think he was 100% convinced it was sound. Can you tell me if there are any advantages to having multiple access key/secret key pairs for the same user? I can't think of any. Also, can you please provide any supporting articles that cover this?
I don't have docs for recommending a single access key per user, but AWS does recommend rotating access keys regularly. See Managing Access Keys for IAM Users, the section titled "Rotating Access Keys".
So, as a best practice, you should do the following on a regular schedule (every 30, 60, or 90 days, etc.):
Create a second access key for your user
Wherever you are using the first access key, replace it with the second
Wait a short time, and confirm the first access key is not being used.
After confirmation, disable or delete the first access key
The two-access-key system allows this rotation to occur while minimizing the window in which an access key is disabled/deleted but still in use. I've been bitten by other tools where you have to disable the old key the moment you generate a new one, because sometimes it takes time to put new keys into use after they're generated.
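A hypothetical helper for the schedule above, deciding which keys are due for rotation (key data would come from something like `iam.list_access_keys`; here it's just tuples):

```python
from datetime import datetime, timedelta, timezone

def keys_due_for_rotation(keys, max_age_days=90, now=None):
    """keys: iterable of (access_key_id, created_at) pairs.
    Returns the ids of keys older than max_age_days, i.e. the
    ones to rotate next."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [key_id for key_id, created in keys if created < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
keys = [
    ("AKIAOLD", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("AKIANEW", datetime(2024, 5, 1, tzinfo=timezone.utc)),
]
print(keys_due_for_rotation(keys, max_age_days=90, now=now))  # ['AKIAOLD']
```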
If a user needs more than one access key, you should question why it is one user rather than multiple. There are benefits to using multiple users:
The permissions can be more granular
If a key gets leaked, there are fewer places where it needs to be replaced
You have a better audit trail of what tools are acting on your account, and when
For these reasons, I recommend only having one access key "in the field".
I think, really, if someone wants to actually use 2 keys for a single user, they're just being lazy.
I create individual IAM users and roles for every tool that needs access. I never reuse them.
Update
AWS recommends rotating access keys on a regular schedule.
Source: http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
Further, their "howto" on the key rotation process uses both allocated access keys on an IAM user:
Source: https://aws.amazon.com/blogs/security/how-to-rotate-access-keys-for-iam-users/
Ergo, aim for one access key "in use" per IAM user at any given time.
Under AWS IAM Access Keys best practices I believe these sections apply:
Use different access keys for different applications. Do this so that you can isolate the permissions and revoke the access keys for individual applications if an access key is exposed. Having separate access keys for different applications also generates distinct entries in AWS CloudTrail log files, which makes it easier for you to determine which application performed specific actions.
Rotate access keys periodically. Change access keys on a regular basis. For details, see Rotating Access Keys (AWS CLI, Tools for Windows PowerShell, and AWS API) in the IAM User Guide and How to Rotate Access Keys for IAM Users on the AWS Security Blog.
The first item clearly gives a reason to use multiple access keys with a single IAM account. I think using multiple keys would also make the second item, key rotation, easier: you could create a second access key, switch your applications over, verify that the previous key is no longer being used to access the AWS API, and then delete the old one.