Currently I am working with AWS DynamoDB, and I am struggling with user authorization and restricting access to specific items inside a DynamoDB table.
I have already read the documentation and come across multiple blog posts, but unfortunately I haven't found my use case covered yet.
Some background information:
Each user of the web app belongs to a company, and each company has multiple orders. These orders live in the DynamoDB table "Orders". What I want to achieve is that users can only read order items belonging to their own company.
My Approach
My idea was to create the "Orders" table with a partition key of "companyId" and a sort key of "orderId". During my research I figured out that I can restrict access through IAM policies, but I couldn't find a way to reference the user's companyId inside the policy. Users authenticate through AWS Cognito.
My Question
How can I restrict user access to specific items inside a DynamoDB table, taking into account that each user belongs to a company and should only see orders of that company?
Looking forward to some help!
AWS has published the blog post Isolating SaaS Tenants with Dynamically Generated IAM Policies, which explains exactly what you want to achieve.
In short:
Use the customer id (your companyId) as the partition key.
Create an IAM policy with access to the Orders table, like the one below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem",
        "dynamodb:Query",
        "dynamodb:DescribeTable"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Orders",
      "Effect": "Allow"
    }
  ]
}
Create a template for the session policy, in which you replace {{customerId}} with the incoming request's customer id at runtime:
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:*"
  ],
  "Resource": [
    "arn:aws:dynamodb:*:*:table/{{table}}"
  ],
  "Condition": {
    "ForAllValues:StringEquals": {
      "dynamodb:LeadingKeys": [ "{{customerId}}" ]
    }
  }
}
Now, invoke STS (Security Token Service) with the above role and session policy to get temporary credentials whose access is limited to a single tenant's (customer's) data.
Below is pseudocode in AWS SDK for Java v2 style; you can write the equivalent with your programming language's SDK.
// AssumeRoleWithWebIdentity takes the user's OpenID token plus the scoped
// session policy and returns credentials limited to that tenant.
AssumeRoleWithWebIdentityResponse response = sts.assumeRoleWithWebIdentity(ar -> ar
    .webIdentityToken(openIdToken)
    .policy(scopedPolicy)
    .roleArn(role)
    .roleSessionName("tenant-session"));
Credentials tenantCredentials = response.credentials();

// Wrap the temporary credentials in a provider for the DynamoDB client.
DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
    .credentialsProvider(StaticCredentialsProvider.create(AwsSessionCredentials.create(
        tenantCredentials.accessKeyId(),
        tenantCredentials.secretAccessKey(),
        tenantCredentials.sessionToken())))
    .build();
Finally, create the DynamoDbClient object using these temporary credentials and use it. This client will only have access to the current user's customer data.
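To illustrate the isolation, here is a minimal sketch (customerId is the value substituted into the session policy; "order-123" is a made-up order id):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;

// Allowed: the partition key equals the customer id baked into the session policy.
GetItemResponse order = dynamoDbClient.getItem(g -> g
        .tableName("Orders")
        .key(Map.of(
                "companyId", AttributeValue.builder().s(customerId).build(),
                "orderId", AttributeValue.builder().s("order-123").build())));

// Any other companyId value fails the dynamodb:LeadingKeys condition,
// and DynamoDB returns an AccessDeniedException.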
Hope this helps!
Using custom attributes, you can create a backend layer that checks these attributes, queries DynamoDB with the corresponding key, and returns the results (a sketch follows below) - https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html
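As a minimal sketch of this first option (assumptions: a custom attribute named custom:companyId, the auth0 java-jwt library for decoding, and a DynamoDbClient named dynamoDb; none of these names come from the question):

import java.util.Map;
import com.auth0.jwt.JWT;
import com.auth0.jwt.interfaces.DecodedJWT;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

// Decode the Cognito ID token; in real code, verify its signature against
// the user pool's JWKS before trusting any claims.
DecodedJWT idToken = JWT.decode(rawIdToken);
String companyId = idToken.getClaim("custom:companyId").asString();

// Scope the query server-side to the caller's company.
QueryResponse orders = dynamoDb.query(q -> q
        .tableName("Orders")
        .keyConditionExpression("companyId = :cid")
        .expressionAttributeValues(Map.of(
                ":cid", AttributeValue.builder().s(companyId).build())));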
The second option is to set up a role for each company - https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html#using-rules-to-assign-roles-to-users
Related
Background
I have a Spectrum schema referencing a Glue Data Catalog (my_spectrum_schema). The external schema was created with an IAM role (s3_glue_role) with the AWSGlueServiceRole and AmazonS3ReadOnlyAccess managed policies attached and a trust relationship like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"ForAnyValue:StringEquals": {
"sts:ExternalId": [
"arn:aws:redshift:<region>:<acct>:dbuser:<cluster>/<admin_user>"
]
}
}
}
]
}
The role is attached to the cluster, and admin_user is able to query tables in my_spectrum_schema both from the AWS Console and also from a local SQL client.
Issue
I want to allow members of an existing Redshift group to query tables in the schema.
Based on syntax I found in the AWS docs, I expected that adding the group data_group to the trust policy would allow members of the group to query the data:
"sts:ExternalId": [
"arn:aws:redshift:<region>:<acct>:dbuser:<cluster>/<admin_user>",
"arn:aws:redshift:<region>:<acct>:dbgroup:<cluster>/<data_group>"
]
However, queries under any user in data_group throw this error:
Query 1 ERROR: ERROR:
-----------------------------------------------
error: Not authorized to get credentials of role arn:aws:iam::<acct>:role/<s3_glue_role>
code: 30000
context:
query: 0
location: xen_aws_credentials_mgr.cpp:402
process: padbmaster [pid=6826]
-----------------------------------------------
Attempted resolutions
I tried dropping and re-creating the schema after the trust policy had been updated: no effect.
I tried changing dbgroup to dbuser for data_group: (predictably) didn't work.
I searched for similar SO issues. "Allow AWS Redshift Cluster DB user group to assume role" is related but doesn't have any answers.
I added a user from data_group explicitly to the trust relationship, which resolved the error and allows the user to query tables in the schema:
"sts:ExternalId": [
"arn:aws:redshift:<region>:<acct>:dbuser:<cluster>/<admin_user>",
"arn:aws:redshift:<region>:<acct>:dbgroup:<cluster>/<data_group>",
"arn:aws:redshift:<region>:<acct>:dbuser:<cluster>/<single_user>"
The workaround isn't ideal because it requires me to manage data_group and the trust policy separately. When users are added to or removed from the group, the trust policy should reflect that.
Question
Is there a different way to specify a Redshift group in the trust policy that will behave the way I'm expecting, or is there another approach altogether that accomplishes this without feeling like an anti-pattern?
I would like to add an image upload capability for my users.
So far I've followed a simple YouTube tutorial and created a new bucket with the following Bucket policy:
{
"Version": "2012-10-17",
"Id": "Policy1578265217545",
"Statement": [
{
"Sid": "statement-1",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-bucket/images/*"
}
]
}
And the following CORS policy:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"POST",
"DELETE",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
}
]
I've also created an IAM user, and attached the following policy to it:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "statement1",
"Effect": "Allow",
"Action": [
"s3:Put*",
"s3:Get*",
"s3:Delete*"
],
"Resource": [
"arn:aws:s3:::my-bucket/*"
]
}
]
}
I got my access key and secret key, which I successfully used to upload and delete files.
I have a strong feeling that the above policies are not really secure at the moment (e.g. I'm planning to make the CORS policy stricter by only allowing the bucket to be accessed from a certain domain).
My main question now is: how can I make sure that if user A uploads an image, no other user (until allowed) can access it?
I think this would be possible if each user of the application had an IAM user account in AWS. Then you could restrict access to the images using the corresponding IAM user. But I believe this is probably not the case.
Something better would be, instead of accessing the images directly on AWS, to access the images via your application. You could have a table storing each image's path in the bucket, the corresponding owner(s), and a flag indicating whether the image can be accessed publicly.
Then, when you need a specific image, you make a request to your application, which checks whether the requesting user is the owner of the image; if so, the application downloads the image from S3 using the AWS SDK and sends it over to the user.
This approach decouples AWS from your end users, and your app becomes responsible for managing who can access what. Since every request to AWS passes through your app, there is less risk of compromising the AWS infrastructure in place.
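A minimal sketch of that flow, where ImageRecord, imageRepository, imageKey, and currentUserId are hypothetical names for the pieces described above (AWS SDK for Java v2):

import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

// Hypothetical metadata row from your own database table.
record ImageRecord(String key, String ownerId, boolean isPublic) {}

// Look up the image's path, owner, and public flag in your table.
ImageRecord image = imageRepository.findByKey(imageKey);
if (!image.isPublic() && !image.ownerId().equals(currentUserId)) {
    throw new SecurityException("Not allowed to access this image");
}

// The app, not the end user, talks to S3 and relays the bytes back.
ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(g -> g
        .bucket("my-bucket")
        .key(image.key()));
return bytes.asByteArray();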
Object tagging and attribute-based access control could be used for conditional access to different objects.
Use case: application not supporting individual IAM users:
Objects are assigned an ownerID tag with an id value,
Users are assigned a UUID, or their profile has a tag with some kind of id value, and
The API function used to fetch objects compares the object tag with the user's id/tag and retrieves only objects with matching values (sketched after the link below).
Use case: application supporting AWS IAM users / SSO users:
Objects are assigned a tag with an appropriate value (id, department, etc.),
AWS users are assigned a tag with an appropriate value (id, department, etc.), and
An IAM role and an access control policy are created to allow conditional access depending on tag values.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html
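For the first use case, a hedged sketch of that API-side comparison (the ownerID tag key comes from the list above; s3, objectKey, and currentUserId are assumed to exist; AWS SDK for Java v2):

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectTaggingResponse;

// Read the object's tags and only serve it when the ownerID tag
// matches the id of the requesting user.
GetObjectTaggingResponse tagging = s3.getObjectTagging(t -> t
        .bucket("my-bucket")
        .key(objectKey));
boolean isOwner = tagging.tagSet().stream()
        .anyMatch(tag -> tag.key().equals("ownerID") && tag.value().equals(currentUserId));
if (!isOwner) {
    throw new SecurityException("Object does not belong to this user");
}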
I have a web app where users (logged in via Cognito with an ID Token JWT) can upload/download files from S3. Users should only access S3 resources related to their organization. For that, I'm thinking of separating the S3 paths by organization:
"arn:aws:s3:::my_bucket/org1"
"arn:aws:s3:::my_bucket/org2"
"arn:aws:s3:::my_bucket/org3"
Question
How do you secure the S3 paths ("folders") so that users can only access resources for their org? I did some research, see options below - but is there an easier way? This seems like a fairly common use case. Thanks!
Options I've considered
EDIT: I initially thought of using JWT claims (#1 below). But it's dawning on me that AWS prefers the use of "Identity Pools" for this kind of thing. This makes sense because you can connect different IdPs (e.g. Auth0) to an Identity Pool and set policies using identities (Identity Role -> Identity Policy). So I'm going to try that and report back if it works.
S3 policy using JWT claims. I haven't found docs on using JWTs, so I'm not sure this is possible. Use the JWT claim as the path selector in the S3 policy. Each user has a custom claim called "organization".
User 1 has JWT with claim "organization: organization1"
User 2 has JWT with claim "organization: organization2"
etc.
Sample bucket policy [1] (not sure if syntax is correct)
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition": {"StringLike": {"s3:prefix": ["${cognito-identity.amazonaws.com:organization}/*"]}} // here
},
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:organization}/*"] // here
}
]
}
IAM policies with the Identity Pool's "Role Based Access Control" feature [0].
Assign each Cognito user a group in the User Pool. This is their organization (e.g. "organization1").
Map the group to an IAM role called "OrgRole" with an IAM policy that only allows S3 access to the path for their organization.
Something like this...
User  | Cognito Group | IAM Role | IAM Policy S3 Resource Mapping
user1 | organization1 | OrgRole  | my-bucket/organization1/*
user2 | organization2 | OrgRole  | my-bucket/organization2/*
user3 | organization2 | OrgRole  | my-bucket/organization2/*
user4 | organization3 | OrgRole  | my-bucket/organization3/*
Sample IAM policy for this role
"Resource": ["arn:aws:s3:::my_bucket/${current-users-cognito-group}/*"]
Cons: complicated setup, and per [0] it only supports 25 groups per user pool, which doesn't scale. That makes me think my setup is incorrect.
[0] https://aws.amazon.com/blogs/aws/new-amazon-cognito-groups-and-fine-grained-role-based-access-control-2/
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html
[2] https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
I'm having a problem accessing a new DynamoDB table via a successfully authenticated Cognito user.
I get the following AccessDeniedException when attempting a scan of the table (using the AWS JavaScript SDK):
Unable to scan. Error: {
"message": "User: arn:aws:sts::MY-ACCOUNT-NUM:assumed-role/Cognito_VODStreamTestAuth_Role/CognitoIdentityCredentials
is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:MY-ACCOUNT-NUM:table/VideoCatalog",
"code": "AccessDeniedException",
"time": "2019-01-27T02:25:27.686Z",
"requestId": "blahblah",
"statusCode": 400,
"retryable": false,
"retryDelay": 18.559011800834146
}
The authenticated Cognito user policy has been extended with the following DynamoDB section:
{
"Sid": "AllowedCatalogActions",
"Effect": "Allow",
"Action": [
"dynamodb:BatchGetItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-2:MY-ACCOUNT-NUM:table/VideoCatalog"
]
}
Shouldn't this be sufficient to give my authenticated Cognito users access to any DynamoDB table I might create, as long as I specify the table resource as I do above? Or do I also need to add "Fine-grained access control" under the table's 'Access control' tab?
I can say that I created the VideoCatalog DynamoDB table under my non-root Administrator IAM role (represented above by MY-ACCOUNT-NUM). Is that a problem? (Prior to trying to move to a DynamoDB table I was using a JSON file on S3 as the video catalog.)
IAM confused!
Looking at the error message from AWS and the policy document that you provided, I can see that there are two different regions here.
AWS is saying that your user does not have access to arn:aws:dynamodb:us-east-1:MY-ACCOUNT-NUM:table/VideoCatalog, whereas your policy document is providing access to arn:aws:dynamodb:us-east-2:MY-ACCOUNT-NUM:table/VideoCatalog.
Are you perhaps provisioning your resources in two different regions by mistake?
I'm having trouble understanding how to use fine-grained access control on DynamoDB when logged in using Cognito User Pools. I've followed the docs and googled around, but for some reason I can't seem to get it working.
My AWS setup is listed below. If I remove the condition in the role policy, I can get and put items with no problem, so it seems likely that the condition is the problem. But I can't figure out how or where to debug policies that depend on authenticated identities: what variables are available, what their values are, etc.
Any help would be greatly appreciated!
DynamoDB table
Table name: documents
Primary partition key: userID (String)
Primary sort key: docID (String)
DynamoDB example row
{
"attributes": {},
"docID": "0f332745-f749-4b1a-b26d-4593959e9847",
"lastModifiedNumeric": 1470175027561,
"lastModifiedText": "Wed Aug 03 2016 07:57:07 GMT+1000 (AEST)",
"type": "documents",
"userID": "4fbf0c06-03a9-4cbe-b45c-ca4cd0f5f3cb"
}
Cognito User Pool User
User Status: Enabled / Confirmed
MFA Status: Disabled
sub: 4fbf0c06-03a9-4cbe-b45c-ca4cd0f5f3cb
email_verified: true
Role policy for "RoleName"
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem"
],
"Resource": [
"arn:aws:dynamodb:ap-southeast-2:NUMBER:table/documents"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${cognito-identity.amazonaws.com:sub}"
]
}
}
}
]
}
Login information returned from cognitoUser.getUserAttributes()
attribute sub has value 4fbf0c06-03a9-4cbe-b45c-ca4cd0f5f3cb
attribute email_verified has value true
attribute email has value ****#****com
Error message
Code: "AccessDeniedException"
Message: User: arn:aws:sts::NUMBER:assumed-role/ROLE_NAME/CognitoIdentityCredentials is not authorized to perform: dynamodb:GetItem on resource: arn:aws:dynamodb:ap-southeast-2:NUMBER:table/documents
The policy variable "${cognito-identity.amazonaws.com:sub}" is not the user's sub from Cognito User Pools. It is in fact the identity id of the user, which is generated by the Cognito Federated Identities service when you federate a User Pools user through an identity pool.
Since the value of "${cognito-identity.amazonaws.com:sub}" never matches what you have in your DynamoDB row, the request fails with AccessDenied. For this to work, the userID in your DynamoDB item should actually be the identity id, not the sub. Currently, there is no direct link between IAM policy variables and the Cognito User Pools service.
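If you want to keep the policy as-is, the leading key stored in DynamoDB has to be the identity id. A sketch of resolving it from a User Pool ID token (the pool ids and provider name are placeholders, not values from the question; AWS SDK for Java v2):

import java.util.Map;
import software.amazon.awssdk.services.cognitoidentity.CognitoIdentityClient;
import software.amazon.awssdk.services.cognitoidentity.model.GetIdResponse;

CognitoIdentityClient identity = CognitoIdentityClient.create();

// Exchange the User Pool ID token for the federated identity id; this is
// the value that ${cognito-identity.amazonaws.com:sub} resolves to.
GetIdResponse idResponse = identity.getId(r -> r
        .identityPoolId("ap-southeast-2:IDENTITY-POOL-ID")  // placeholder
        .logins(Map.of(
                "cognito-idp.ap-southeast-2.amazonaws.com/ap-southeast-2_USERPOOLID",  // placeholder
                idTokenJwt)));
String identityId = idResponse.identityId();  // store this as userID / the leading key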
Here are some doc links which might help.
1. IAM roles with Cognito Federated Identity Service
2. Integrating User Pools with Cognito Federated Identity Service