AWS: Is it possible to share DynamoDB items across multiple users?

By looking at the documentation on DynamoDB, I was able to find some examples of restricting item access for users based on the table's primary key. However, all of these examples only cover restricting access to a single user. Is there a way to allow access only for a group of users? From what I've read, this would come down to creating IAM groups/roles, but there is a limit on how many of each can be created, and it doesn't seem like doing so programmatically for each item would work well.

Your guess is correct; you would need an IAM policy per shared row.
There are no substitution variables currently available as far as I know to get the group(s) a user is part of, so no single IAM policy will be able to cover your use case.
Not only that, only the partition key can be matched with conditions in the IAM policy, so unless your partition key has a group name as part of it (which implies that users can never change groups), you will require, as you imply, an IAM policy per row in the database, which won't scale.
It could be acceptable if you have controls in place to limit the number of shared items, and are aggressive about cleaning up the policies for items that are no longer shared.
I don't think using AWS's built-in access controls to allow group access is going to work very well, though, and you'll be better off building a higher-level abstraction on top that does have the access control you need (using AWS Lambda, for example).
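For context, the single-user pattern from the DynamoDB fine-grained access examples looks roughly like this boto3 sketch (the table name, account ID and the use of ${aws:userid} are placeholders for illustration); the dynamodb:LeadingKeys condition can only compare the partition key against the caller's own identity, and there is no analogous variable for group membership:

```python
import json
import boto3

iam = boto3.client("iam")

# Per-user fine-grained access: the partition (leading) key must equal the
# caller's identity. There is no substitution variable for "groups the caller
# belongs to", so this pattern cannot express group sharing.
# Table name "SharedItems" and the account ID are assumptions for illustration.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/SharedItems",
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Only the partition key can be constrained here.
                    "dynamodb:LeadingKeys": ["${aws:userid}"]
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="shared-items-per-user-access",
    PolicyDocument=json.dumps(policy_document),
)
```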

Related

Trying to exclude specific files from AWS S3 Lifecycle Configuration

We need to implement an expiration of X days on all customer data due to contractual obligations. Not too big of a deal; that's about as easy as it gets.
But at the same time, some customers' projects have files with metadata, perhaps dataset definitions, which most definitely DO NOT need to go away. We have free rein to tag or manipulate any of the data in any way we see fit. Since we have 500+ S3 buckets, we need a somewhat global solution.
Ideally, we would simply set an expiration on the bucket and another rule for the metadata/ prefix. Except then we have a rule overlap, and metadata/* files will still get the X-day expiration that's been applied to the entire bucket.
We can forcefully tag all objects NOT in metadata/* with something like allow_expiration = true using Lambda. While not out of the question, I would like to implement something a little more built-in with S3.
I don't think there's a way to implement what I'm after without using some kind of tagging and external script. Thoughts?
If you've got a free hand on tagging the objects, you could use a prefix filter, a tag filter, or both with S3 lifecycle rules.
You can filter objects by key prefix, object tags, or a combination of both (in which case Amazon S3 uses a logical AND to combine the filters).
See Lifecycle Filter Rules
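As a rough sketch of the tag-filter approach in boto3 (the bucket name, the allow_expiration tag from your question, and the 30-day retention are assumptions): only tagged objects are expired, so metadata/ objects that never receive the tag are left alone.

```python
import boto3

s3 = boto3.client("s3")

# Expire only objects carrying the allow_expiration tag; objects under
# metadata/ that never receive the tag are untouched by this rule.
# Bucket name and the 30-day retention are assumptions for illustration.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-customer-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-tagged-customer-data",
                "Status": "Enabled",
                "Filter": {
                    "Tag": {"Key": "allow_expiration", "Value": "true"}
                },
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```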
You could automate the creation and management of your lifecycle rules with IaC, for example, terraform.
See S3 Bucket Lifecycle Configuration with Terraform
There's a useful blog on how to manage these dynamically here.
What's more, using tags has a number of additional benefits:
- Object tags enable fine-grained access control of permissions. For example, you could grant an IAM user permissions to read-only objects with specific tags.
- Object tags enable fine-grained object lifecycle management in which you can specify a tag-based filter, in addition to a key name prefix, in a lifecycle rule.
- When using Amazon S3 analytics, you can configure filters to group objects together for analysis by object tags, by key name prefix, or by both prefix and tags.
- You can also customize Amazon CloudWatch metrics to display information by specific tag filters.
Source and more on how to set tags on multiple Amazon S3 objects with a single request.
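If you do go the route of tagging existing objects yourself (the Lambda idea from your question), a minimal sketch might look like the following; the bucket name is an assumption, and note that put_object_tagging replaces an object's entire tag set:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-customer-bucket"  # assumption

# Tag every object NOT under metadata/ so the tag-filtered lifecycle
# rule above can expire it. put_object_tagging overwrites the object's
# whole tag set, so merge with existing tags if you rely on any.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        if obj["Key"].startswith("metadata/"):
            continue
        s3.put_object_tagging(
            Bucket=bucket,
            Key=obj["Key"],
            Tagging={"TagSet": [{"Key": "allow_expiration", "Value": "true"}]},
        )
```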

Change AWS SageMaker LogGroup Prefix?

We have applications for multiple tenants on our AWS account and would like to distinguish between them in different IAM roles. In most places this is already possible by limiting resource access based on naming patterns.
For CloudWatch log groups of SageMaker training jobs, however, I have not seen a working solution yet. The tenants can choose the job name arbitrarily, and hence the only part of the LogGroup name that is available for pattern matching would be the prefix before the job name. This prefix, however, seems to be fixed to /aws/sagemaker/TrainingJobs.
Is there a way to change or extend this prefix in order to make such limiting possible? Say, for example /aws/sagemaker/TrainingJobs/<product>-<stage>-<component>/<training-job-name>-... so that a resource limitation like /aws/sagemaker/TrainingJobs/<product>-* becomes possible?
I think it is not possible to change the log group or log stream names for any of the SageMaker services.

AWS IAM Allow Only Resource Creation

I'm trying to solve a problem with AWS IAM policies.
I need to allow certain users to only delete/modify resources that are tagged with their particular username (this I've solved) while also being able to create any new AWS resource.
The part I haven't solved is the need to be able to create resources without the ability to modify any existing resources (unless they have the right tag).
Is there an existing AWS policy example that allows a user to create any resource (without granting delete/modify)? Is there a way to allow this without having to list every single AWS offering and continuously update it for new offerings?
The AdministratorAccess managed policy will give full rights to all services.
See https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator
I managed to solve this problem with a rather ugly solution, but as far as I can tell it's the only solution.
I found a list of all aws actions: https://github.com/rvedotrc/aws-iam-reference
I then parsed out potentially troubling actions, such as anything with Delete or Terminate in the action name. I used vim/grep for this.
After that I broke that up into multiple aws_iam_group_policy statements. Each statement was attached to a corresponding group. The target users are then added to each of those groups.
Unfortunately, this is pretty ugly and required 5 different groups and policies, but it's the solution I arrived at.
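A rough Python equivalent of the vim/grep filtering described above; the verbs treated as troubling, the input file format (one action per line, e.g. "ec2:RunInstances"), and the chunk size are assumptions, and the split into several documents is what keeps each one under IAM's policy size limit:

```python
import json

DENY_VERBS = ("Delete", "Terminate")  # assumption: verbs considered "troubling"
CHUNK = 400  # assumption: keep each policy well under IAM's size limit

# Assumes a local file with one IAM action per line, such as a list
# exported from the aws-iam-reference repo mentioned above.
with open("all_aws_actions.txt") as f:
    actions = [line.strip() for line in f if line.strip()]

allowed = [a for a in actions if not any(v in a.split(":")[1] for v in DENY_VERBS)]

# Split the allowed actions across several policy documents, one per group.
policies = []
for i in range(0, len(allowed), CHUNK):
    policies.append({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": allowed[i:i + CHUNK],
            "Resource": "*",
        }],
    })

for n, doc in enumerate(policies, start=1):
    with open(f"create-only-policy-{n}.json", "w") as out:
        json.dump(doc, out, indent=2)
```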

Glacier policy for IAM to have full access to only vaults they've created?

There are similar questions around but none seem to quite answer me directly (or I'm too new with AWS to connect the dots myself.) Apologies if this was easily searchable. I've been trying for many days now.
I want to create a policy that I can assign to IAM users for my Glacier account that will allow any IAM user to create a vault, and then grant them most rights for the vaults that they've created (basically all but delete).
The use case/scenario is this: I have multiple Synology NASes spread at multiple sites. I presently have them all backing up to the glacier account each using their own IAM creds. So far so good.
The problem comes when they need to do a restore (or even just a list vaults): they see all vaults in the account. I do not want them to see other NASes' vaults/backups, as it can be confusing and is irrelevant to that site.
So far I'm simply doing all Glacier ops myself but this will not scale for us. (We intend to add about 25 more NASes/sites, presently running about 8-10)
My assumption is that I should be able to do this somehow with a condition statement and some variant of vaults/${userid} but not quite finding/getting it.
I can't affect anything at vault creation (like adding a tag) because it's the Synology Glacier app creating the vault so no way to mod that.
I've seen some solutions for like EC2 that use post-hoc tagging. I'd prefer not to go that route if I can avoid it as it involves other services we don't use and I know little to nothing about (CloudStream(?), CloudWatch(?) and Lambda I think).
I've also thought of multiple linked accounts, which I will use if it's the only way, but with no ability to move vaults to a new account (meaning starting over for these 8) it's a less attractive option.
Seems like a policy for this should be easy enough. Hoping it is and it's just a few clicks over my current Policy writing skills.
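For what it's worth, the kind of policy the question is reaching for might look like the sketch below, keying the vault ARN off the ${aws:username} policy variable. It assumes each NAS's vault names start with its IAM user name (which the Synology app may not do), and it cannot hide other vaults from a ListVaults call, which is account-wide:

```python
import json

# Hypothetical sketch: scope most Glacier actions to vaults whose name
# starts with the calling IAM user's name. Region, account ID and the
# naming convention are assumptions; ListVaults itself is account-wide
# and will still show every vault.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OwnVaultsOnly",
            "Effect": "Allow",
            "Action": [
                "glacier:CreateVault",
                "glacier:UploadArchive",
                "glacier:InitiateJob",
                "glacier:GetJobOutput",
                "glacier:ListJobs",
                "glacier:DescribeVault",
            ],
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/${aws:username}*",
        }
    ],
}

print(json.dumps(policy_document, indent=2))
```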

Amazon AWS: DynamoDB requirements

Objective: Using an iPhone app, I would like users to store objects in DynamoDB and have Fine-Grained Access Control for the objects using IAM with TVM.
The objects will contain only Strings, no images/file storage -- I'm thinking I won't need S3?
Question: Since there is no server-side application, do I still need an EC2 instance? Which suite of AWS services will I have to subscribe to in order to accomplish my objective?
You can use either DynamoDB or S3, and neither of them requires an EC2 instance - there is no dependency.
If it were me, I'd first see if I could get what I wanted done in S3 (because you mentioned it as a possibility), and then go to DynamoDB if I couldn't (i.e. if I wanted to be able to run aggregation queries across my data set). S3 will be cheaper and, depending on what you are doing, may even be faster, and it would allow you to distribute the stored data globally through CloudFront easily, which may be beneficial if you have a globally diverse user base.
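To illustrate the "no EC2 needed" point, here is a minimal Python sketch of a client talking to DynamoDB directly with temporary credentials (table name, key schema, and credential values are placeholders; an iPhone app would use the AWS mobile SDK rather than boto3):

```python
import boto3

# Temporary credentials as a TVM / identity service would vend them
# (placeholder values; on a device these come from the token vending step).
session = boto3.Session(
    aws_access_key_id="ASIA...EXAMPLE",
    aws_secret_access_key="examplesecret",
    aws_session_token="exampletoken",
    region_name="us-east-1",
)

dynamodb = session.resource("dynamodb")
table = dynamodb.Table("UserObjects")  # assumption: table keyed by userId + objectId

# String-only items go straight into DynamoDB; no EC2 or S3 involved.
table.put_item(Item={"userId": "user-123", "objectId": "note-1", "body": "hello"})
item = table.get_item(Key={"userId": "user-123", "objectId": "note-1"})["Item"]
print(item["body"])
```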