How to structure an S3 bucket for access control

I know this could be a trivial problem, but I think it is important to do things the right way.
We have an internal application that is currently used by 80 users, and we want to migrate our storage to S3.
We have 3 environments: dev, test, prod, and I was thinking of a structure like this:
dev
  user-1
  ...
  user-n
    assets (profile picture, other public data)
    generated documents (private)
test
prod
For this part we have 3 user roles (ROLE_USER, ROLE_TEAMLEAD, ROLE_ADMIN). Whoever has the user role should be able to access only his/her own objects, whoever has the team lead role can also access all the documents of his/her team, and whoever has the admin role can access all the documents.
What is the safest way to design this, so that when I make a call with an object and a userId/username I get back all the objects that belong to that person?
Would it be a good idea to create groups (which should also be easy to update if a team lead leaves, or if a user changes his/her team lead), and also to have AWS accounts for all our users?
Any idea/good material will help, thanks.

If your users are IAM (or Cognito) users, the structure you have can't accomplish the access-control goals with static policies. If you're able to update the IAM policies when membership changes, then the structure can work.
Your IAM policy conditions for regular users and admins would be pretty simple to meet the objectives. Each user's access to their own prefix can be allowed by a policy that permits the S3 actions on keys prefixed with their username (the ${aws:username} policy variable). Granting access for admins can be done through a group policy on the admin group.
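For illustration, here is a minimal boto3 sketch of that per-user policy; the bucket name ("my-app-storage") and group name ("users") are assumptions, not from the question:

import json
import boto3

iam = boto3.client("iam")

# Each member of the group may only read/write objects under dev/<their-username>/.
# Bucket and group names are hypothetical.
per_user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-app-storage/dev/${aws:username}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-app-storage",
            "Condition": {"StringLike": {"s3:prefix": "dev/${aws:username}/*"}},
        },
    ],
}

iam.put_group_policy(
    GroupName="users",
    PolicyName="per-user-prefix-access",
    PolicyDocument=json.dumps(per_user_policy),
)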
The problem you have is with the team lead role. Here you have two dimensions of access: user and role, but the file structure contains just one of those pieces of information -- you can't determine which objects a particular team lead should see from the object structure alone. That is, you can't construct a group/bucket policy that grants access according to the requirements without knowing all the usernames in that team (since directories are organized by user only).
This could be fixed if you organized your structure by nesting users within team directories:
team1
  user1
  user2
team2
  user3
  user-N
Then you could apply a group policy for each team lead group to allow access to objects under the respective team's directory. The IAM policy would not have to change when team leads or team members change. This is also consistent with the Controlling access to a bucket with user policies guide.
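As a sketch, the per-team group policy could then be written once against the team prefix and left alone as membership changes; the bucket and group names here are hypothetical:

import json
import boto3

iam = boto3.client("iam")

# Read access to everything under team1/, independent of who is on the team.
team_lead_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-app-storage/team1/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-app-storage",
            "Condition": {"StringLike": {"s3:prefix": "team1/*"}},
        },
    ],
}

iam.put_group_policy(
    GroupName="team1-leads",
    PolicyName="team1-read-access",
    PolicyDocument=json.dumps(team_lead_policy),
)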
However, this implies a strictly one-to-one relationship between users and teams, which may not be the case for you. And, if users change teams, they'll need their directory in S3 moved.
Alternatively, using the structure you propose, you could generate IAM policies based on group membership at a moment in time, specifying all the user directories belonging to a particular team in the policy. However, whenever the group membership changes, the policy will have to change, too.
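A rough sketch of that generation step, assuming team membership is tracked as an IAM group (all names are hypothetical):

import json
import boto3

iam = boto3.client("iam")

def build_team_lead_policy(bucket: str, member_group: str) -> dict:
    # Collect the per-user directories of everyone currently in the group.
    # (For brevity, pagination of get_group is ignored.)
    users = iam.get_group(GroupName=member_group)["Users"]
    resources = [f"arn:aws:s3:::{bucket}/dev/{u['UserName']}/*" for u in users]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:GetObject", "Resource": resources}
        ],
    }

# Re-run this whenever group membership changes.
policy = build_team_lead_policy("my-app-storage", "team1-members")
iam.put_group_policy(
    GroupName="team1-leads",
    PolicyName="team1-member-dirs",
    PolicyDocument=json.dumps(policy),
)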
As an aside, you may also want to consider using separate buckets for your different environments instead of top-level directories. That way, you can effectively test changes that affect the entire bucket (like applying bucket policies) independently for each environment.

Related

GCS bucket should be accessible only to specific people

I have created a GCS bucket in my GCP project where I will save some HTML files. Users will access those HTML files from a browser using the object URL.
I want the HTML URL to be accessible only to specific people who belong to my organization. Even if someone from outside hits that URL, the data should not be accessible to them.
What's the way to do that?
In (1) you can find the two available methods for controlling access to the objects inside your Google Cloud Storage buckets:
-Uniform: you provide access to different users depending on the IAM roles you grant them (2). All of the objects inside the same bucket share the same policy (you can also define a group of objects with the same prefix instead of the whole bucket).
-Fine-grained: apart from IAM roles, you can also use Access Control Lists (3) to define special policies for each object. This way, a user can have access to only some of the objects inside your bucket.
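As an illustration of the fine-grained approach, here is a minimal sketch using the google-cloud-storage Python client; the bucket, object, and email address are placeholders:

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("pages/report.html")

# Grant read access on this single object to one user; other objects in the
# bucket are unaffected. Requires fine-grained (ACL) access on the bucket.
blob.acl.user("alice@example.com").grant_read()
blob.acl.save()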
Once you have defined the appropriate policies according to the desired permissions to be granted to each user, you will need to share with them the Authenticated URL of the object and, according to the GCS UI, "Only users granted permission can access the object with this link".
Conversely, if you would like to make an object publicly available to everyone on the Internet, you can follow (4) in order to create a public URL for a certain object. This is also described in (5):
Most of the operations you perform in Cloud Storage must be authenticated. The only exceptions are operations on objects that allow anonymous access. Objects are anonymously accessible if the allUsers group has READ permission.
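For completeness, a minimal sketch of making a single object public with the same client (names are placeholders):

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("pages/public.html")

# Grants READ to allUsers on this object, so its public URL works without
# authentication. Only possible on buckets with fine-grained access.
blob.make_public()
print(blob.public_url)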

Google BigQuery: grant service account permissions to create jobs in only some specific datasets

Problem: I have a project in BigQuery where all my data is stored. Within this project I created multiple datasets containing different views. Now I want to use different service accounts to query the different datasets containing different views via Grafana (if that matters). These service accounts should only be able to query the views (and therefore a specific dataset) meant for them.
What I tried: I granted BigQuery User, Viewer, or Editor permissions (I tried all of them) at the dataset level (and also BigQuery Metadata Viewer at the project level). When I query a view, I receive the error:
User does not have bigquery.jobs.create permission in project xy.
Questions: It is not clear to me whether granting the bigquery.jobs.create permission at the project level will allow the user to query all datasets instead of only the one I want him to have access to.
Is there any way to allow the user to create jobs only on a single dataset?
Update October 2021
I've just seen that this question went unanswered back then but still gets a lot of views. I believe the possibilities have changed a bit since I asked, so here is how I'm handling it now:
I give the respective service account the role roles/bigquery.jobUser at the project level. This allows it to create jobs in general; however, since I don't grant any other permissions yet, it cannot query data yet.
Then I grant the role roles/bigquery.dataViewer at the dataset level. That makes it possible for the service account to query only the dataset I granted the permission on.
It is also possible to grant roles/bigquery.dataViewer at the table level, which will restrict access to only that specific table.
In case you want the service account not only to query (view) the data but also, for example, to insert or change it, replace roles/bigquery.dataViewer with the role having the necessary permissions (or assign that role in addition).
How to grant the permissions:
On dataset level
On table or view level
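A sketch of the dataset-level grant with the google-cloud-bigquery Python client; the project, dataset, and service account names are placeholders, and "READER" is the legacy dataset role that corresponds to roles/bigquery.dataViewer:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")
dataset = client.get_dataset("my-project.reporting")

# Append a reader entry for the service account. Note that
# roles/bigquery.jobUser still has to be granted at the project level
# separately (e.g. in the console), as described above.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="grafana-sa@my-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])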
We had the same problem; we solved it by creating a custom role and assigning the custom role to the particular dataset.
You can grant the bigquery.user role on a specific dataset as indicated in this guide. The bigquery.user role contains the bigquery.jobs.create permission as well as other basic permissions related to querying datasets. You can check the full list of permissions for this role in this list.
As suggested above, you can also create custom roles having only the exact permissions you want by following this piece of documentation.

How to easily assume an IAM user's permissions for testing (without using their credentials)?

In the course of working with AWS I quite frequently run into a situation where I need to confirm that a certain user or group indeed has the access they should or should not have (or to debug a policy that doesn't work correctly). For this purpose, I have created a "myusername-assumable" role whose permissions I can modify, and then assume it to test the given access. However, the problem is that many users have a complex collection of policies, comprised of multiple group memberships in addition to some directly attached policies. Since a role apparently can't be a member of a group, I currently have to painstakingly rebuild a user's permissions, policy by policy, to match the permissions of the user or group I need to validate. To test an IAM user's permissions I could create a temporary set of keys, of course, but I would like to avoid that: a user can choose to rotate their keys at any point, and the presence of an extra key set would be confusing to them.
So my question is: is there any way (a script, a set of CLI commands...) to extract all the policies attached to an IAM user, directly or via a group, and then reattach those policies to a role? I will eventually script this, but if someone happens to have an existing solution, that would be great!
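In case it is useful as a starting point, here is a minimal boto3 sketch of that extraction and reattachment; the user and role names are placeholders, and pagination and inline-policy name collisions are ignored:

import json
import boto3

iam = boto3.client("iam")

def copy_user_permissions_to_role(user_name: str, role_name: str) -> None:
    # Attach the user's managed policies to the role and copy the user's
    # inline policies (direct and via groups) onto it.
    groups = [g["GroupName"]
              for g in iam.list_groups_for_user(UserName=user_name)["Groups"]]

    # Managed policies: attached directly and through each group.
    arns = [p["PolicyArn"] for p in
            iam.list_attached_user_policies(UserName=user_name)["AttachedPolicies"]]
    for g in groups:
        arns += [p["PolicyArn"] for p in
                 iam.list_attached_group_policies(GroupName=g)["AttachedPolicies"]]
    for arn in set(arns):
        iam.attach_role_policy(RoleName=role_name, PolicyArn=arn)

    # Inline policies: re-create each one on the role.
    for name in iam.list_user_policies(UserName=user_name)["PolicyNames"]:
        doc = iam.get_user_policy(UserName=user_name, PolicyName=name)["PolicyDocument"]
        iam.put_role_policy(RoleName=role_name, PolicyName=name,
                            PolicyDocument=json.dumps(doc))
    for g in groups:
        for name in iam.list_group_policies(GroupName=g)["PolicyNames"]:
            doc = iam.get_group_policy(GroupName=g, PolicyName=name)["PolicyDocument"]
            iam.put_role_policy(RoleName=role_name, PolicyName=name,
                                PolicyDocument=json.dumps(doc))

copy_user_permissions_to_role("some-user", "myusername-assumable")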

Can I have dynamic User specific permissions using AWS IAM / Cognito?

I'm attempting to develop an application architecture almost exclusively on top of AWS services.
This application has both User and Organization "entities". As one might expect, a User may be an admin, role-x, or role-y of one or more Organizations. (role-x and role-y are just placeholders for roles with some set of specific permissions.) A User may also be standalone (that is, not have a role on any Organization).
Our current thinking is to use DynamoDB to store organization and user specific data. For users this may include some basic information (address, phone number, whatever), and for organizations it may include fields like "mission statement", "business address" and so on.
An admin of an organization would be able to edit all organization fields, whereas a role-x might only be able to update "mission statement" while reading all other fields.
Since I mentioned that a single user may have roles on many different organizations, that might look something like:
user1:
  organizations:
    123: 'admin'
    456: 'role-x'
    789: 'admin'
It's also worth noting that these role assignments are modifiable. New or existing users may be invited to take on a specific role for an organization, and an organization may remove a user from a role.
This is a fairly straightforward type of layout, but I wanted to be very clear about the many-to-many nature of the user, org and roles.
I've been reading IAM and Cognito documentation, as well as how they relate to fine-grained control over DynamoDB items or S3 buckets - but many of the examples focus on a single user accessing their own data rather than a many-to-many role layout.
How might one go about implementing this type of permission system on AWS?
(If policy definitions need to be updated with specific identities (say, for an Organization), can that reliably be done in a programmatic way - or is it ill-advised to modify policies on the fly like that?)
The other answer here is now outdated. AWS has since added Cognito groups, which provide more flexibility. You can use the technique described in this article to achieve that:
https://aws.amazon.com/blogs/aws/new-amazon-cognito-groups-and-fine-grained-role-based-access-control-2/
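For example, a minimal boto3 sketch along the lines of that article; the pool ID, group name, and role ARN are placeholders:

import boto3

cognito = boto3.client("cognito-idp")

# Create a group whose members receive the mapped IAM role when their tokens
# are exchanged for credentials. All identifiers here are hypothetical.
cognito.create_group(
    UserPoolId="us-east-1_example",
    GroupName="org-123-admins",
    RoleArn="arn:aws:iam::123456789012:role/org-123-admin",
    Precedence=1,
)
cognito.admin_add_user_to_group(
    UserPoolId="us-east-1_example",
    Username="user1",
    GroupName="org-123-admins",
)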
Unfortunately, the kind of permission system you are trying to implement is not possible with Cognito at the moment. With Cognito you can currently create unique identities for your users in an identity pool. Users can authenticate using an external provider such as Facebook, Amazon, Google, Twitter/Digits, or any OpenID Connect provider. Users can also authenticate through your own backend authentication process. After a user authenticates, Cognito creates a unique identity for that user. There's a concept of an identity, but there's no concept of groups. All users/identities within one identity pool get credentials from the roles associated with that identity pool. Currently you can specify two roles: one for authenticated identities and one for unauthenticated identities. There's no feature at the moment that lets you specify multiple groups for each identity and attach a role to each group.
For more information on Cognito, you can refer to
https://aws.amazon.com/cognito/faqs/
http://docs.aws.amazon.com/cognito/devguide/getting-started/

Is it possible to create an IAM Policy that allows the creation of only limited IAM Permissions?

As part of the process for onboarding new customers, I need to create an S3 bucket, create a new user, and grant that user permissions to get, list, and put to that bucket. I'd like to automate this process, which means that I need to create a "Provisioning" policy that grants a service only the permissions needed to do these things.
It seems pretty straightforward to use String Conditions in my Provisioning Policy to require that the names of Users and Buckets start with a certain prefix. However, PutUserPolicy seems to just take a text blob as its argument. I'd prefer to limit my Provisioning Policy so it can only create Policies that grant the specific permissions I need here; ideally it would only be able to grant users whose names match a pattern the ability to get, list, and put to buckets whose names match a pattern. (If this Policy is somehow hijacked, I'd prefer to limit the attacker's ability to create a user and grant it all privileges.)
Is there any way to get this level of fine-grained control?
No. IAM doesn't go into this much detail. You can't say "Only allow a user to create a policy with these permissions".
You would need to design your system in such a way that this policy can't be hijacked. Create a template policy and just make sure a new user can't inject anything into the inputs.
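A minimal sketch of that template approach in Python; the naming convention, names, and policy contents are assumptions:

import json
import re
import boto3

iam = boto3.client("iam")

def provision_customer_policy(user_name: str, bucket_name: str) -> None:
    # Fill a fixed policy template after strictly validating the inputs, so a
    # caller can never inject extra actions or resources.
    # Hypothetical naming convention: customer-<lowercase alphanumeric>.
    if not re.fullmatch(r"customer-[a-z0-9-]+", user_name):
        raise ValueError("bad user name")
    if not re.fullmatch(r"customer-[a-z0-9-]+", bucket_name):
        raise ValueError("bad bucket name")
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket_name}",
            },
        ],
    }
    iam.put_user_policy(
        UserName=user_name,
        PolicyName="customer-bucket-access",
        PolicyDocument=json.dumps(policy),
    )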
Also, on a separate note, it would be considered much better practice to use one bucket and give each user a folder inside that bucket. You can still control permissions to a key in a bucket. See the blog for more on this.