According to the documentation, which says:
Child policies cannot restrict access granted at a higher
level. For example, if you grant the Editor role to a user for a
project, and grant the Viewer role to the same user for a child
resource, then the user still has the Editor role grant for the child
resource.
Does it also mean that if I assign a user restrictive access at a higher level but more permissive access at the resource level, that user will have the more permissive access? In other words, will the more permissive policy override the restrictive policy no matter at which level it is granted?
Example:
If I grant UserA the Viewer role for a project but assign the Editor role at the resource level, will UserA have Editor-level access to the resource?
Does it also mean that if I assign a user restrictive access at a higher
level but more permissive access at the resource level, that user will
have the more permissive access?
Yes.
In other words, will the more permissive policy override the restrictive
policy no matter at which level it is granted?
Do not think of it as overriding. Think of it as granting additional privileges.
If I grant UserA the Viewer role for a project but assign the Editor role
at the resource level, will UserA have Editor-level access to the resource?
Correct, UserA will have Editor-level access to the resource.
Think of the hierarchy as Organization / Folders / Projects / Resources. If you have permissions at a higher level, you have at least those permissions at every lower level. This is similar to a company's organization: if you are V.P. of a division (project), you are still V.P. for each group (resource) under that division. The opposite also works: you can be an ordinary team member for the organization (project Viewer) but the manager for one group (Editor on compute resources) and just a project Viewer for the other resources.
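To make the additive behaviour concrete, here is a sketch of the two IAM policies involved, in the standard GCP policy JSON binding format. The email address and the two top-level labels are illustrative only; in reality each policy is a separate object attached to the project and to the resource respectively:

```json
{
  "project_policy": {
    "bindings": [
      { "role": "roles/viewer", "members": ["user:usera@example.com"] }
    ]
  },
  "resource_policy": {
    "bindings": [
      { "role": "roles/editor", "members": ["user:usera@example.com"] }
    ]
  }
}
```

When access to the resource is evaluated, the effective set of roles is the union of both policies, so UserA acts as an Editor on the resource.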
Just to add to the above answer: if the union of policies produces a conflict, DENY takes precedence.
For example, consider these two policies:
At the folder level:
Allow storage bucket creation for user x@a.com
At the Project1 level:
Deny storage bucket creation for user x@a.com
The DENY policy takes precedence, and user x@a.com won't be able to create the bucket.
https://cloud.google.com/resource-manager/docs/organization-policy/understanding-hierarchy#reconciling_policy_conflicts
GCP IAM first checks the deny policies attached to the principal and only then evaluates the allow policies.
The GCP policy evaluation flowchart linked below will help clarify the concept.
https://cloud.google.com/iam/docs/deny-overview#policy-eval
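As a rough sketch of what such a deny rule looks like, here is the general shape of a GCP IAM deny policy. The display name, principal, and permission strings are illustrative; the principal and permission formats follow the v2 deny-policy conventions and should be checked against the deny-policy documentation:

```json
{
  "displayName": "Block bucket creation",
  "rules": [
    {
      "denyRule": {
        "deniedPrincipals": ["principal://goog/subject/x@a.com"],
        "deniedPermissions": ["storage.googleapis.com/buckets.create"]
      }
    }
  ]
}
```

Because deny rules are evaluated before allow policies, a rule like this blocks bucket creation for the user even if an allow binding exists higher in the hierarchy.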
I have two IAM roles in AWS, A and B. In Role A I have an explicit deny to prevent certain permissions from being performed in Elastic MapReduce (EMR). How can I prevent a scenario where Role B could be updated to have an allow on the permissions that were denied in Role A?
I am not very familiar with our IAM federation, but my understanding is that users access a federated portal URL and are presented with an initial role that they can select from a radio button, based on the AD groups that they are in. From there, users can change role/assume role if permissions are set up properly. Currently we have ~150 roles, and I would need to ensure they do not have the ability to circumvent the explicit deny in Role A.
If possible, it is always better to avoid Deny policies because they often do not work the way people expect. AWS has "deny by default" behaviour, so it is better to control access by limiting Allow permissions.
Unfortunately, many organisations use "Grant All" permissions, such as granting s3:* permissions and giving people Admin permissions. These examples grant too many permissions, which might then need a Deny to override.
Some services (eg Amazon S3, Amazon SQS) also have the ability to apply service-specific policies (eg S3 Bucket Policies) that can grant permissions in addition to IAM.
A good place to start is to strongly limit who has iam: permissions. Only admins should have the ability to use IAM (and that should only be granted via an IAM Role that the admins need to Assume). Controlling such access avoids the scenario you are worried about, where an IAM Role could be modified to permit unwanted access.
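One way to sketch that control is an explicit deny on IAM actions in the baseline policy given to non-admin roles (the Sid is a placeholder, and in practice you would scope the statement or carve out the admin role rather than attach it universally):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIamModification",
      "Effect": "Deny",
      "Action": "iam:*",
      "Resource": "*"
    }
  ]
}
```

With a statement like this in effect for the ~150 federated roles, none of them can edit any IAM role or policy, including Role B, so the explicit deny in Role A cannot be circumvented from those roles.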
For worst-case situations where it is vital that access to certain resources are strictly controlled (eg S3 buckets with HR information), a common practice is to create a separate AWS Account and grant limited cross-account access. This way, access will not be granted via generic Admin policies.
I am reading about GCP's IAM policy over here. Now consider this Resource hierarchy.
Let's say I want to give the start-instance permission (compute.instances.start) on "instance_a" to abc@gcp.com and the start-instance permission on "instance_b" to xyz@gcp.com. Clearly I cannot create an IAM policy (based on the IAM policy object example mentioned in the article) at the "example-test" folder, because it will not give me the granularity I am looking for.
Is it possible to achieve this level of granularity in GCP?
Permissions are inherited from the top layer (Organisation) down to the lowest layer (the resource, in your example the VM). So, if you grant a permission at the project level (example-test), it is inherited by all the resources belonging to the project (instance_a and instance_b).
Thereby, you can't (easily) achieve what you want.
But in fact, you do have the possibility to add conditions to the IAM role binding. You can add a condition on the resource name or a resource tag, for example, to allow or disallow access for one user or another.
But use conditions wisely: it can become a nightmare to debug IAM permission issues if you mix several levels of the hierarchy with different conditions. Try to keep things homogeneous and as simple as possible.
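As a rough sketch, a conditional binding of this kind might look like the following. The role, user, and instance name are taken from the question; the condition expression is CEL, and the exact `resource.name` format for Compute Engine instances should be verified against the IAM Conditions documentation:

```json
{
  "bindings": [
    {
      "role": "roles/compute.instanceAdmin.v1",
      "members": ["user:abc@gcp.com"],
      "condition": {
        "title": "instance_a_only",
        "expression": "resource.name.endsWith(\"/instances/instance_a\")"
      }
    }
  ]
}
```

A symmetric binding with xyz@gcp.com and instance_b covers the second user.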
Can we define an object-level ACL with a group into which I can club users from other AWS accounts? The idea is to have a group to which I can add or remove users. I know ACLs are maintained at the object level, and for every new grantee I have to add it to the ACL. But assigning a group to the ACL and then modifying that group would be a much easier approach. I know it supports predefined groups like Authenticated Users. Is there a way to create other predefined groups based on application need?
Don't use ACLs, they're a legacy access control mechanism and they're going to bite you.
According to the docs (emphasis mine):
Access control lists (ACLs) are one of the resource-based access
policy options (see Overview of managing access) that you can use to
manage access to your buckets and objects. You can use ACLs to grant
basic read/write permissions to other AWS accounts. There are limits
to managing permissions using ACLs.
For example, you can grant permissions only to other AWS accounts; you
cannot grant permissions to users in your account. You cannot grant
conditional permissions, nor can you explicitly deny permissions. ACLs
are suitable for specific scenarios. For example, if a bucket owner
allows other AWS accounts to upload objects, permissions to these
objects can only be managed using object ACL by the AWS account that
owns the object.
As you can see from the limitations, it doesn't seem suitable for your use case.
There is a better solution for resource-based access policies, and it's called bucket policies. They allow you to grant access to prefixes in the bucket based on IAM principals such as users and roles, or on AWS services, even from other AWS accounts (note that IAM groups cannot be used as principals).
I suggest you review the Access Policy Guidelines before making your decision.
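For illustration, a minimal cross-account bucket policy might look like this (the account ID, role name, bucket, and prefix are all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPartnerRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/partner-uploader" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/shared/*"
    }
  ]
}
```

Since a role can be named as the Principal, the group-like maintenance you describe happens on the other account's side: they add or remove the users who are allowed to assume that role, with no change needed in your bucket policy.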
I find myself using roles/storage.legacyBucketWriter a lot, which has the following permissions:
storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.list
Maybe it's okay, but it feels odd to me to be using a role with legacy in its name ...
I don't want to create a custom role either because it seems overkill since there is this role fitting the need.
And yes there is a role roles/storage.objectAdmin but it lacks the storage.buckets.get permission.
What do you think ?
Remember that legacy roles are related to primitive roles on GCP. It is 'Legacy' because it exactly matches the pre-IAM permissions granted via the legacy role on an object. It all depends on your use case; the recommended best practice is to follow the principle of least privilege.
Keep in mind as is mentioned at the official documentation:
Legacy Bucket IAM roles work in tandem with bucket ACLs: when you add
or remove a Legacy Bucket role, the ACLs associated with the bucket
reflect your changes.
Also, consider the scope of the read/write legacy roles as described in this table.
Finally, take a look at the Security, ACLs, and access control section to follow the best practices recommended for the Cloud Storage service.
Consider using the Storage Admin Role (roles/storage.admin) instead of a legacy role. This role grants both storage.buckets.* and storage.objects.* permissions, making it a suitable match for your requirements.
According to the Storage Admin Role description, this role
grants full control over buckets and objects. When applied to an individual bucket, control is limited to that specific bucket and its associated objects.
This role is particularly useful when utilizing the gsutil rsync command targeting a bucket.
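As a sketch, granting the role on a single bucket rather than project-wide is just a bucket-level policy binding (the member email is a placeholder):

```json
{
  "bindings": [
    {
      "role": "roles/storage.admin",
      "members": ["user:someone@example.com"]
    }
  ]
}
```

Applied to the bucket's own IAM policy, this gives full control over that bucket and its objects only, without touching the rest of the project.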
I read https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/, which to me answered the what and when, but not why.
I'm new to AWS and trying to learn it.
Why create two methods, and not just one? Based on the example in the article, why not just have the IAM policy handle both use cases, for example by letting the IAM policy JSON include a 'Principal' entry ("based on association" or "??"), so that it works like the bucket policy? It looks like any service that requires policy control gets another type of policy; for example, a YYY service policy will have a Principal and an Action "YYY:".
The only reason I can think of is that S3 requires a lot of access control (fine-grained, grouping, etc.), so having a policy type specific to S3 can ease management and use fewer back-end resources?
S3 authorizes requests by testing all applicable authorities, in the "context" of user, bucket, and object.
See How Amazon S3 Authorizes a Request.
This document is confusing on first read, but it does give a better sense of what's happening with the multiple policies.
A few points to keep in mind:
Users do not own buckets. Accounts own buckets, and accounts own users.
If a user creates a bucket, the account that owns that user always owns that bucket.
If a user creates an object, the account that owns that user always owns that object -- even if the bucket where the object was created is owned by a different account.
Wait, what?
If my account gives your user permission to create an object in my bucket, you would actually own the object. Unless you give me permission to read it, I can't read it. Since it's in my bucket, and I am paying to store it, I can delete it, but that's absolutely all I can do to that object unless you give me access to it.
So there are three levels of permissions at play -- things users are allowed to do (IAM policies), things accounts allow to be done to their bucket and their objects in that bucket (bucket policies and ACLs) and things accounts allow to be done to objects they own (object ACLs).
The default action is implicit deny, but anything my account has the authority to allow can be allowed by granting it in any one place, as long as it isn't explicitly denied elsewhere. An explicit deny will always deny, without exception.
Implications of the model:
my user, my bucket, my object requires only one grant; access can be granted in any of the three places and only needs to be granted in one place, because my account owns all the resources... so I can do this in IAM policy, or bucket policy, or on the object.
my user, your bucket requires two grants -- I allow my user in IAM policy, and you must allow my user in your bucket policy. I don't have authority to do things to your bucket without your consent, and you don't have authority to allow my user to do things without my consent.
it is possible to make my object in my bucket publicly readable via either the object ACL or via bucket policy, but not IAM policy, because IAM policies apply to users, and "everybody" is not one of my IAM users.
So, S3 needs multiple sources of grants because of the permissions model. If you aren't doing anything cross-account, some of this would not be obvious, since you would be unaware of some of the possible combinations.
My preference is for my bucket policies to require little attention. Users are given access in IAM, public objects are made public at the object level (you can do this in bucket policy, but I prefer it to be explicit at the object level), and so bucket policies have limited purpose -- sometimes there are bucket policy rules that deny access for all IP addresses except a list, usually the bucket policy denies uploads without AES-256 (so you can't "forget" to encrypt objects), and sometimes there are origin access identity rules for interoperating with CloudFront... but I do very little customization of bucket policies, because that's my design philosophy.
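As an example of the encryption rule mentioned above, a bucket policy statement along these lines is a common pattern (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" }
      }
    }
  ]
}
```

Because explicit deny always wins, this statement blocks any PutObject request that does not ask for AES-256 server-side encryption, no matter what the uploader's IAM policy allows.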
There are various reasons why there is IAM permission policy and resource-based policy such as S3 bucket policy.
Let's say you have an S3 bucket and you want to grant access to another account. That is not possible using only an IAM policy; you need the bucket policy to include the account or IAM entity as the Principal.
Also, you cannot use Principal in an IAM permission policy: when you attach the policy to an IAM user, that user becomes the Principal when making requests.
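To make the contrast concrete, an identity-based IAM policy omits Principal entirely, because the principal is implicitly whoever the policy is attached to (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

A bucket policy granting the same access to another account would need an explicit Principal element naming that account or entity, which is exactly what the IAM policy grammar does not accept.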
Please have a look into the following for more details:
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html