Can I directly apply GCP's IAM policy to a resource?

I am reading about GCP's IAM policy over here. Now consider this resource hierarchy.
Let's say I want to give the start-instance permission (compute.instances.start) on "instance_a" to abc@gcp.com and the same permission on "instance_b" to xyz@gcp.com. Clearly I cannot create an IAM policy (based on the IAM policy object example mentioned in the article) at the "example-test" folder, because it will not give me the granularity I am looking for.
Is it possible to achieve this level of granularity in GCP?

Permissions are inherited from the top layer (the organisation) down to the lower layers (the resource, in your example the VM). So if you grant a permission at the project level (example-test), it is inherited by all the resources belonging to the project (instance_a and instance_b).
Therefore, you can't (easily) achieve what you want.
However, you do have the option to add conditions to the IAM binding. For example, you can set a condition on the resource name or a resource tag to allow or deny access for one user or another.
But use conditions wisely: it can become a nightmare to debug IAM permission issues if you mix several levels of the hierarchy with different conditions. Keep things homogeneous and as simple as possible.
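As a sketch of the conditional approach, the project-level policy below grants each user a role containing compute.instances.start (roles/compute.instanceAdmin.v1 is one predefined role that includes it — pick a narrower custom role if you prefer) restricted by a condition on the resource name. The condition expressions use IAM's CEL `endsWith` function; the binding titles are made up for illustration:

```json
{
  "bindings": [
    {
      "role": "roles/compute.instanceAdmin.v1",
      "members": ["user:abc@gcp.com"],
      "condition": {
        "title": "instance-a-only",
        "expression": "resource.name.endsWith(\"/instances/instance_a\")"
      }
    },
    {
      "role": "roles/compute.instanceAdmin.v1",
      "members": ["user:xyz@gcp.com"],
      "condition": {
        "title": "instance-b-only",
        "expression": "resource.name.endsWith(\"/instances/instance_b\")"
      }
    }
  ]
}
```

Anchoring the match on `/instances/` rather than the bare instance name avoids accidentally matching other resources whose names merely end with the same string.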

AWS Admin has no EC2 Describe operations

I have an IAM role with AdministratorAccess, but when I upload a custom template to AWS CloudFormation, I get the following error:
Operation failed, ComputeEnvironment went INVALID with error:
CLIENT_ERROR - You are not authorized to call EC2 Describe operations.
It is required to perform CreateLaunchConfiguration operation.
All the other resources seem to complete successfully, so I'm not sure if there is some sort of role delegation taking place?
It is possible that you are affected by Service Control Policies (SCPs), by permissions boundaries, or by other policy types. The policy types that can affect access are:
Identity-based policies
Resource-based policies
Permissions boundaries
Organization SCPs
Access control lists
Session policies
Regarding SCPs:
An SCP restricts permissions for IAM users and roles in member
accounts, including the member account's root user. Any account has
only those permissions permitted by every parent above it. If a
permission is blocked at any level above the account, either
implicitly (by not being included in an Allow policy statement) or
explicitly (by being included in a Deny policy statement), a user or
role in the affected account can't use that permission, even if the
account administrator attaches the AdministratorAccess IAM policy with
*/* permissions to the user.
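To make the quoted behaviour concrete: a hypothetical SCP like the one below, attached anywhere above the member account in the organization, would produce exactly the "not authorized to call EC2 Describe operations" symptom, even for a principal with AdministratorAccess attached:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEC2Describe",
      "Effect": "Deny",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
```

Because an SCP sets the ceiling on what identity-based policies can grant, no policy attached inside the member account can override this Deny.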
Also see How to use service control policies to set permission guardrails across accounts in your AWS Organization
As this article states,
The member accounts of an AWS Organization are unable to see the SCPs
that have been applied to them. Further, when actions are denied,
there is no way to know whether that is due to an IAM policy, an SCP,
or something else (ex. session policy, IAM boundary, resource policy).
This means there will be no indication in the error message from an
API call or in the CloudTrail log to show what denied the call. This
can make debugging issues difficult.
This article has some useful diagrams that show the different things that could be affecting/limiting the access.
I've been working on this problem for four days and finally came up with a solution that I think resolves it.
There are two kinds of CDK bootstrap: legacy and modern. Legacy is the default.
There appears to be a bug in the legacy bootstrap that affects some accounts and not others. I was able to verify that the same code worked for me on one account but produced the exact symptoms of this problem on a newly created account.
If you are not stuck with legacy bootstrap for some reason, just convert over to modern bootstrap. That should make the error (and the reason for the error) go away.
The instructions are here: https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html
Basically, you have to set an environment variable before you call cdk bootstrap, and then you have to change either some code or some configuration in your project.
It is not intuitive that this feature exists in the CDK at all, nor that it would be the fix for this particular problem. However, it cleared it up for me. Maybe it will do the same for others.
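For reference, the configuration change described above can be sketched as follows. This applies to CDK v1, where the switch was gated behind the `CDK_NEW_BOOTSTRAP=1` environment variable (set before running `cdk bootstrap`) plus the feature flag below in cdk.json; the flag name may differ in later CDK versions, and the `"app"` entry is a placeholder for your own project's entry point:

```json
{
  "app": "npx ts-node bin/app.ts",
  "context": {
    "@aws-cdk/core:newStyleStackSynthesis": true
  }
}
```

Check the bootstrapping guide linked above for the exact flag names current for your CDK version before relying on this.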

Is there a way to 'remove' some actions from a managed policy using another managed policy for AWS IAM

I'm currently working on IAM and access, and I'm switching from roles to permission sets (to use AWS SSO). I have many custom managed policies that I can't use with permission sets now, so I'm using AWS managed policies such as PowerUserAccess, ViewOnlyAccess, etc.
Some of them are pretty close to what I need but have a few too many actions. Let's take the PowerUserAccess example.
PowerUserAccess grants all GuardDuty actions. I want to block all write actions.
The perfect AWS managed policy for that is GuardDutyReadOnlyAccess.
Is there an easy way to do that "subtraction"?
PowerUserAccess - "Not" GuardDutyReadOnly?
such as:
ManagedPolicies:
- arn:....:PowerUserAccess
- arn:....:PowerUserAccess - 'not' arn:....:GuarddutyReadOnlyAccess
Or do I have to create an inline policy that inverts the GuardDuty read-only policy? I would like to avoid inline policies if possible.
Thanks!
It doesn't have to be inline, but you will have to create another policy. In your case you probably want to create a customer managed policy that denies the GuardDuty write actions, and attach it to the users (or, even better, to the groups).
Be aware that there is a subtle side effect of using a Deny: if a Deny exists, it always wins. So if you later decide to single out a user and grant them access to GuardDuty, you have to make sure the deny policy is NOT attached to that user; you can't just give them another policy that includes the access.
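A minimal sketch of such a deny policy follows; the action list is illustrative, not an exhaustive enumeration of GuardDuty's write actions, so verify it against the GuardDuty actions reference before using it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyGuardDutyWrites",
      "Effect": "Deny",
      "Action": [
        "guardduty:Create*",
        "guardduty:Update*",
        "guardduty:Delete*",
        "guardduty:Tag*",
        "guardduty:Untag*"
      ],
      "Resource": "*"
    }
  ]
}
```

One pitfall to avoid: do not try to express the "subtraction" with a Deny plus NotAction listing the read-only actions. A Deny/NotAction statement denies every action not listed, across all services, which is almost never what you want.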

Is it bad to use roles/storage.legacyBucketWriter to read/write in a bucket?

I find myself using roles/storage.legacyBucketWriter a lot, which has the following permissions:
storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.list
Maybe it's okay, but it feels odd to me to be using a role with legacy in its name...
I don't want to create a custom role either, because that seems overkill when there is already a role fitting the need.
And yes, there is a role roles/storage.objectAdmin, but it lacks the storage.buckets.get permission.
What do you think?
Remember that legacy roles are related to primitive roles on GCP. A role is 'legacy' because it exactly matches the pre-IAM permissions granted via ACLs on an object. It all depends on your use case; the recommended best practice is to follow the principle of least privilege.
Keep in mind, as mentioned in the official documentation:
Legacy Bucket IAM roles work in tandem with bucket ACLs: when you add
or remove a Legacy Bucket role, the ACLs associated with the bucket
reflect your changes.
Also, consider the scope of the read/write legacy roles as described in this table.
Finally, take a look at the Security, ACLs, and access control section to follow the best practices recommended for the Cloud Storage service.
Consider using the Storage Admin Role (roles/storage.admin) instead of a legacy role. This role grants both storage.buckets.* and storage.objects.* permissions, making it a suitable match for your requirements.
According to the Storage Admin Role description, this role
grants full control over buckets and objects. When applied to an individual bucket, control is limited to that specific bucket and its associated objects.
This role is particularly useful when utilizing the gsutil rsync command targeting a bucket.
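If you go the roles/storage.admin route, note that you can grant it on a single bucket rather than project-wide, which keeps the blast radius small. A sketch of a bucket-level IAM policy (the member address and the bucket you apply it to are placeholders) that you would set on the bucket itself, e.g. via `gsutil iam set` or the Cloud Console:

```json
{
  "bindings": [
    {
      "role": "roles/storage.admin",
      "members": ["user:someone@example.com"]
    }
  ]
}
```

Applied at the bucket level, this grants storage.buckets.get plus full object permissions on that bucket only, covering the combination the question asks about without touching any other bucket in the project.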

AWS S3 bucket access for role/users- Select Role Type

My intention is simple- to create a role that I can assign to a standard user of my AWS account so that they can read/write to one of my S3 buckets.
I've created a policy to apply to the role and I'm happy with that bit.
The thing I'm a bit confused about is the "Select Role type" screen in the management console (see attached image). I can't work out what I'm supposed to choose at this stage as none of the descriptions seem to apply to the simple thing I'm trying to achieve.
Does anyone have any idea?
I think you are on the wrong path here. Roles are not ACLs for users, but for systems and services.
See: IAM Roles
If you want to grant a user access to some AWS resources, you should have a look at the policy section. Either use a pre-built policy (like AmazonS3ReadOnlyAccess or AmazonS3FullAccess) or define a policy of your own.
You can then assign this policy to a user. If you want to manage multiple users this way, you can also use groups to assign policies to users.
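For the "define a policy on your own" route, a sketch of a read/write policy scoped to a single bucket looks like this (`my-bucket` is a placeholder for your bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

Note the split between the two statements: bucket-level actions like ListBucket apply to the bucket ARN itself, while object-level actions apply to the `/*` object ARN. Mixing them up is a common reason this kind of policy silently fails.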

Is it possible to create an IAM policy that allows the creation of only limited IAM permissions?

As part of the process for onboarding new customers, I need to create an S3 bucket, create a new user, and grant that user permissions to get, list, and put to that bucket. I'd like to automate this process, which means that I need to create a "Provisioning" policy that grants a service only the permissions needed to do these things.
It seems pretty straightforward to use string conditions in my provisioning policy to require that the names of users and buckets start with a certain prefix. However, PutUserPolicy seems to just take a text blob as its argument. I'd prefer to limit my provisioning policy so that it can only create policies that grant the specific permissions I need here; ideally, it could only grant users whose names match a pattern the ability to get, list, and put to buckets whose names match a pattern. (If this policy is somehow hijacked, I'd prefer to limit the attacker's ability to create a user and grant it all privileges.)
Is there any way to get this level of fine-grained control?
No. IAM doesn't go into this much detail. You can't say "Only allow a user to create a policy with these permissions".
You would need to design your system in such a way that this policy can't be hijacked. Create a template policy and just make sure a new user can't inject anything into the inputs.
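What you can constrain is *which* resources the provisioning service may touch, via resource ARN patterns; the prefixes and account ID below are placeholders. This sketch illustrates the limit described above: the Resource pattern on iam:PutUserPolicy restricts which users a policy can be attached to, but nothing in IAM inspects the policy text itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateCustomerUsers",
      "Effect": "Allow",
      "Action": ["iam:CreateUser", "iam:PutUserPolicy"],
      "Resource": "arn:aws:iam::123456789012:user/customer-*"
    },
    {
      "Sid": "CreateCustomerBuckets",
      "Effect": "Allow",
      "Action": "s3:CreateBucket",
      "Resource": "arn:aws:s3:::customer-*"
    }
  ]
}
```

Hence the advice about templates: keep the policy document the service writes as a fixed template in your own code, and validate that the only values interpolated into it are the (pattern-checked) user and bucket names.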
Also, on a separate note, it would be considered much better practice to use one bucket and give each user a folder inside that bucket. You can still control permissions to a key in a bucket. See the blog for more on this.