AWS IAM policy permissions clash issue

I have tried to create a policy that will prevent the deregistration of AMIs unless the AMIs have the appropriate "delete this" tag. When I run the IAM policy simulator, the policy doesn't seem to work and the AMIs are still allowed to be deregistered, because users are already associated with policies that are more permissive than my new policy.
Is it possible to make my custom policy take priority over other policies? Or do I have to create new policies that explicitly do not have the Deregister AMI permission?

The following IAM policy will deny deregistration of an AMI (just replace with your concrete resource ARN) when said AMI does not have the "delete" tag or that tag's value is not "yes". This works regardless of any possible Allow permissions that the calling identity might have.
This works because policy statements with the "Deny" effect always take precedence over any "Allow" statements.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:DeregisterImage",
      "Resource": "arn:aws:ec2:*::image/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceTag/delete": "yes"
        }
      }
    }
  ]
}
Read this page for the detailed algorithm IAM uses to evaluate permissions: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
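For illustration, here is a sketch (not taken from the question) of the kind of broad pre-existing "Allow" the question describes. Even with something like this attached, the explicit Deny above still wins for any AMI that lacks the delete=yes tag:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}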


AWS IAM assuming same role with session tag for tenant isolation

I am working on a serverless app powered by API Gateway and AWS Lambda. Each Lambda has a separate role for least-privilege access. For tenant isolation, I am implementing ABAC with IAM.
Here is an example of the role that provides GetObject access to an S3 bucket, using <TenantID> as the key prefix.
Role Name: test-role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      // Same role ARN: ability to assume itself
      "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
    }
  ]
}
I am assuming the same role in the Lambda function, passing the session tag as follows:
const credentials = await sts.assumeRole({
  RoleSessionName: 'hello-world',
  Tags: [{
    Key: 'TenantID',
    Value: 'tenant-1',
  }],
  RoleArn: 'arn:aws:iam::<aws-account-id>:role/test-role'
}).promise();
I am trying to achieve ABAC with a single role instead of two (one role with just the assume-role permission, another role with the actual S3 permission) so that it is easier to manage the roles and we won't hit the hard limit of 5000.
Is it good practice to do so, or does this approach have a security vulnerability?
It should work, but feels a bit strange to re-use the role like this. It would make more sense to me to have a role for the lambda function, and a role for the s3 access that the lambda function uses (for a total of two roles).
Also make sure that you're not relying on user input for the TenantID value in your code, because it could be abused to access another tenant's objects.
TLDR: I would not advise you to do this.
Ability to assume itself
I think there is some confusion here. The JSON document is a policy, not a role. A policy in AWS is a security statement of who has access to what under what conditions. A role is just an abstraction of a "who".
As far as I understand the question, you don't need two roles to do what you need to do. But you will likely need two policies.
There are two types of policies in AWS that are of interest to this question: identity-based policies and resource-based policies:
Identity-based policies are attached to some principal, which could be a role.
Resource-based policies are attached to a resource - which also could be a role!
A common use case of roles & policies is for permission delegation. In this case, we have:
A Role, that other principals can assume, maybe temporarily
A trust policy, which controls who can assume the role, under what conditions, and what actions they can take in assuming it. The trust policy is a special case of a resource policy, where the resource is the role itself.
A permissions policy, which is granted to anyone who assumes the role. This is a special case of an identity policy, which is granted based on the assumption of a role.
Key point: both policies are associated to the same role. There is one role, two policies.
Now, let's take a look at your policy. Clearly, it's trying to be two things at once: both a permissions policy and a trust policy for the role in question.
This part of it is trying to be the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole"
  ],
  // Same role ARN: ability to assume itself
  "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
}
Since the "Principal" section is missing, it looks like it's allowing anyone to assume this role, which seems a bit dodgy to me, especially since one of your stated goals was "least privilege access".
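For illustration only, here is a sketch (built on assumptions, not the asker's actual setup) of a trust policy for test-role that names explicit principals: the Lambda service, so the role can be used as the execution role, and the role itself, so it can be self-assumed as in the question. Note that passing session tags generally also requires the trust policy to allow sts:TagSession.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<aws-account-id>:role/test-role"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}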
This part is trying to be the permissions policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
This doesn't need a "Principal" section, because it's an identity policy.
Presumably you're reusing that policy as both the trust policy and the permissions policy for the given role. It seems like you want to avoid hitting the policy (not role) maximum quota of 5000 defined here:
Customer managed policies in an AWS account
Even if it somehow worked, it doesn't make sense and I wouldn't do it. For example, think about the trust policy. The trust policy is supposed to be a resource-based policy attached to the role. The role is the resource. So specifying a "Resource" in the policy doesn't make sense, like so:
"Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
Even worse is the inclusion of this in the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
What does that even mean?
Perhaps I'm misunderstanding the question, but from what I understand my advice would be:
Keep your one role - that's OK
Create two separate policies: a trust policy & a permissions policy
Consider adding a "Principal" element to the trust policy
Attach the trust & permissions policies to the role appropriately
Explore other avenues to avoid exceeding the 5000 policy limit

SageMaker Studio domain creation fails due to KMS permissions

Question
Please help me understand the cause of and solution to the following problem.
Problem
SageMaker Studio domain creation fails due to KMS permissions. The IAM role specified for SageMaker, arn:aws:iam::316725000538:role/SageMaker, has the KMS permissions required as specified in https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html.
Domain creation failed
Unable to create Amazon EFS for domain 'd-1dq5c9rpkswy' because you don't have permissions to use the KMS key 'arn:aws:kms:us-east-2:316725000538:key/1e2dbf9d-daa0-408d-a290-1633b615c54f'. See https://docs.aws.amazon.com/sagemaker/latest/dg/api-permissions-reference.html for required permissions for CreateDomain action.
The page Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference lists the IAM permissions required for the CreateDomain action.
The IAM permissions required for the CreateDomain action have been attached to the IAM role.
I had the same problem when trying to use the aws/s3 key. I created my own Customer Managed Key (CMK) and it worked just fine.
I think it's related to the AWS assigned policy on the aws/s3 key.
This part:
"Condition": {
"StringEquals": {
"kms:CallerAccount": "120455730103",
"kms:ViaService": "s3.us-east-1.amazonaws.com"
}
I don't think SageMaker meets the kms:ViaService condition.
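For reference, here is a hedged sketch of the kind of key policy statement a customer managed key could carry so that the SageMaker execution role from the question can use the key. The action list is illustrative, not the documented minimum for CreateDomain:
{
  "Sid": "AllowSageMakerRoleToUseTheKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::316725000538:role/SageMaker"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey",
    "kms:CreateGrant"
  ],
  "Resource": "*"
}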
Apart from SageMakerFullAccess, we need to create a new policy and attach it to your user.
Create a new policy with the JSON below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "sagemaker:CreateUserProfile",
        "sagemaker:CreateModel",
        "sagemaker:CreateLabelingJob",
        "sagemaker:CreateFlowDefinition",
        "sagemaker:CreateDomain",
        "sagemaker:CreateAutoMLJob",
        "sagemaker:CreateProcessingJob",
        "sagemaker:CreateTrainingJob",
        "sagemaker:CreateNotebookInstance",
        "sagemaker:CreateCompilationJob",
        "sagemaker:CreateImage",
        "sagemaker:CreateMonitoringSchedule",
        "sagemaker:RenderUiTemplate",
        "sagemaker:UpdateImage",
        "sagemaker:CreateHyperParameterTuningJob"
      ],
      "Resource": "*"
    }
  ]
}

What is the purpose of 'resource' in an AWS resource policy?

As per the title, what is the purpose of the Resource field when defining a resource policy, given that the resource policy is already going to be applied to a particular resource?
For example, in this AWS tutorial, the following policy is defined and attached to a queue. What is the purpose of the Resource field?
{
  "Version": "2008-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" }
      }
    }
  ]
}
S3 is a good example of where you need to include the Resource element in the policy. Let's say you want to have an upload location in an S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Upload",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/uploads/*"]
    }
  ]
}
In these cases you really don't want to default the Resource to the bucket as it could accidentally cause global access. It is better to make sure the user clearly understands what access is being allowed or denied.
But why make it required for resource policies where it isn't needed, like SQS? For that, let's dive into how resource policies are used.
You can grant access to a resource in two ways:
Identity based policies for IAM principals (users and roles).
Resource based policies
The important part to understand is how resource policies are used. Resource policies are actually used by IAM in the policy evaluation logic for authorization. To put it another way, resources are not responsible for the actual authorization; that is left to IAM (Identity and Access Management).
Since IAM requires that every policy statement have a Resource or NotResource element, the service would need to add the resource when sending the policy to IAM if it were missing. So let us look at the implications, from a design perspective, of having the service add the resource when it is missing.
The service would no longer just need to verify that the policy is correct.
If the resource is missing from the statement, the service would need to update the policy before sending it to IAM.
There would now be the potential for two different versions of a resource policy: the one the user created and edits, and the one sent to IAM.
It increases the potential for user error and accidentally opening up access by attaching a policy to the wrong resource. If we modify the policy statement in the question and drop the Resource and Condition elements, we have a pretty open policy. This could easily be attached to the wrong resource, especially from the CLI or Terraform.
{
  "Sid": "example-statement-ID",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "*"
  ]
}
Note that I answered this from a general design perspective, based on my understanding of how AWS implements access management. How AWS actually implemented the system might be a little different, but I doubt it, because policy evaluation logic really needs to be optimized for performance, so it's better to do that in one service, IAM, instead of in each service.
Hope that helps.
Extra reading if you are interested in the details of the Policy Evaluation Logic.
You can deny access in six ways:
Identity policies
Resource policies
Organization policies (SCPs), if your account is part of an organization
IAM permissions boundaries, if set
Session policies, if used when assuming a role
Implicitly, if there was no Allow policy
Here is the complete IAM policy evaluation logic workflow.
There is the policy as you defined it.
The resource the policy is applied to: A (I don't know where you will apply it).
The resource named in the policy: B, arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE.
Once you apply the policy to some service, such as an EC2 instance, that is A; the instance can then only perform SQS:SendMessage on resource B. A and B are totally different.
If you want to restrict resource A so that it cannot access other resources and can only access the defined ones, then you have to define those resources, such as B, in the policy.
Your policy is only valid for resource B, and that is not the resource A that you applied it to.

How to lockdown S3 bucket to specific users and IAM role(s)

In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a SQL Server 2012 Express database from on-prem to an RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
The SQLSERVER_BACKUP_RESTORE option attached to the RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::test-bucket/"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectMetaData",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1486769843194",
  "Statement": [
    {
      "Sid": "Stmt1486769841856",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::test-bucket",
        "arn:aws:s3:::test-bucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "<root_id>",
            "<user1_userid>",
            "<user2_userid>",
            "<user3_userid>",
            "<role_roleid>:*"
          ]
        }
      }
    }
  ]
}
I can confirm the bucket policy does restrict access to only the IAM users that I specified, but I am not sure how it treats IAM roles. I used the :* syntax above per a document I found on the aws forums where the author stated the ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult to navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
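As a hedged illustration (AROAEXAMPLEROLEID below is a made-up placeholder, not a real role ID), the aws:userid entry for the role in the bucket policy's condition would then look like this, with the :* suffix matching every session of that role:
"Condition": {
  "StringNotLike": {
    "aws:userid": [
      "AROAEXAMPLEROLEID:*"
    ]
  }
}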
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your only recourse is presumably to persuade AWS support to remove the bucket policy. A safer configuration would be to use this to deny only the object-level permissions.
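Here is a hedged sketch of that narrower approach, denying only object-level actions (the action list is illustrative; the placeholders are the same ones used in the question) so that bucket-level permissions such as s3:PutBucketPolicy are never caught by the Deny:
{
  "Sid": "DenyObjectAccessExceptListedPrincipals",
  "Effect": "Deny",
  "Principal": "*",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/*",
  "Condition": {
    "StringNotLike": {
      "aws:userid": [
        "<root_id>",
        "<user1_userid>",
        "<role_roleid>:*"
      ]
    }
  }
}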

How can I give only specific AWS "iam:PutUserPolicy" permissions?

Use case: In our application we need to give iam:PutUserPolicy permissions to IAM entities. That is trivial. We can assign the policy below to the IAM entity to which we want to give the iam:PutUserPolicy permission:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:PutUserPolicy"
      ],
      "Resource": "*"
    }
  ]
}
Let's say we have another requirement and assign iam:PutUserPolicy to IAM user U1. This means that U1 can now assign ANY policy to ANY IAM user. The second "ANY" can be avoided by changing "Resource": "*" to "Resource": "user-arn", but how do we deal with the first ANY?
Is there a way to give the "iam:PutUserPolicy" permission such that putting only the "iam:CreateUser" permission is allowed? Or, alternatively, such that only "iam:CreateUser" is blocked and putting all other policies is allowed?
I went through the AWS documentation and I found conditions kind of helpful but I could not find any IAM service-specific keys and values though I did find some for EC2 and SNS.
As an example we can assign the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "s3:prefix": "arn:aws:s3:::BUCKET-NAME/home/" }
      }
    }
  ]
}
which gives permissions to all other S3 folders and buckets except the home folder in a particular bucket.
Can we do something like this?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:PutUserPolicy"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "iam:policy-contains": "iam:CreateUser" }
      }
    }
  ]
}
AWS has just introduced Managed Policies for AWS Identity & Access Management, which provide a fresh approach to sharing and maintaining IAM policies across IAM entities, notably also including delegating permissions management; see Controlling Access to Managed Policies:
Managed policies give you precise control over how your users can manage policies and manage permissions for others. You can separately control who can create, update, and delete policies, and who can attach and detach policies to and from principal entities (users, groups, and roles). You can also control which policies a user can attach or detach, and to and from which entities. [emphasis mine]
A typical scenario is that you give permissions to an account administrator to create, update, and delete policies. Then, you give permissions to a team leader or other limited administrator to attach and detach these policies [...].
Section Controlling Permissions for Attaching and Detaching Managed Policies provides an Example policy that allows attaching only specific managed policies to only specific groups or roles, which conceptually allows you to achieve what you are looking for:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "iam:AttachGroupPolicy",
      "iam:AttachRolePolicy"
    ],
    "Resource": [
      "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:group/TEAM-A/*",
      "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role/TEAM-A/*"
    ],
    "Condition": {
      "ArnLike": {
        "iam:PolicyArn": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:policy/TEAM-A/*"
      }
    }
  }
}
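That example targets groups and roles; a hedged adaptation for users (reusing the TEAM-A path convention from the example above, which is an assumption rather than something the question specifies) might look like the following, constraining both which users can be targeted and which policies can be attached to or detached from them:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "iam:AttachUserPolicy",
      "iam:DetachUserPolicy"
    ],
    "Resource": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/TEAM-A/*",
    "Condition": {
      "ArnLike": {
        "iam:PolicyArn": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:policy/TEAM-A/*"
      }
    }
  }
}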