What is the purpose of 'resource' in an AWS resource policy?

As per the title, what is the purpose of the Resource field when defining a resource policy, given that the policy is already going to be attached to a particular resource?
For example, in this AWS tutorial, the following policy is defined and attached to a queue. What is the purpose of the Resource field?
{
  "Version": "2008-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" }
      }
    }
  ]
}

S3 is a good example of where you need to include the Resource element in the policy. Let's say you want to have an upload location in an S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Upload",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/uploads/*"]
    }
  ]
}
In cases like this you really don't want the Resource to default to the whole bucket, as that could accidentally grant global access. It is better to make sure the user clearly understands what access is being allowed or denied.
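For contrast, here is a sketch of what that statement would mean if the Resource were defaulted to all objects in the bucket (same hypothetical examplebucket as above): anyone could now write any key in the bucket, not just keys under uploads/.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadDefaulted",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}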
But why make it required for resource policies where it isn't needed, like SQS? For that, let's dive into how resource policies are used.
You can grant access to a resource in two ways:
Identity-based policies for IAM principals (users and roles).
Resource-based policies.
The important part to understand is how resource policies are used. Resource policies are actually consumed by IAM in the policy evaluation logic for authorization. To put it another way, resources are not responsible for the actual authorization; that is left to IAM (Identity and Access Management).
Since IAM requires that every policy statement have a Resource or NotResource element, the service would need to add the resource when sending the policy to IAM if it were missing. So let us look at the implications, from a design perspective, of having the service add the resource when it is missing:
The service would no longer just have to verify that the policy is correct.
If the resource were missing from a statement, the service would need to update the policy before sending it to IAM.
There would now be the potential for two different versions of a resource policy: the one the user created for editing, and the one sent to IAM.
It increases the potential for user error and accidentally opening up access by attaching a policy to the wrong resource. If we take the policy statement in the question and drop the Resource and Condition elements, we have a pretty open policy. This could easily be attached to the wrong resource, especially from the CLI or Terraform.
{
  "Sid": "example-statement-ID",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "*"
  ]
}
Note: I answered this from a general design perspective, based on my understanding of how AWS implements access management. How AWS actually implemented the system might be a little different, but I doubt it, because policy evaluation logic really needs to be optimized for performance, so it's better to do that in one service, IAM, than in each service.
Hope that helps.
Extra reading if you are interested in the details of the Policy Evaluation Logic.
You can be denied access in 6 ways:
Identity policies
Resource policies
Organization policies, if your account is part of an organization
IAM permissions boundaries, if set
Session policies, if an assumed-role session was used
Implicitly, if there was no policy that allows the action
The complete IAM policy evaluation logic workflow (flowchart) is in the IAM User Guide: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
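To illustrate one of those deny paths, here is a minimal sketch of a permissions boundary (a hypothetical customer-managed policy; anything outside it is implicitly denied for the principal, even if an identity policy allows the action):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryS3ReadOnly",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}

A principal with this boundary can at most read S3, no matter how broad their identity policies are.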

There is the policy as you defined it.
The resource the policy is applied to: A. I don't know where you will apply it.
The resource named in the policy: B, arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE.
Once you apply the policy to some service, such as an EC2 instance (that is A), the instance can only perform SQS:SendMessage against resource B. A and B are totally different.
If you want to restrict resource A so that it cannot reach arbitrary resources but can only access the ones you define, then you have to name those resources, such as B, in the policy.
Your policy is only valid for resource B, which is not the resource A that you applied it to.
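For an SQS queue policy specifically, the Resource (B) is normally the ARN of the very queue the policy is attached to (A). A sketch with a hypothetical region, account, and queue name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSendFromBucket",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111122223333:my-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" }
      }
    }
  ]
}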

Related

AWS IAM assuming same role with session tag for tenant isolation

I am working on a serverless app powered by API Gateway and AWS Lambda. Each Lambda has a separate role for least-privilege access. For tenant isolation, I am working on ABAC with IAM.
Here is an example of the role that provides GetObject access to an S3 bucket, with <TenantID> as the key prefix.
Role Name: test-role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      // Same role ARN: ability to assume itself
      "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
    }
  ]
}
I am assuming the same role in the Lambda, but with a session tag:
// Using the AWS SDK for JavaScript v2 STS client
const credentials = await sts.assumeRole({
  RoleSessionName: 'hello-world',
  Tags: [{
    Key: 'TenantID',
    Value: 'tenant-1',
  }],
  RoleArn: 'arn:aws:iam::<aws-account-id>:role/test-role'
}).promise();
I am trying to achieve ABAC with a single role instead of two (one role with just the assume-role permission, another with the actual S3 permissions), so that it is easier to manage the roles and we won't hit the hard limit of 5,000.
Is it good practice to do this, or does this approach have a security vulnerability?
It should work, but it feels a bit strange to re-use the role like this. It would make more sense to me to have a role for the Lambda function and a role for the S3 access that the Lambda function uses (for a total of two roles).
Also make sure that you're not relying on user input for the TenantID value in your code, because that could be abused to access another tenant's objects.
TLDR: I would not advise you to do this.
Ability to assume itself
I think there is some confusion here. The JSON document is a policy, not a role. A policy in AWS is a security statement of who has access to what under what conditions. A role is just an abstraction of a "who".
As far as I understand the question, you don't need two roles to do what you need to do. But you will likely need two policies.
There are two types of policies in AWS, of interest to this question: Identity based policies and Resource Based policies:
Identity-based policies are attached to some principal, which could be a role.
Resource-based policies are attached to a resource - which also could be a role!
A common use case of roles & policies is for permission delegation. In this case, we have:
A Role, that other principals can assume, maybe temporarily
A trust policy, which controls who can assume the role, under what conditions, and what actions they can take in assuming it. The trust policy is a special case of a resource policy, where the resource is the role itself.
A permissions policy, which is granted to anyone who assumes the role. This is a special case of an identity policy, which is granted based on the assumption of a role.
Key point: both policies are associated to the same role. There is one role, two policies.
Now, let's take a look at your policy. Clearly, it's trying to be two things at once: both a permissions policy and a trust policy for the role in question.
This part of it is trying to be the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole"
  ],
  // Same role ARN: ability to assume itself
  "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
}
Since the "Principal" section is missing, looks like it's allowing anyone to assume this role. Which looks a bit dodgy to me, especially since one of your stated goals was "least privilege access".
This part is trying to be the permissions policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
This doesn't need a "Principal" section, because it's an identity policy.
Presumably you're reusing that policy as both the trust policy and the permissions policy for the given role. It seems you want to avoid hitting the policy (not role) maximum quota of 5,000 defined here:
Customer managed policies in an AWS account
Even if it somehow worked, it doesn't make sense and I wouldn't do it. For example, think about the trust policy. The trust policy is supposed to be a resource-based policy attached to the role; the role is the resource. So specifying a "Resource" in the policy doesn't make sense, like so:
"Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
Even worse is the inclusion of this in the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
What does that even mean?
Perhaps I'm misunderstanding the question, but from what I understand, my advice would be:
Keep your one role - that's OK
Create two separate policies: a trust policy & a permissions policy
Consider adding a "Principal" element to the trust policy
Attach the trust & permissions policies to the role appropriately
Explore other avenues to avoid exceeding the 5000 policy limit

Significance of resource in Resource Based Policy

I am trying to understand resource-based policies in IAM.
I understand that they are attached to a resource, like S3, KMS, Secrets Manager, etc.
My question is: what is the significance of Resource in a resource-based policy?
For example, here is a permissions policy for AWS Secrets Manager (https://aws.amazon.com/premiumsupport/knowledge-center/secrets-manager-resource-policy/):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:*",
      "Principal": {"AWS": "arn:aws:iam::123456789999:user/Mary"},
      "Resource": "*"
    }
  ]
}
Here the Resource is *, or it can be the ARN of the secret. (Is there any other value allowed in this case?) For S3, I can think of the root bucket or other prefixes.
So my question is: what is the use case for Resource here? Please let me know if I am reading it wrong.
Thanks in advance.
Looking in the User Guide, you can see:
Resource: which secrets they can access. See Secrets Manager resources.
The wildcard character (*) has different meaning depending on what you attach the policy to:
In a policy attached to a secret, * means the policy applies to this secret.
In a policy attached to an identity, * means the policy applies to all resources, including secrets, in the account.
So when the policy is attached to the secret, * effectively just means that secret, but when you attach a policy to an identity, the Resource element becomes more useful: you can give different identities different action permissions on various secrets.
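For example, an identity-based policy can scope a principal down to a single secret by ARN, which is something a resource-based policy on the secret itself has no need for. A sketch with a hypothetical region and secret name (real secret ARNs also carry a random suffix):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789999:secret:prod/db-password"
    }
  ]
}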
Resource is the resource that the policy refers to. It allows for more fine-grained control over policies.
Take an example:
You host several DynamoDB tables, each of which has multiple indexes. You want to grant users in group A access to some of the tables, along with their indexes.
You want to give users in group B access to a single table, but none of its indexes.
And you want to give users in group C access to a single table, along with all 3 of its indexes.
When you specify the resource in the policy for group A:
"Resource": ["<table-a-arn>", "<table-b-arn>", "<table-b-arn>/index/gsi1"]
The resource in the policy for group B:
"Resource": "<table-c-arn>"
And for group C:
"Resource": ["<table-a-arn>", "<table-a-arn>/index/*"]
Another use case is for explicit denies. An explicit deny always overrides an allow. If you grant full access to EC2 in an account with a policy that allows EC2 permissions on "Resource": "*", but there is a single instance that the entity the policy applies to must not touch, you would also add a Deny statement to the policy with "Resource": "<some-super-private-instance-arn>".
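A sketch of that pattern, with a hypothetical account ID and instance ARN:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllEc2",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Sid": "DenySuperPrivateInstance",
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0"
    }
  ]
}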

Allow developers to create AWS Lambda or SAM without granting Administrator access

It seems to be impossible to allow developers to create Lambdas and create or maintain SAM applications in AWS without essentially attaching AdministratorAccess policies to their developer role. AWS documents a suggested IAM setup in which everyone is simply an administrator, or only has IAMFullAccess, or has an even more specific set of permissions containing "iam:AttachRolePolicy", all of which still boils down to having enough access to grant the AdministratorAccess policy to anyone at will with just one API call.
Besides creating a new AWS account for each SAM or Lambda deployment, there doesn't seem to be any secure way to manage this, but I really hope I'm missing something obvious. Perhaps someone knows of a combination of tags, permissions boundaries, and IAM paths that would alleviate this?
The documentation I refer to: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-permissions.html which opens with:
There are three main options for granting a user permission to manage
serverless applications. Each option provides users with different
levels of access control.
Grant administrator permissions.
Attach necessary AWS managed policies.
Grant specific AWS Identity and Access Management (IAM) permissions.
Further down, a sample application is used to specify slightly more specific permissions:
For example, the following AWS managed policies are sufficient to
deploy the sample Hello World application:
AWSCloudFormationFullAccess
IAMFullAccess
AWSLambda_FullAccess
AmazonAPIGatewayAdministrator
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
And at the end of the document, an AWS IAM policy document describes a rather lengthy set of permissions, which still contains the mentioned "iam:AttachRolePolicy" permission with a wildcard resource for the roles it may be applied to.
AWS has a PowerUserAccess managed policy which is meant for developers. It gives them access to most services and no access to admin activities, including IAM, Organizations, and account management.
You can create an IAM Group for developers (Say Developers) and add the managed policy PowerUserAccess to the group. Add developers to this group.
For deploying with SAM, the developers need a few IAM permissions to create and tag roles, and while rolling back a CloudFormation stack, they may need a few delete permissions. When allowing the developers to create new roles for Lambda functions, you need to ensure they can't escalate privileges, which is what a permissions boundary is for. A good starting point is to set the permissions boundary to PowerUserAccess (until you figure out the right level of permissions).
Create a Policy something like this
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadRole",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListRoleTags"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "TagRole",
      "Effect": "Allow",
      "Action": [
        "iam:UntagRole",
        "iam:TagRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "WriteRole",
      "Effect": "Allow",
      "Action": [
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole",
        "iam:DetachRolePolicy"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "CreateRoleWithPermissionsBoundary",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::aws:policy/PowerUserAccess"
        }
      }
    }
  ]
}
Note: this assumes the Lambda function names in the SAM template contain the word Function. (Replace the AWS account number in the ARNs.)
Now you can attach the above policy to the Developers IAM Group. (This would give the SAM deployment permissions to all the developers)
Or you can create another IAM Group for SAM developers (Say SAM-Developers) and attach the above policy to the SAM-Developers group. Now add the appropriate developers (who need to deploy using SAM) to this new IAM group (SAM-Developers).
Define the Permissions Boundary in the SAM templates as well.
Here is an example PermissionsBoundary in SAM template.
Globals:
  Function:
    Timeout: 15
    PermissionsBoundary: arn:aws:iam::aws:policy/PowerUserAccess
With that, the developers should be able to deploy using SAM, provided they do not have a more restrictive permissions boundary themselves.
You can set the permissions boundary for the developers to AdministratorAccess, or create a new policy which combines the permissions of PowerUserAccess with the policy defined above for SAM deployments, and then set that new policy as the developers' permissions boundary.
This solution is for reference, and you can build upon it. PowerUserAccess has been set here as the permissions boundary for the Lambda function roles; it is too permissive, and you should work from this to find the right level of permissions for your developers and your Lambda functions.
Sidenote: You can use this policy to allow the users to manage their own credentials.
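The linked policy isn't reproduced here, but a minimal sketch of such a self-service policy, assuming the standard ${aws:username} variable pattern from the AWS docs, would be:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageOwnCredentials",
      "Effect": "Allow",
      "Action": [
        "iam:ChangePassword",
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey",
        "iam:ListAccessKeys",
        "iam:UpdateAccessKey"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:user/${aws:username}"
    }
  ]
}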

AWS IAM policy permissions clash issue

I have tried to create a policy that will prevent the deregistration of AMIs unless the AMI has the appropriate "delete this" tag. When I run the IAM policy simulator, the policy doesn't seem to work and the AMIs are still allowed to be deregistered, because the users are already associated with policies that are more permissive than my new policy.
Is it possible to make my custom policy take priority over other policies? Or do I have to create new policies that explicitly do not have the Deregister AMI permission?
The following IAM policy will deny deregistration of an AMI (just replace the Resource with your concrete ARN) when the AMI does not have the "delete" tag or when that tag's value is not "yes". This works regardless of any Allow permissions that the calling identity might have.
This is because policy statements with the "Deny" effect always take precedence over any Allow permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:DeregisterImage",
      "Resource": "arn:aws:ec2:*::image/*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceTag/delete": "yes"
        }
      }
    }
  ]
}
Read this page for the detailed algorithm IAM uses to evaluate permissions: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

One IAM Role across multiple AWS accounts

For security reasons, we have a pre-prod and a prod AWS account. We're now beginning to use IAM roles for S3 access to js/css files through django-storage / boto.
While this is working correctly on a per-account basis, a need has now arisen for the QA instance to access one S3 bucket in the prod account.
Is there a way to have one IAM role that can grant access to both the pre-prod and prod S3 buckets? As I write this it seems impossible, but it never hurts to ask!
Here's the AWS doc on this: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
Essentially, you have to delegate permissions to one account from the other account using the Principal block of your bucket policy, and then set up your IAM user in the second account as normal.
Example bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-ID>:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
This works well for read-only access, but there can be issues with write access. Primarily, the account writing the object will still be the owner of that object. When dealing with write permissions, you'll usually want to make sure the account owning the bucket still has access to objects written by the other account, which requires the object to be written with a particular header: x-amz-grant-full-control.
You can set up your bucket policy so that the bucket will not accept cross-account objects that do not supply this header. There's an example of that at the bottom of this page: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html (under "Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control")
This makes use of a conditional Deny clause in the bucket policy, like so:
{
  "Sid": "112",
  "Effect": "Deny",
  "Principal": { "AWS": "1111111111" },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": { "s3:x-amz-grant-full-control": ["emailAddress=xyz@amazon.com"] }
  }
}
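A related variant, also shown in the AWS docs, conditions on the canned ACL header instead; here is a sketch for the same hypothetical bucket and account:

{
  "Sid": "DenyUnlessBucketOwnerFullControl",
  "Effect": "Deny",
  "Principal": { "AWS": "1111111111" },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
  }
}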
I generally avoid cross-account object writes, myself...they are quite fiddly to set up.