In our team, we have both our production and development stacks in the same AWS account. These stacks are distinguished by their resource names. For example, we have an S3 bucket example-dev-bucket and example-prod-bucket. All these resources are thus also distinguishable by their ARN, e.g. arn:aws:s3:::example-dev-bucket and arn:aws:s3:::example-prod-bucket. Now I want to create an IAM role that grants access to all resources except for production resources.
Granting access to all resources is easy; I add a policy with the following statement:
Effect: Allow
Action:
  - '*'
Resource:
  - '*'
After allowing all resources, I want to add a policy to deny the production resources. Doing this only for S3 resources works fine, as below.
Effect: Deny
Action:
  - '*'
Resource:
  - 'arn:aws:s3:::*-prod-*'
However, doing this for multiple services all at once does not seem to be valid syntax. I have tried patterns like *-prod-* and arn:aws:*:*:*:*:*-prod-*.
A possible solution for me is to add each service just like I added the S3 service. However, it's easy to forget services. Rather, I would like a single line that includes all resources that have -prod- in their ARN.
You can make use of wildcards in resource names to accomplish this.
For example, if you include environment terms like "Prod" and "Dev" in all your resource names, you can create policies matching those terms.
This could be a policy for a Dev role for DynamoDB:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:account-id:*Dev*"
    },
    {
      "Effect": "Deny",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:account-id:*Prod*"
    }
  ]
}
You can do the same for S3 and any other service.
However, it's easy to forget services.
Without using tags, there isn't an easy way to cover all services with one policy. The wildcard can't be used in place of the resource itself (e.g. arn:aws:*).
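If you do tag your resources consistently, a single service-agnostic deny is possible via the aws:ResourceTag global condition key. This is only a sketch, assuming a hypothetical env=prod tag on production resources, and note that not every service and action supports aws:ResourceTag:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/env": "prod"
        }
      }
    }
  ]
}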
Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_resource.html#reference_policies_elements_resource_wildcards
I am working on a serverless app powered by API Gateway and AWS Lambda. Each Lambda has a separate role for least-privilege access. For tenant isolation, I am working on ABAC with IAM.
Here is an example of the role that provides GetObject access to an S3 bucket, using <TenantID> as the key prefix.
Role Name: test-role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      // Same role ARN: Ability to assume itself
      "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
    }
  ]
}
I am assuming the same role in the Lambda, but with the session tag set as follows:
const credentials = await sts.assumeRole({
  RoleSessionName: 'hello-world',
  Tags: [{
    Key: 'TenantID',
    Value: 'tenant-1',
  }],
  RoleArn: 'arn:aws:iam::<aws-account-id>:role/test-role'
}).promise();
I am trying to achieve ABAC with a single role instead of two (one role with just the assume-role permission, another role with the actual S3 permission), so that it is easier to manage the roles and I also won't reach the hard limit of 5000.
Is it good practice to do so, or does this approach have a security vulnerability?
It should work, but feels a bit strange to re-use the role like this. It would make more sense to me to have a role for the lambda function, and a role for the s3 access that the lambda function uses (for a total of two roles).
Also make sure that you're not relying on user input for the TenantID value in your code, because it could be abused to access another tenant's objects.
TLDR: I would not advise you to do this.
Ability to assume itself
I think there is some confusion here. The JSON document is a policy, not a role. A policy in AWS is a security statement of who has access to what under what conditions. A role is just an abstraction of a "who".
As far as I understand the question, you don't need two roles to do what you need to do. But you will likely need two policies.
There are two types of policies in AWS that are of interest to this question: identity-based policies and resource-based policies:
Identity-based policies are attached to some principal, which could be a role.
Resource-based policies are attached to a resource - which also could be a role!
A common use case of roles & policies is for permission delegation. In this case, we have:
A role, which other principals can assume, perhaps temporarily
A trust policy, which controls who can assume the role, under what conditions, and what actions they can take in assuming it. The trust policy is a special case of a resource policy, where the resource is the role itself.
A permissions policy, which is granted to anyone who assumes the role. This is a special case of an identity policy, which is granted based on the assumption of a role.
Key point: both policies are associated to the same role. There is one role, two policies.
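To make this concrete, here is a rough sketch of the two policies for a single role, reusing the S3 ARN from the question (the Lambda service principal is an assumption for illustration, not necessarily what your setup needs).
The trust policy (a resource-based policy attached to the role itself; note the Principal element and the absence of a Resource element):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
The permissions policy (an identity-based policy attached to the same role; note the Resource element and the absence of a Principal element):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
    }
  ]
}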
Now, let's take a look at your policy. Clearly, it's trying to be two things at once: both a permissions policy and a trust policy for the role in question.
This part of it is trying to be the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole"
  ],
  // Same role ARN: Ability to assume itself
  "Resource": "arn:aws:iam::<aws-account-id>:role/test-role"
}
Since the "Principal" section is missing, looks like it's allowing anyone to assume this role. Which looks a bit dodgy to me, especially since one of your stated goals was "least privilege access".
This part is trying to be the permissions policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
This doesn't need a "Principal" section, because it's an identity policy.
Presumably you're reusing that policy as both the trust policy and the permissions policy for the given role. It seems like you want to avoid hitting the policy (not role) quota of 5,000 defined here:
Customer managed policies in an AWS account
Even if somehow it worked, it doesn't make sense and I wouldn't do it. For example, think about the trust policy. The trust policy is supposed to be a resource-based policy attached to the role. The role is the resource. So specifying a "Resource" in the policy doesn't make sense, like so:
"Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
Even worse is the inclusion of this in the trust policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/${aws:PrincipalTag/TenantID}/*"
},
What does that even mean?
Perhaps I'm misunderstanding the question, but from what I understand my advice would be:
Keep your one role - that's OK
Create two separate policies: a trust policy & a permissions policy
Consider adding a "Principal" element to the trust policy
Attach the trust & permissions policies to the role appropriately
Explore other avenues to avoid exceeding the 5000 policy limit
I need some help related to creating AWS policies.
I need a policy attached to an EC2 instance that only allows get-parameters-by-path for specific parameters in the AWS SSM Parameter Store, without being able to change anything (Delete, Create, etc.); it should only be able to get the values.
This policy's specificity will be given through tags.
Here's my policy I'm trying to use:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Deny",
      "Action": [
        "ssm:PutParameter",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:DeleteParameter",
        "ssm:GetParameterHistory",
        "ssm:DeleteParameters",
        "ssm:GetParametersByPath"
      ],
      "Resource": ["*"],
      "Condition": {
        "StringNotEquals": {
          "ssm:resourceTag/env": "development-1"
        }
      }
    }
  ]
}
Using the AWS Policy Simulator, it reports a denial when trying to view, create, modify, or delete parameters tagged "ssm:resourceTag/env": "development-2", while for other projects tagged "ssm:resourceTag/env": "development-1" it is possible to modify, view, etc.
However, when attaching the same policy to an EC2 instance, the policy blocks all of the actions listed in the Deny statement.
Messages reported from EC2:
/development-1/project-1
aws --region us-east-2 ssm get-parameters-by-path --path /development-1/project-1/ --recursive --with-decryption --output text --query "Parameters[].[Value]"
An error occurred (AccessDeniedException) when calling the GetParametersByPath operation: User: arn:aws:sts::111111111:assumed-role/rule-ec2/i-11111111111 is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:us-east-2:11111111111:parameter/development-1/project-1/ with an explicit deny
/development-2/project-2
aws --region us-east-2 ssm get-parameters-by-path --path /development-2/project-2/ --recursive --with-decryption --output text --query "Parameters[].[Value]"
An error occurred (AccessDeniedException) when calling the GetParametersByPath operation: User: arn:aws:sts::11111111111:assumed-role/rule-ec2/i-11111111111 is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:us-east-2:11111111111:parameter/development-2/project-2/ with an explicit deny
Tags used:
key=value
/development-1/project-1:
env=development-1
/development-2/project-2:
env=development-2
What am I doing wrong?
You can't set a condition key based on the tag of the parameter for any of the ssm:GetParameter* IAM actions because the API doesn't (currently) support condition keys.
Instead, you can restrict by the ARN of the parameter. The general practice with SSM Parameter Store is to use a hierarchical path for the parameters, which lets you restrict access via IAM, optionally with a wildcard where you are happy for a consumer to access anything under that path.
So a common pattern might be to have some structure that looks something like this:
/production/foo-service/database/password
/production/foo-service/bar-service/api-key
/production/bar-service/database/password
/production/bar-service/foo-service/api-key
/development/foo-service/database/password
/development/foo-service/bar-service/api-key
/development/bar-service/database/password
/development/bar-service/foo-service/api-key
Then for your foo-service running in production you give it an IAM role with the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter*"],
      "Resource": ["arn:aws:ssm:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:parameter/production/foo-service/*"]
    }
  ]
}
This will allow the foo-service in production to access both /production/foo-service/database/password and /production/foo-service/bar-service/api-key, but won't allow access to any of the other parameters or the ability to modify or delete the parameters.
As mentioned in Marcin's answer, you don't need to add a deny statement to your IAM policy here, because the default in IAM is to deny unless something has explicitly been given an allow statement.
The exception to this would be if you are doing something more complex, where you want to grant broad access to a wide range of things but then block a small subset. This blog post talks about how the default ReadOnlyAccess policy is too permissive for their organisation, so they restrict access with a deny statement on the things they don't want to expose so openly. They could also go the opposite way and never use that AWS managed policy, instead maintaining a very broad set of actions across a lot of services themselves, which might be considered safer but is also potentially a lot of work.
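As a sketch of that pattern (assuming the AWS managed ReadOnlyAccess policy is attached, and using a hypothetical sensitive bucket purely as an example), the extra deny statement could look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::example-sensitive-bucket/*"
    }
  ]
}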
You are using Deny for ssm:GetParametersByPath, so it will always be denied. Deny always takes priority over any allow.
But in your case, since it's an instance profile, your policy doesn't have to be that complex. By default, everything is implicitly denied, so you only need an explicit allow:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParametersByPath"],
      "Resource": ["*"]
    }
  ]
}
For Resource, you can list only the SSM parameters that you want to allow access to, rather than all of them.
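For example, using the paths from the question, the allow could be narrowed to something like this (a sketch; ACCOUNT-ID is a placeholder, and you would adjust the region and paths to your setup):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParametersByPath"],
      "Resource": ["arn:aws:ssm:us-east-2:ACCOUNT-ID:parameter/development-1/*"]
    }
  ]
}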
It seems to be impossible to allow developers to create Lambdas and create or maintain SAM applications in AWS without essentially having AdministratorAccess policies attached to their developer role. AWS documents a suggested IAM setup where everyone is simply an administrator, or only has IAMFullAccess, or an even more specific set of permissions containing "iam:AttachRolePolicy", which all boils down to still having enough access to grant the AdministratorAccess permission to anyone at will with just one API call.
Besides creating a new AWS account for each SAM or Lambda deployment, there doesn't seem to be any secure way to manage this, but I really hope I'm missing something obvious. Perhaps someone knows of a combination of tags, permission boundaries and IAM paths that would alleviate this?
The documentation I refer to: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-permissions.html which opens with:
There are three main options for granting a user permission to manage
serverless applications. Each option provides users with different
levels of access control.
Grant administrator permissions.
Attach necessary AWS managed policies.
Grant specific AWS Identity and Access Management (IAM) permissions.
Further down, a sample application is used to specify slightly more specific permissions:
For example, the following AWS managed policies are sufficient to
deploy the sample Hello World application:
AWSCloudFormationFullAccess
IAMFullAccess
AWSLambda_FullAccess
AmazonAPIGatewayAdministrator
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
And at the end of the document, an IAM policy document describes a set of permissions which is rather lengthy, but still contains the mentioned "iam:AttachRolePolicy" permission with a wildcard resource for the roles it may be applied to.
AWS has a PowerUserAccess managed policy which is meant for developers. It gives them access to most of the services and no access to admin activities including IAM, Organization and Account management.
You can create an IAM Group for developers (Say Developers) and add the managed policy PowerUserAccess to the group. Add developers to this group.
For deploying with SAM, the developers would need a few extra IAM permissions to create and tag roles. While rolling back a CloudFormation stack, they may also need a few delete permissions. When allowing the developers to create new roles for Lambda functions, you need to ensure they can't escalate privileges, by using a permissions boundary. A good starting point, again, would be to set the permissions boundary to PowerUserAccess (until you figure out the right level of permissions).
Create a policy something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadRole",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListRoleTags"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "TagRole",
      "Effect": "Allow",
      "Action": [
        "iam:UntagRole",
        "iam:TagRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "WriteRole",
      "Effect": "Allow",
      "Action": [
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy",
        "iam:PassRole",
        "iam:DetachRolePolicy"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*"
    },
    {
      "Sid": "CreateRoleWithPermissionsBoundary",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole"
      ],
      "Resource": "arn:aws:iam::ReplaceWithYourAWSAccountNumber:role/*FunctionRole*",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::aws:policy/PowerUserAccess"
        }
      }
    }
  ]
}
Note: this assumes the Lambda function names in the SAM template contain the word Function. (Replace the AWS account number in the ARNs.)
Now you can attach the above policy to the Developers IAM Group. (This would give the SAM deployment permissions to all the developers)
Or you can create another IAM Group for SAM developers (Say SAM-Developers) and attach the above policy to the SAM-Developers group. Now add the appropriate developers (who need to deploy using SAM) to this new IAM group (SAM-Developers).
Define the Permissions Boundary in the SAM templates as well.
Here is an example PermissionsBoundary in a SAM template:
Globals:
  Function:
    Timeout: 15
    PermissionsBoundary: arn:aws:iam::aws:policy/PowerUserAccess
With that, the developers should be able to deploy using SAM provided they do not have any restrictive permission boundary.
You can set the permission boundary to AdministratorAccess for the developers or create a new Policy which combines the permissions of PowerUserAccess and the above defined policy for 'SAM' deployments. Then set this new Policy as the permission boundary for the developers.
This solution is for reference and you can build upon it. PowerUserAccess has been set as the permissions boundary for the Lambda function roles; PowerUserAccess is too permissive, and you should work further to find the right level of permission for your developers and the Lambda functions.
Sidenote: You can use this policy to allow the users to manage their own credentials.
As per the title, what is the purpose of the Resource field when defining a resource policy, given that the resource policy is already going to be applied to a particular resource?
For example, in this AWS tutorial, the following policy is defined and attached to a queue. What is the purpose of the Resource field?
{
  "Version": "2008-10-17",
  "Id": "example-ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "SQS:SendMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" }
      }
    }
  ]
}
S3 is a good example of where you need to include the resource statement in the policy. Let's say you want to have an upload location in an S3 bucket.
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"Upload",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:PutObject"],
      "Resource":["arn:aws:s3:::examplebucket/uploads/*"]
    }
  ]
}
In these cases you really don't want to default the Resource to the bucket as it could accidentally cause global access. It is better to make sure the user clearly understands what access is being allowed or denied.
But why make it required for resource policies where it isn't needed, as with SQS? For this, let's dive into how resource policies are used.
You can grant access to a resource in two ways (a short sketch contrasting them follows this list):
Identity based policies for IAM principals (users and roles).
Resource based policies
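As a quick sketch of the difference, using the queue from the question (illustrative only): an identity-based statement attached to a principal needs a Resource but no Principal, while the equivalent resource-based statement on the queue needs a Principal.
Identity-based statement:
{
  "Effect": "Allow",
  "Action": "SQS:SendMessage",
  "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE"
}
Resource-based statement on the queue:
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::ACCOUNT-ID:root" },
  "Action": "SQS:SendMessage",
  "Resource": "arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE"
}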
The important part to understand is how resource policies are used. Resource policies are actually used by IAM in the policy evaluation logic for authorization. To put it another way, resources are not responsible for the actual authorization; that is left to IAM (Identity and Access Management).
Since IAM requires that every policy statement have a Resource or NotResource, the service would need to add the resource when sending the policy to IAM if it were missing. So let us look at the implications, from a design perspective, of having the service add the resource when it is missing:
The service would no longer need to just verify that the policy is correct.
If the resource is missing from the statement, the service would need to update the policy before sending it to IAM.
There is now the potential for two different versions of a resource policy: the one the user created for editing and the one sent to IAM.
It increases the potential for user error and accidentally opening up access by attaching a policy to the wrong resource. If we modify the policy statement in the question and drop the Resource and Condition elements, we have a pretty open policy. This could easily be attached to the wrong resource, especially from the CLI or Terraform.
{
  "Sid": "example-statement-ID",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": [
    "*"
  ]
}
Note that I answered this from a general design perspective, based on my understanding of how AWS implements access management. How AWS implemented the system might be a little different, but I doubt it, because policy evaluation logic really needs to be optimized for performance, so it's better to do that in one service, IAM, instead of in each service.
Hope that helps.
Extra reading if you are interested in the details of the Policy Evaluation Logic.
You can deny access in six ways:
Identity Policy
Resource policies
Organizational policies, if your account is part of an organization
IAM permission boundaries if set
Session policies, if used when assuming a role
Implicitly if there was no allow policy
Here is the complete IAM policy evaluation logic workflow: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html
There is the policy as you defined it.
The resource the policy is applied to: A (I don't know where you will apply this).
The resource in the policy: B, arn:aws:sqs:REGION:ACCOUNT-ID:QUEUENAMEHERE.
Once you apply the policy to some service, such as an EC2 instance (that is A), the instance can only do SQS:SendMessage against resource B. A and B are totally different.
If you want to restrict resource A so that it cannot access other resources and can only access the defined resources, then you have to define those resources, such as B, in the policy.
Your policy is only valid for that resource B, and B is not the resource A that you applied it to.
I've been testing my continuous deployment setup, trying to get to a minimal set of IAM permissions that will allow my CI IAM group to deploy to my "staging" Elastic Beanstalk environment.
On my latest test, my deployment got stuck. The last event in the console is:
Updating environment staging's configuration settings.
Luckily, the deployment will time out after 30 minutes, so the environment can be deployed to again.
It seems to be a permissions issue, because if I grant s3:* on all resources, the deployment works. It seems that when calling UpdateEnvironment, Elastic Beanstalk does something to S3, but I can't figure out what.
I have tried the following policy to give EB full access to its resource bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP/*",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID",
        "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID/*"
      ]
    }
  ]
}
Where REGION, ACCOUNT, APP, and ENV_ID are my AWS region, account number, application name, and environment ID, respectively.
Does anyone have a clue which S3 action and resource EB is trying to access?
Shared this on your blog already, but this might have a broader audience, so here goes:
Following up on this, the Elastic Beanstalk team has provided me with the following answer regarding the S3 permissions:
"[...]Seeing the requirement below, would a slightly locked down version work? I've attached a policy to this case which will grant s3:GetObject on buckets starting with elasticbeanstalk. This is essentially to allow access to all elasticbeanstalk buckets, including the ones that we own. The only thing you'll need to do with our bucket is a GetObject, so this should be enough to do everything you need."
So it seems like Elastic Beanstalk is accessing buckets outside of your own account (including ones owned by the service itself) in order to work properly (which is kind of bad, but that's just the way it is).
Coming from this, the following policy will be sufficient for getting things to work with S3:
{
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
    "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
  ],
  "Effect": "Allow"
},
{
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::elasticbeanstalk*",
  "Effect": "Allow"
}
Obviously, you need to wrap this into a proper policy document that IAM understands. All your previous assumptions about IAM policies have proven right, though, so I'm guessing this shouldn't be an issue.
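For reference, wrapped into a complete policy document it would look roughly like this (same statements as above; substitute your own region and account ID):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::elasticbeanstalk*",
      "Effect": "Allow"
    }
  ]
}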