What I am trying to do
Have a Lambda function access (with Read and Write permissions) a specific Bucket I own.
Just that one Bucket, though; it doesn't need to access anything else.
What I've done
I set up a public Bucket (that's intentional; I do want its contents to be accessible to anybody), named orca-resources
I created an IAM Role named lamba-s3-orca-resources. It is set up to be used by the Lambda service.
I created a Lambda function that will eventually be triggered by API Gateway
I wrote the least code I could to try to break my policies: the function accesses another Bucket of mine, one that my lamba-s3-orca-resources IAM Role is not explicitly allowed to access
Testing the function actually yields the contents of the Bucket I want to be unavailable. Also, s3:listObjects isn't even in my Allowed Actions.
What am I overlooking?
The Lambda function
Its code:
'use strict';
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = (event, context, callback) => {
    // Deliberately target a bucket the role's policy does not mention
    s3.listObjects({ Bucket: 'orca-exe' }, callback);
};
Its Execution Role:
{
    "roleName": "lamba-s3-orca-resources",
    "policies": [
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Sid": "VisualEditor0",
                        "Effect": "Allow",
                        "Action": [
                            "s3:PutObject",
                            "s3:GetObject",
                            "s3:GetObjectTagging",
                            "s3:ListBucket",
                            "s3:DeleteObject"
                        ],
                        "Resource": [
                            "arn:aws:s3:::orca-resources/*",
                            "arn:aws:s3:::orca-resources"
                        ]
                    }
                ]
            },
            "name": "orca-resources-only",
            "type": "inline"
        }
    ]
}
“AWS Lambda function seems to ignore its specified Execution Role”
In one sense, this is true and by design: neither the Lambda service nor your function ever looks at the permissions associated with the execution role, and Lambda isn't responsible for enforcing that policy.
There's a bit of black magic going on behind the scenes that stitches all of this together, and understanding how that works may help explain why the behavior you're seeing is expected and correct.
First things first, the Lambda Execution Role is an IAM Role.
Q: What is an IAM role?
An IAM role is an IAM entity that defines a set of permissions for making AWS service requests. IAM roles are not associated with a specific user or group. Instead, trusted entities assume roles, such as IAM users, applications, or AWS services such as EC2.
— https://aws.amazon.com/iam/faqs/
When you assume a role, you're issued a set of temporary credentials -- an access key and secret (similar to IAM user credentials) plus a security (session) token that conveys the associated privileges and must accompany the credentials whenever they are used to sign requests. In Lambda functions, this is all done automatically.
When your Lambda function is invoked, the Lambda service itself calls AssumeRole in the Security Token Service (STS).
AssumeRole
Returns a set of temporary security credentials (consisting of an access key ID, a secret access key, and a security token) that you can use to access AWS resources that you might not normally have access to.
— https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
This action -- which is allowed by your role's trust policy -- returns a set of credentials that are dropped into the environment of your Lambda container as environment variables with names like AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN (with the specific names dependent on the environment).
The AWS SDK then automatically uses these environment variables to sign all of the requests you generate within your Lambda function.
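You can observe this from inside a function. Here's a minimal sketch (for experimentation only -- the logged values are live temporary credentials, so don't log them in real code):

'use strict';
const aws = require('aws-sdk');

exports.handler = (event, context, callback) => {
    // Injected by the Lambda runtime when the execution role is assumed
    console.log(process.env.AWS_ACCESS_KEY_ID);
    console.log(process.env.AWS_SESSION_TOKEN);

    // Any SDK client created here picks those credentials up automatically;
    // no explicit configuration is needed to sign requests as the role.
    callback(null, 'done');
};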
Nothing in Lambda screens these requests against the role policy, for a couple of reasons. Doing so would be a much more complicated and vulnerability-prone solution than simply letting the normal authentication and authorization mechanisms of the destination service handle the request. Perhaps more importantly, the Lambda service can't even tell what actions your code is attempting via the AWS SDK, because those requests are transmitted to the service endpoint over HTTPS and therefore can't be inspected.
The service you're making a request against then authenticates the credentials and authorizes the request with help from IAM and STS -- determining whether the request signature is valid, the accompanying token is valid, and whether the attempted action against the specified resource is allowed.
That last bit is where your assumptions are blurry.
The question that then must be answered by the service handling the request is two-fold:
does the principal making the request have permission from its own account to make the request, and
does the principal making the request have permission from the account that owns the resource to make the request
These are two questions, but when the principal (the role) and the resource (the bucket) are owned by the same account, they can coalesce into a single question that is answered by combining the results from multiple places.
Access policy describes who has access to what. You can associate an access policy with a resource (bucket and object) or a user. Accordingly, you can categorize the available Amazon S3 access policies as follows:
Resource-based policies – Bucket policies and access control lists (ACLs) are resource-based because you attach them to your Amazon S3 resources.
User policies – You can use IAM to manage access to your Amazon S3 resources. You can create IAM users, groups, and roles in your account and attach access policies to them granting them access to AWS resources, including Amazon S3.
When Amazon S3 receives a request, it must evaluate all the access policies to determine whether to authorize or deny the request. For more information about how Amazon S3 evaluates these policies, see How Amazon S3 Authorizes a Request.
— https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-overview.html
This explains the apparently contradictory behavior, as you noted in comments:
“Also, the documentation here seems to pretty clearly indeed deny what's not explicitly allowed.”
The documentation is correct, and you're correct, except that you've failed to consider that what's explicitly allowed includes what the resource owner (the account owner for the bucket) has already explicitly allowed.
If the bucket owner has allowed everyone to perform a particular action... well, the execution role of your Lambda function is part of everyone. Therefore, a grant of access in the Lambda execution role policy would be redundant and unnecessary, because the account owner has already given that permission to everyone. The lack of an explicit Deny in the role policy means the explicit Allow in the bucket policy allows the proposed action to occur.
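To make that concrete: the question doesn't show the policy on orca-exe, but a hypothetical bucket policy like the following would produce exactly the observed behavior, because your execution role falls under Principal "*":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicList",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::orca-exe"
        }
    ]
}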
“Plus, the Policy Simulator does give me exactly the behaviour I expect.”
The policy simulator doesn't make real calls to real services, so it's necessary -- when a resource like a bucket has its own policies -- to explicitly include the policy for the resource itself in the simulation.
To use a resource-based policy in the simulator, you must include the resource in the simulation and select the check box to include that resource's policy in the simulation.
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html
Otherwise, you're only testing the role policy in isolation.
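The same applies to the CLI version of the simulator. As a sketch (the account ID and the policy file are placeholders), you would supply the bucket policy explicitly:

aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:role/lamba-s3-orca-resources \
    --action-names s3:ListBucket \
    --resource-arns arn:aws:s3:::orca-exe \
    --resource-policy file://orca-exe-bucket-policy.json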
Note that there is some confusion in the documentation about the S3 policy for listing the objects in a bucket. The action that allows this in IAM policies is s3:ListBucket, in spite of the fact that the Node.js SDK method call is listObjects(). There is an action in the policy simulator called ListObjects that is either part of planned/future functionality or simply an error, because as of the last check it didn't correspond to a valid IAM policy action for S3. In the S3 section of the IAM User Guide, s3:ListBucket is correctly hyperlinked to the List Objects action in the S3 API Reference, but s3:ListObjects is a circular hyperlink right back to the same page in the IAM User Guide (a link to nowhere). I've tried, so far without success, to find someone at AWS to explain or correct this discrepancy.
Related
I'm a newbie in AWS and I received this message while running a Lambda function. I've read the possible solutions here (The role defined for the function cannot be assumed by Lambda) but I didn't understand them. How should I proceed?
This means you have configured your Lambda to run using a role, but when it runs, the AWS Lambda service has not been granted permission to assume that role. Essentially, AWS Lambda needs to be granted permission to assume the role you chose.
Suppose you had a role with Administrator access, and suppose you are a non-administrator developer creating a Lambda. If you can create a Lambda and specify that Administrator role as the one to run it, you can effectively do anything an Administrator can do -- a security breach. If whoever owns the Administrator role wants to grant the AWS Lambda service permission to use the role, they are effectively granting you permission to run things as an Administrator. But unless and until they grant that permission, Lambda won't be able to run under that role.
See https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html. The role you want to use needs to allow lambda to assume it, which is done by this sort of policy:
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
That statement effectively says, "allow the Lambda service to assume this role." So to proceed, verify that the role you've chosen to run your Lambda has a policy like this: choose the role in IAM and look at the Trust Relationships tab. It needs to list lambda.amazonaws.com -- if it doesn't, edit it and add that.
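If you'd rather check programmatically, here's a minimal sketch with the Node.js SDK (the role name is a placeholder; substitute your own, and note that getRole returns the trust policy URL-encoded):

'use strict';
const aws = require('aws-sdk');
const iam = new aws.IAM();

iam.getRole({ RoleName: 'my-lambda-role' }, (err, data) => {
    if (err) throw err;
    // AssumeRolePolicyDocument comes back URL-encoded JSON
    const trust = JSON.parse(decodeURIComponent(data.Role.AssumeRolePolicyDocument));
    const trustsLambda = trust.Statement.some(s =>
        s.Effect === 'Allow' &&
        [].concat((s.Principal && s.Principal.Service) || []).includes('lambda.amazonaws.com'));
    console.log('Lambda can assume this role:', trustsLambda);
});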
Also, Lambdas run based on various triggers -- some triggered by a person, others by an event. Because events can trigger the Lambda to run, you must grant the AWS Lambda service itself the permission to use the role you have specified for the Lambda.
I am working on an AWS SAM project and I have a requirement to give multiple IAM users from unknown AWS accounts access to my S3 bucket, but I can't make the bucket publicly accessible. I want to secure my bucket while still letting any IAM user from any AWS account access its contents. Is this possible?
Below is the policy I tried, and it worked perfectly.
{
    "Version": "2012-10-17",
    "Id": "Policy1616828964582",
    "Statement": [
        {
            "Sid": "Stmt1616828940658",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::new-demo-bkt/*"
        }
    ]
}
The above policy is for one user, but I want any user from another AWS account to access my contents without making the bucket and objects public. How can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations, such as GetObject and PutObject. Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point. Each access point enforces a customized access point policy that works in conjunction with the bucket policy that is attached to the underlying bucket.
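For example, a sketch of an access point policy (the region, account ID, and the access point name demo-ap are placeholders) granting one external role read access through the access point:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/demo-ap/object/*"
        }
    ]
}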
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy then allows access based on those conditions.
You can also apply global IAM conditions to S3 policies.
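As a sketch of the idea (the aws:UserAgent value is a made-up marker your SAM application would send, and, per the warning below, it is easily spoofed):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::new-demo-bkt/*",
            "Condition": {
                "StringEquals": {
                    "aws:UserAgent": "my-sam-app/1.0"
                }
            }
        }
    ]
}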
This isn't great security, though: malicious actors might be able to figure out the headers and spoof requests to your bucket. As noted for some condition keys, such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is provided by the caller in an HTTP header, unauthorized parties can use modified or custom browsers to provide any aws:UserAgent value that they choose. As a result, aws:UserAgent should not be used to prevent unauthorized parties from making direct AWS requests. You can use it to allow only specific client applications, and only after testing your policy.
What exactly does this AWS role do?
The most relevant bits seem to be:
"Action": "sts:AssumeRole", and
"Service": "ec2.amazonaws.com"
The full role is here:
resource "aws_iam_role" "test_role" {
name = "test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
From: https://www.terraform.io/docs/providers/aws/r/iam_role.html
To understand the meaning of this it is necessary to understand some details of how IAM Roles work.
An IAM role is similar to a user in its structure, but rather than it being accessed by a fixed set of credentials it is instead used by assuming the role, which means to request and obtain temporary API credentials that allow taking action with the privileges that are granted to the role.
The sts:AssumeRole action is the means by which such temporary credentials are obtained. To use it, a user or application calls this API using some already-obtained credentials, such as a user's fixed access key, and it returns (if permitted) a new set of credentials to act as the role. This is the mechanism by which AWS services can call into other AWS services on your behalf, by which IAM Instance Profiles work in EC2, and by which a user can temporarily switch access level or accounts within the AWS console.
The assume role policy determines which principals (users, other roles, AWS services) are permitted to call sts:AssumeRole for this role. In this example, the EC2 service itself is given access, which means that EC2 is able to take actions on your behalf using this role.
This role resource alone is not useful, since it doesn't have any IAM policies associated and thus does not grant any access. Thus an aws_iam_role resource will always be accompanied by at least one other resource to specify its access permissions. There are several ways to do this:
Use aws_iam_role_policy to attach a policy directly to the role (a sketch follows this list). In this case, the policy will describe a set of AWS actions the role is permitted to execute, and optionally other constraints.
Use aws_iam_policy to create a standalone policy, and then use aws_iam_policy_attachment to associate that policy with one or more roles, users, and groups. This approach is useful if you wish to attach a single policy to multiple roles and/or users.
Use service-specific mechanisms to attach policies at the service level. This is a different way to approach the problem, where rather than attaching the policy to the role, it is instead attached to the object whose access is being controlled. The mechanism for doing this varies by service, but for example the policy attribute on aws_s3_bucket sets bucket-specific policies; the Principal element in the policy document can be used to specify which principals (e.g. roles) can take certain actions.
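For instance, a minimal sketch of the first approach (the bucket name example-bucket is a placeholder):

resource "aws_iam_role_policy" "test_policy" {
  name = "test_policy"
  role = aws_iam_role.test_role.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF
}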
IAM is a flexible system that supports several different approaches to access control. Which approach is right for you will depend largely on how your organization approaches security and access control concerns: managing policies from the role perspective, with aws_iam_role_policy and aws_iam_policy_attachment, is usually appropriate for organizations that have a centralized security team that oversees access throughout an account, while service-specific policies delegate the access control decisions to the person or team responsible for each separate object. Both approaches can be combined, as part of a defense in depth strategy, such as using role- and user-level policies for "border" access controls (controlling access from outside) and service-level policies for internal access controls (controlling interactions between objects within your account).
More details on roles can be found in the AWS IAM guide IAM Roles. See also Access Management, which covers the general concepts of access control within IAM.
Let's assume a user-based IAM policy i.e. one that can be attached to a user, group or role.
Let's say one that gives full access to a DynamoDB table:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    }
}
Based on this policy, any user who somehow ends up with that policy attached to them (via assuming a role or directly for example) gets full access to that DynamoDB table.
Question 1: Is it worth having a resource-based policy on the other end i.e. on the DynamoDB table to complement the user-based policy?
Example:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/bob"},
        "Action": "dynamodb:*",
        "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
    }
}
The motivation here is that the previous policy might end up being attached to someone by accident and using the resource-based one would ensure that only user Bob will ever be given these permissions.
Question 2: Is using the stricter resource-policy only preferable maybe?
Question 3: In general, are there any best practices / patterns for picking between user-based vs resource-based policies (for the services that support resource-based policies that is)?
Answer 0: DynamoDB does not support resource-based policies.
The console GUI makes it look as though it does, but the API has no operation for that.
And the documentation is clear: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/access-control-overview.html#access-control-manage-access-resource-based
Answer 1: Do not use IAM and resource policies on the same resource
The challenge with access control is maintaining it over the long run:
Permissions for new hires must be set up correctly and swiftly (they want to work!)
Permissions for leavers must be removed swiftly (for whatever reason)
Someone has to regularly review the permissions and approve them
All three of the tasks above are much easier if there is only a single place to look. And use "Effect": "Deny" if you want to restrict access.
Any "accidental assignment" would be caught by the review.
Answer 1b:
Of course, it depends on the use case (e.g. a four-eyes principle can demand it). Some permissions cannot be set in IAM (e.g. "Everyone") and must be set on the resource. And if you destroy and recreate the resource, the resource-based permission disappears.
Answer 2: IAM policy is easier to manage
If the situation allows both an IAM and a resource policy, they have the same grammar and can be made equally strict, at least in your case. All else being equal, IAM policies are much easier to manage.
Answer 3: Best practice
Unfortunately, I am not aware of a best practice issued by AWS, apart from "minimal privileges" of course. I suggest you go with whatever is most maintainable, as you would for permissions outside of AWS.
It depends on whether you are making a request within the same AWS account or a cross-account request.
Within the same AWS account -- meaning your user belongs to the AWS account that owns the resource (S3, SQS, SNS, etc.) -- you can use either an identity-based policy (user, group, role) or a resource-based policy (SQS, SNS, S3, API Gateway). Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
However, when you are delegating access to different AWS accounts, it can vary. For API Gateway, for example, you need an explicit allow from both the identity-based policy and the resource-based policy.
source: API Gateway Authorization Flow
The answer to all of your questions is: it depends.
Both IAM policies and resource policies are important; which one fits depends on the use case.
Say you want to grant permissions to an AWS managed service, such as allowing CloudFront to read an S3 bucket: it's better to use resource policies.
But for uploading or changing content, it's better to go via IAM policies.
In simple terms, use IAM policies when granting access to a user, an external system, or user-managed instances, and use resource policies when granting access between AWS managed services.
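As a sketch of that CloudFront case (the origin access identity ID and bucket name are placeholders), the bucket's resource policy would grant read access like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}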
I've been working on this a long time and I am getting nowhere.
I created a user and it gave me
AWSAccessKeyId
AWSSecretKey
I created a bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObjectAcl",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::abc9876/*"
        }
    ]
}
Now when I use a gulp program to upload to the bucket I see this:
[20:53:58] Starting 'deploy'...
[20:53:58] Finished 'deploy' after 25 ms
[20:53:58] [cache] app.js
Process terminated with code 0.
To me it looks like it should have worked but when I go to the console I cannot see anything in my bucket.
Can someone tell me if my bucket policy looks correct and give me some suggestions on what I could do to test the uploading? Could I, for example, test this from the command line?
There are multiple ways to manage access control on S3. These different mechanisms can be used simultaneously, and the authorization of a request will be the result of the interaction of all the rules in all these mechanisms. Things can get confusing!
Let's try to make things easier to understand. You have:
IAM policies - these are policies you define for specific Users or Groups (or Roles, but let's not get into that...).
S3 bucket policies - these are policies that you define at the bucket level.
S3 ACLs (access control lists) - these are rules that you define both at the bucket level and the object level. This is the permissions area mentioned in a comment on another answer.
Whenever you send a request to S3, e.g. downloading an object, the request will be processed by an authorization system. This system will calculate the union of all the policies/rules described above, and then will follow a process that can be simplified as follows:
If there is any rule explicitly denying the request, it's denied. Period.
Otherwise, if there is any rule explicitly allowing the request, it's allowed. Period.
Otherwise, the request is denied.
Let's say you have all the mechanisms in place. For the request to be accepted, you must not have any rules Denying that request, and need to have at least one rule allowing that request.
Making your policies easier to understand...
My suggestion to you is to simplify your policies. Choose one access control mechanism and stick to it.
In your specific situation, from your very brief description, I feel that using IAM policies could be a good idea. You can use either an IAM User Policy (that you define and attach specifically to your IAM User) or an IAM Group Policy (that you define and attach to a group your IAM User belongs to). Let's forget about IAM Roles, that is a whole different story.
Then delete your ACLs and Bucket Policies. Your requests should be allowed then.
As an additional hint, make sure the software you are using to upload objects to S3 is actually using those 2 API calls: PutObject and PutObjectAcl. Keep in mind that S3 supports multi-part upload, through the use of a different set of API calls. If your tool is doing multi-part uploads under the hood, then be sure to allow those API calls as well (many tools will, including the AWS CLI, and many SDKs have a higher level S3 API that will do that as well)!
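If you go the IAM route, a minimal sketch of a user policy for this upload workflow might look like the following (the multipart-related actions are included in case your tool uploads large files in parts; note that identity-based policies carry no Principal element):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::abc9876/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::abc9876"
        }
    ]
}

And yes, you can test from the command line: with the AWS CLI configured with that user's keys, aws s3 cp app.js s3://abc9876/app.js should upload the file, and aws s3 ls s3://abc9876/ should then show the object.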
For more information on this matter, I'd suggest the following post from the AWS Security Blog:
IAM policies and Bucket Policies and ACLs! Oh My! (Controlling Access to S3 Resources)
You don't need to define "Principal": "*", since you have already created an IAM user.
The bucket policy looks fine; if there were a problem with access, it would have given you an appropriate error.
Just make sure your key name -- the name that uniquely identifies the object in a bucket -- is correct when calling the AWS APIs.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html