I'm struggling to understand the practical difference between an execution role that API Gateway can assume to get permission to invoke a Lambda function, and a Lambda resource-based policy.
For example, the documentation here provides an example of a policy for a role that API Gateway can assume in order to invoke a Lambda function.
However, the API Gateway console will grant itself permission to access a Lambda via a lambda resource-based policy.
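The statement the console adds to the function's resource-based policy looks roughly like this (the account ID, region, API ID and function name below are placeholders):

{
  "Sid": "apigateway-test-invoke",
  "Effect": "Allow",
  "Principal": { "Service": "apigateway.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/GET/my-resource"
    }
  }
}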
Both achieve the desired outcome of allowing the API Gateway to execute a Lambda. So is there a reason to choose one over the other?
Apart from the general use cases / advantages of a resource-based policy, which are explained pretty well here:
does not have to give up his or her permissions to receive the role permissions
In this specific case, I have experienced two distinct advantages of using Lambda's resource-based policy over a role:
The creator of the Lambda - API Gateway integration does not need access to IAM, since no role is created.
Because no role is created, there is no role to clean up. The developer deletes the Lambda function they created to play around with, and everyone can forget about it.
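As a rough sketch, the whole grant is a single call against the function itself (function name, API ID and account ID below are placeholders), with no IAM role created or touched:

aws lambda add-permission \
  --function-name my-test-function \
  --statement-id allow-apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/GET/my-resource"

And because the policy lives on the function, deleting the function removes it as well, which is the clean-up point above.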
I think one of the significant advantages of resource-based policies is that they can be applied to specific versions or aliases. This is unlike IAM roles, which cannot target a specific version or alias.
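For example, assuming the function has an alias named prod, the permission can be attached to that alias specifically via the --qualifier option of add-permission (names and ARNs below are placeholders):

aws lambda add-permission \
  --function-name my-function \
  --qualifier prod \
  --statement-id allow-apigateway-invoke-prod \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*/GET/my-resource"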
Related
I want to provide a freelancer the ability to test, debug and deploy lambda functions in the console.
However, the roles I have seen so far are either very restrictive (only logging) or very broad, like AWSLambdaFullAccess: full S3 access(?)
What is the right role here, or do I have to create a custom one?
There are two sets of permissions here.
First, there are the permissions that you are giving the freelancer. These should be sufficient to test, debug and deploy the Lambda function. You might want to limit which functions they are allowed to edit (eg based on a prefix in the function name).
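A minimal sketch of such a policy, assuming the freelancer's functions all share a hypothetical freelance- prefix (the action list is illustrative, not exhaustive):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:InvokeFunction"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:freelance-*"
    }
  ]
}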
Second, there is the IAM Role that is associated with the Lambda function. This will need sufficient permission to perform whatever task the Lambda function is doing (eg accessing Amazon S3).
The freelancer will probably need iam:PassRole permission to be able to select an IAM Role for the Lambda functions (or perhaps you can set the role yourself, so that they simply cannot edit it?).
Be very careful when you give the freelancer iam:PassRole permission: if you do not limit which roles they can pass to Lambda, they can effectively gain the access of any IAM Role in your system (including those for Admins). You should limit which Roles they can pass.
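One way to do that, sketched with a hypothetical role name, is to allow iam:PassRole only for the specific execution role the freelancer's functions should use, and only when it is being passed to Lambda:

{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::123456789012:role/freelance-lambda-execution-role",
  "Condition": {
    "StringEquals": { "iam:PassedToService": "lambda.amazonaws.com" }
  }
}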
In the Documentation for Resource-Based Policies for Lambda, it mentions that it is best practice to include the source account. For example, if you specified a source-arn that referred to an S3 bucket (which does not have the account ID in the ARN), then if you were unlucky and somebody deleted your bucket and another account created a bucket with the same name, they could indirectly access your Lambda function.
But then you also have the notation of a Principal, as in one of the examples they have:
"Principal":{"AWS":"arn:aws:iam::210987654321:root"}
What is the difference between Principal and source-account? Do you use the Principal when you want to refine the permissions down to a particular role or user within an account? And if that isn't your situation and you only want to grant access to your Lambda from an entire account, would you use source-account?
One reason for using aws:SourceAccount is to mitigate the Confused Deputy Problem.
Specifically, in the context of S3, it is used so that S3 is not considered as the confused deputy.
The principal is what is given permission to trigger the resource; in this case the principal is actually the S3 service. Because S3 is not configured to assume IAM roles, the service itself is the caller of the Lambda function.
The conditions underneath then scope the permission so that the S3 service can only invoke the function when the call comes from the specified source account/bucket. Without them, the permission would be open to the whole of Amazon S3.
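Put together, a statement along those lines looks roughly like this (account ID, bucket name and function name are placeholders):

{
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "Condition": {
    "StringEquals": { "AWS:SourceAccount": "123456789012" },
    "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::my-example-bucket" }
  }
}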
You're correct that principals can be used to reference IAM users/roles, or, as in your example, an entire AWS account (assuming the caller is actually an IAM user/role). You would use this method if the caller were an IAM entity rather than another AWS service.
I'm creating a serverless app using API Gateway and Lambda. When creating roles for my API, what is best practice? How granular should I get?
Should I create a new role for every resource?
Or every method?
Or for API Gateway and Lambda respectively?
It's application-specific, but the general rule is to follow AWS best practice and grant least-privilege permissions for accessing your resources.
Following the AWS best practice, you:
Start with a minimum set of permissions and grant additional permissions as necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them later.
More specific to your question: API Gateway doesn't have roles, it has resource policies. These are generally used to specify who/what can invoke your API. I would recommend checking out the official AWS examples of such policies and modelling your policies on them, including how detailed they are: API Gateway resource policy examples
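As a small illustration of the kind of statement those examples contain (the account ID and API ID below are placeholders), a resource policy that only lets a specific account invoke the API might look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*"
    }
  ]
}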
Since, in your setup, Lambda is going to be accessing your other resources (e.g. S3, DynamoDB), you should specify its permissions to access those resources in its execution role.
If several Lambda functions need the same permissions, they can reuse the same role. If you want to use different roles, you can create customer managed IAM policies that you attach and reuse across the different roles.
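For instance, a customer managed policy scoped to a single hypothetical DynamoDB table could be attached to the execution roles of several functions that need the same access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders-table"
    }
  ]
}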
I have an AWS Lambda function in production. Triggering it can lead to monetary transactions. I want to block the ability to test this Lambda through the AWS console, so that users with console access cannot accidentally trigger it for testing purposes (which they can do on the corresponding staging Lambda). Is that somehow possible?
First solution that I would recommend is to not mix production and other workloads in the same AWS account. Combine that with not giving your developers and users credentials to the production account.
Assuming that you don't want to do that, you could apply a resource policy on the Lambda function that denies all regular IAM users permission to invoke the Lambda function. Be sure that your policy does not deny the 'real' source in your production system (e.g. API Gateway or SQS or S3). You should also prevent your users from modifying the resource policy on the Lambda function.
Alternatively, if all of your IAM users are managed under IAM groups, you could apply an additional group policy that denies all actions on the Lambda function's ARN. Again, ensure that they cannot modify the group policy to remove this control.
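A sketch of such a group policy, with a placeholder function ARN; since an explicit Deny overrides any Allow the users might otherwise have, console test invocations of this function will fail:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "lambda:*",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:prod-payments-function"
    }
  ]
}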
I have a Lambda that fetches data from a Kinesis stream. When assigning the permissions, we give the Lambda's execution role a policy to access the Kinesis stream, but we don't give Kinesis any permission that allows that Lambda to get data from it. Why is that?
Similarly, Lambda with DynamoDB is the same case. But when we integrate Lambda with API Gateway, we add a permission to the Lambda so that API Gateway can invoke it.
I want to understand the basic concept of IAM permissions and roles, and how to decide which resource we should give permissions to and which we shouldn't. I am quite new to these IAM concepts, so any explanation would be really helpful.
The Lambda execution role grants it permission to access the necessary AWS services and resources. Lambda assumes the role during execution.
That is why, as you mentioned, you grant the Kinesis (or DynamoDB) permissions there: you perform operations on those services from within the Lambda.
However, the permission you add for API Gateway is a resource-based policy, which allows API Gateway (or any AWS service) to invoke your function.
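A minimal sketch of both sides, with placeholder names: the statement in the execution role grants the function's outbound access to the stream, while a separate add-permission call grants API Gateway inbound access to the function.

Execution role policy statement (what the function can call):

{
  "Effect": "Allow",
  "Action": [
    "kinesis:DescribeStream",
    "kinesis:GetRecords",
    "kinesis:GetShardIterator",
    "kinesis:ListStreams"
  ],
  "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream"
}

Resource-based permission (who can call the function):

aws lambda add-permission \
  --function-name my-function \
  --statement-id allow-apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abcdef1234/*"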
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html