I have an AWS Lambda function in production. Triggering it can lead to monetary transactions. I want to block the ability to test this Lambda through the AWS console, so that users with console access cannot accidentally trigger it for testing purposes (which they can do on the corresponding staging Lambda). Is that somehow possible?
The first solution I would recommend is not to mix production and other workloads in the same AWS account. Combine that with not giving your developers and users credentials to the production account.
Assuming that you don't want to do that, you could apply a resource policy on the Lambda function that denies all regular IAM users permission to invoke the Lambda function. Be sure that your policy does not deny the 'real' source in your production system (e.g. API Gateway or SQS or S3). You should also prevent your users from modifying the resource policy on the Lambda function.
Alternatively, if all of your IAM users are managed under IAM groups, you could apply an additional group policy that denies all actions on the Lambda function ARN. Again, ensure that they cannot modify the group policy to remove this control.
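A group policy along these lines might look like the following sketch. The region, account ID, and function name are placeholders; the explicit Deny overrides any Allow the users may otherwise have, and denying the permission-management actions as well helps prevent the control from being removed:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyProdLambdaInvoke",
      "Effect": "Deny",
      "Action": [
        "lambda:InvokeFunction",
        "lambda:AddPermission",
        "lambda:RemovePermission"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:prod-payments"
    }
  ]
}
```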
I have registered a free-tier AWS account and created a simple, public Lambda service for me and others to play around with. However, since I do not yet know what usage will look like, I want to be careful for now. Otherwise someone could simply flood my service with a million requests and I would get billed for it. I'd rather make the service unavailable than risk that.
Therefore, I want to create a budget action that shuts down all services as soon as $0.01 is exceeded. The way I've done this is that I've granted the Lambda service role (which was auto-created when I set up the Lambda) the budget permission (budgets.amazonaws.com), and then set up an IAM action that attaches the AWSDenyAll managed policy to the role once the budget is exceeded.
This does not seem to work. If I manually attach the AWSDenyAll policy, the Lambda service is still available. My understanding of the roles/policies system may also be fundamentally wrong.
How can I achieve a "total shutdown" action that can be triggered from a budget alert?
You're applying the AWSDenyAll policy to the execution role of the Lambda function, which is used to define permissions to access AWS resources from the Lambda itself (Configuration > Permissions > Execution role).
You essentially have blocked the Lambda function itself from accessing AWS services.
You haven't blocked any IAM principals (users or roles), AWS services (including API Gateway), or other AWS accounts, which is why your Lambda can still be invoked manually or via the gateway.
A question that may now arise is "how can I prevent API Gateway from invoking my Lambda?".
The way that API Gateway is given access to trigger your Lambda is via resource-based permissions policies (Configuration > Permissions > Resource-based policy).
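When the console wires up an API Gateway trigger, the statement it adds to the function's resource-based policy looks roughly like this sketch (the account ID, function name, API ID, and route are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "apigateway-invoke",
      "Effect": "Allow",
      "Principal": { "Service": "apigateway.amazonaws.com" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:execute-api:us-east-1:123456789012:abc123def0/*/GET/items"
        }
      }
    }
  ]
}
```

The SourceArn condition is what limits the grant to one particular API (and optionally stage/method), rather than any API Gateway in any account.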
This is not "encapsulated" within an IAM entity (user or role) and currently, you can only update resource-based policies for Lambda resources within the scope of the AddPermission and AddLayerVersionPermission API actions.
This means that the only way to revoke API Gateway's permission to invoke your function is to delete the resource policy statement that allows it, using the RemovePermission API action or the console.
There would be no way to do this via budget actions.
The other question that can arise is "how can I prevent API Gateway and the Lambda function from being invoked then?".
This still wouldn't be possible using budget actions: per the docs, you can only apply an IAM policy or a service control policy (SCP), neither of which helps you prevent the triggering of a Lambda that is invoked via API Gateway. You can prevent the Lambda from being triggered by AWS users within the console, but you can't block API Gateway unless you are using IAM to authenticate your users.
There isn't any way to "shutdown" Lambda functions or the API Gateway once you hit a specific budget limit.
You will just have to create a budget that alerts you (for example, a monthly usage budget with a fixed usage amount and actual/forecasted notifications, filtered on the service dimension to the Lambda and API Gateway services) and then take manual action.
I'm struggling to understand the practical differences between an execution role that API Gateway can assume to gain permission to execute a Lambda, and a Lambda resource-based policy.
For example, the documentation here provides an example of a policy that can be assumed by the API gateway to invoke a Lambda.
However, the API Gateway console grants itself permission to access a Lambda via a Lambda resource-based policy.
Both achieve the desired outcome of allowing the API Gateway to execute a Lambda. So is there a reason to choose one over the other?
Apart from the general use case and advantages of having a resource-based policy, which are explained pretty well here:
"does not have to give up his or her permissions to receive the role permissions"
In this specific case, I have experienced two distinct advantages of using Lambda's resource-based policy over a role:
The creator of the Lambda-API Gateway integration does not need access to IAM, since no role is created.
Because no role is created, no role needs to be cleaned up. The developer deletes the Lambda function he created to play around with, and everyone can forget about it.
I think one of the significant advantages of resource-based policies is that they can be applied to specific versions or aliases. This is unlike IAM roles, which cannot target a specific version or alias.
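As an illustration of that point: when a permission is added with a qualifier, the resulting statement is scoped to the alias-qualified function ARN. In this sketch (account ID, function name, and the "prod" alias are placeholders), only the aliased version is invokable by the service principal:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "apigateway-invoke-prod-alias",
      "Effect": "Allow",
      "Principal": { "Service": "apigateway.amazonaws.com" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function:prod"
    }
  ]
}
```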
I want to provide a freelancer the ability to test, debug and deploy lambda functions in the console.
However, the roles I have seen so far are either very restrictive (logging only) or very broad, like AWSLambdaFullAccess with full S3 access(?).
What is the right role here, or do I have to create a custom one?
There are two sets of permissions here.
First, there are the permissions that you are giving the freelancer. These should be sufficient to test, debug and deploy the Lambda function. You might want to limit which functions they are allowed to edit (eg based on a prefix in the function name).
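A prefix-based restriction could be sketched like this identity policy (the region, account ID, prefix, and action list are assumptions you would adapt to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FreelancerLambdaAccess",
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:CreateFunction",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:InvokeFunction"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:freelancer-*"
    }
  ]
}
```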
Second, there is the IAM Role that is associated with the Lambda function. This will need sufficient permission to perform whatever task the Lambda function is doing (eg accessing Amazon S3).
The freelancer will probably need iam:PassRole permission to be able to select an IAM Role for the Lambda functions (or I wonder if you can set that, and they simply cannot edit the role?).
Be very careful when you assign the freelancer iam:PassRole permission: if you do not limit which roles they can pass to Lambda, they can effectively gain access to any IAM Role in your system (including Admin roles). You should limit which Roles they can pass.
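A scoped PassRole grant might look like this sketch (the account ID and role name are placeholders; the iam:PassedToService condition further restricts the pass to Lambda only):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PassOnlyTheLambdaRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/freelancer-lambda-role",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "lambda.amazonaws.com" }
      }
    }
  ]
}
```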
I have a Lambda that fetches data from a Kinesis stream. When assigning the permissions, we give the Lambda execution role a policy to access the Kinesis stream. But we don't give Kinesis any permission that allows the Lambda to get data from it. Why is that?
Similarly, Lambda with DynamoDB is the same case. But when we integrate Lambda with API Gateway, we add a permission to the Lambda so that API Gateway can invoke it.
I want to understand the basic concept of IAM permissions and roles that defines which resources we should give permissions to and which we shouldn't. I am quite new to these IAM concepts, so any explanation would be really helpful.
The Lambda execution role grants the function permission to access the AWS services and resources it needs. Lambda assumes this role during execution.
That is why, as you mentioned, you give Kinesis (or DynamoDB) permissions in the execution role: you perform operations on these services from within the Lambda.
The permission you add for API Gateway, however, is a resource-based policy that allows API Gateway (or any AWS service) to invoke your function.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html
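To make the distinction concrete, the Kinesis side lives in the execution role as an ordinary identity policy, along the lines of this sketch (region, account ID, and stream name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:ListStreams"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream"
    }
  ]
}
```

The API Gateway side, by contrast, is a statement in the function's resource-based policy, because there the Lambda is the thing being accessed rather than the thing doing the accessing.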
My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail on start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens after my code's been running for several minutes, so it's painful to discover that the IAM role wasn't configured correctly only after a 5 minute delay.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions using the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
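As a rough sketch of how you might call this from code: the helper below assembles the parameters for boto3's `simulate_principal_policy`, with placeholder role, bucket, and prefix ARNs (the actual API call is commented out since it needs valid AWS credentials):

```python
# Sketch: build the parameters for IAM SimulatePrincipalPolicy.
# The role ARN, bucket, and prefix below are placeholders.

def build_simulation_request(principal_arn, actions, resource_arns):
    """Assemble the keyword arguments for simulate_principal_policy."""
    return {
        "PolicySourceArn": principal_arn,  # IAM user/role whose policies to simulate
        "ActionNames": actions,            # e.g. ["s3:PutObject"]
        "ResourceArns": resource_arns,     # e.g. ["arn:aws:s3:::my-bucket/uploads/*"]
    }

params = build_simulation_request(
    "arn:aws:iam::123456789012:role/my-ec2-role",
    ["s3:PutObject"],
    ["arn:aws:s3:::my-bucket/uploads/*"],
)

# With boto3 and valid credentials (not run here):
# import boto3
# iam = boto3.client("iam")
# result = iam.simulate_principal_policy(**params)
# allowed = all(r["EvalDecision"] == "allowed"
#               for r in result["EvaluationResults"])
```

Running this at start-up and failing fast when any `EvalDecision` is not `"allowed"` would catch a misconfigured role before the five-minute delay described above.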
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs via the --dryrun option (note, though, that this option simulates the operation on the client side, so it may not actually exercise IAM permissions):
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 docs for "Dry Run" says the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
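A small sketch of how that dry-run result could be interpreted in code (the instance ID is a placeholder, and the boto3 call itself is commented out since it needs credentials):

```python
# Sketch: interpret the error code returned by an EC2 dry-run call.
# With DryRun=True, a permitted call fails with code "DryRunOperation";
# a forbidden one fails with "UnauthorizedOperation".

def dry_run_permitted(error_code):
    """Map the dry-run error code to a pass/fail permission check."""
    if error_code == "DryRunOperation":
        return True   # credentials have the required permission
    if error_code == "UnauthorizedOperation":
        return False  # permission is missing
    raise ValueError(f"unexpected error code: {error_code}")

# With boto3 (not run here):
# import boto3
# from botocore.exceptions import ClientError
# ec2 = boto3.client("ec2")
# try:
#     ec2.start_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
# except ClientError as e:
#     ok = dry_run_permitted(e.response["Error"]["Code"])
```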