I have a Lambda function in Account A which retrieves data from a source. This Lambda then needs to invoke a Lambda in Account B, passing it the data, which is then uploaded to DynamoDB.
I understand there will be some sort of cross-account permissions required, but am a little unsure whether I need to allow these permissions in Account A or Account B. I can see AWS examples of triggering a Lambda from an S3 bucket, but that's not helping with what I want to do.
I could potentially put an API Gateway in the middle for Lambda A to interact with Lambda B, but that's just adding an extra resource that's not really required.
Your AWS Lambda function in Account A would call the Lambda.invoke() method of the AWS SDK for whichever programming language you are writing the Lambda function in.
I understand there will be some sort of cross-account permissions
required but am a little unsure if I need to allow these permissions in
Account A or Account B
Account B Lambda is the one being called, so Account B has to give permission to Account A to make that call.
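For illustration, that grant is a resource-based policy on the Account B function, of the kind `aws lambda add-permission` creates. A rough sketch, where the account IDs, role name, and function name are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-account-a-invoke",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/lambda-a-execution-role" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:222222222222:function:write-to-dynamodb"
    }
  ]
}
```

The principal here is the execution role of the Account A Lambda, which is tighter than granting the whole account.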
Here is an example of configuring cross-account permissions with a Lambda function: https://yogeshnile.cloud/configure-a-lambda-function-to-assume-an-iam-role-in-another-aws-account-e005e7533a71
Trigger Lambda in Account B from Lambda in Account A -> Lambda.invoke()
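A minimal sketch of that call in Python with boto3, assuming the Account B function's resource-based policy already allows the invoke (the ARN below is a placeholder):

```python
import json

# ARN of the function in Account B; the account ID, region, and
# function name here are hypothetical placeholders.
TARGET_ARN = "arn:aws:lambda:us-east-1:222222222222:function:write-to-dynamodb"

def invoke_account_b(data):
    """Invoke the Account B Lambda from Account A and return its parsed response."""
    import boto3  # AWS SDK for Python, preinstalled in the Lambda runtime
    client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName=TARGET_ARN,           # cross-account calls need the full ARN
        InvocationType="RequestResponse",  # use "Event" to fire and forget
        Payload=json.dumps(data).encode(),
    )
    return json.loads(resp["Payload"].read())
```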
Btw, you don't need the Lambda function in Account B at all: you can grant a role in Account B permissions on the DynamoDB table, and have your Lambda in Account A assume that role and write data directly into DynamoDB in Account B.
Related
We have a setup like this in AWS -
Step Functions include a few Lambda functions that write to S3 buckets
The S3 bucket is passed as an argument to the Lambda functions by the user
API Gateway is set up to invoke the Step Functions, and IAM authorization is enabled
However, currently the Step Functions and Lambdas are invoked under the defined IAM role, but we want all Lambdas to execute as the authenticated user. So if the user invoking the API does not have access to the S3 bucket passed in, the Lambda should fail. How can this be achieved?
One of the responsibilities of Amazon API Gateway is to be a facade for your backend (here, Step Functions and Lambda functions) and to guard it from unauthorized invocation.
I see two options. The first is easy; the second is the more proper way to have full control.
Don't give your IAM users permissions to call this API if they don't have permissions to access data in S3 bucket. Also, remove permissions to access Step Function and Lambda Functions. Apply the principle of least privilege.
Instead of using IAM Users, use Amazon Cognito to authenticate your users to your application. Attach Cognito as an Authorizer to your API. Your Lambda function can get information about the user via context input parameter. Use DynamoDB to store additional information about the user and add business logic to your Lambda to handle any special behavior.
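With a Lambda proxy integration behind a Cognito user pool authorizer, the claims arrive in the event's request context rather than in `context`; a sketch (the claim names are the standard Cognito ones, the response body is illustrative):

```python
def get_caller_claims(event):
    """Return the Cognito claims that API Gateway passes to a Lambda
    proxy integration when a Cognito user pool authorizer is attached."""
    return (
        event.get("requestContext", {})
             .get("authorizer", {})
             .get("claims", {})
    )

def handler(event, context):
    claims = get_caller_claims(event)
    username = claims.get("cognito:username", "unknown")
    # ...branch on the user here, e.g. look up extra attributes in DynamoDB
    return {"statusCode": 200, "body": f"hello {username}"}
```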
I have added the bucket policies, and from Account A I can access buckets in Account B.
Is there a properly structured method for achieving the same from a Lambda function or a Step Function in Account A to Account B?
I have created the Lambda execution role too, but I'm not sure what to do next. Please help.
Find the answer at the link below; generally speaking, it did not take long to google it...
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-assume-iam-role/
Just wondering if anyone has encountered the following use case:
Account A has a Step Functions state machine
Account B has a DynamoDB table
Allow the state machine in Account A to PutItem into the DynamoDB table in Account B
I know that if we use Lambda with Step Functions, Lambda supports resource-based policies: we can set the "Principal" to the state machine ARN from the other account, so the state machine in Account A can execute the Lambda function in Account B.
But DynamoDB does not support resource-based policies. Is there a way to deploy a CloudFormation template that creates a DynamoDB table with a policy/permission allowing a state machine from another account to PutItem into it?
You have the gist of it, but are missing a small element that makes it possible.
Account A - contains:
Lambda that is part of a State Machine
Role A
Account B - Contains:
DynamoDb
Role B
You set up the Lambda with Role A. You give Role A a policy allowing it to assume Role B; you are not giving Role A any DynamoDB permissions, nor setting any resource-based permissions on the DynamoDB table.
You set up Role B with the ability to be assumed by Role A, and with DynamoDB access permissions.
You can now assume Role B using your SDK of choice (STS), resolve the temporary security credentials, store them, and use them for your DynamoDB SDK calls inside your Lambda in Account A.
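The steps above could be sketched like this in Python with boto3; the role ARN and table name are placeholders:

```python
def put_item_in_account_b(item):
    """Assume Role B from the Account A Lambda, then write to the
    Account B DynamoDB table with the temporary credentials."""
    import boto3  # AWS SDK for Python, preinstalled in the Lambda runtime
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::222222222222:role/RoleB",  # placeholder ARN
        RoleSessionName="cross-account-dynamo-write",
    )["Credentials"]
    # Build a DynamoDB client from the temporary credentials of Role B.
    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    dynamodb.put_item(TableName="my-table", Item=item)  # placeholder table name
```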
This is entirely possible, but one of the major drawbacks is that you have to be pretty explicit with cross-account role ARNs, and if one side changes their ARNs, the system breaks. It is safer (and better in some ways) to set up an API with some basic CRUD operations on the DynamoDB table and have the other account call it; unless you're trying to shave milliseconds, this is generally good enough.
I have data that arrives in S3 in Account A that I want to automatically copy to S3 in Account B, but I do not understand how I can reference the files in Account A from my Lambda in Account B to do the copy.
Completed Steps so far:
1. Account B: inline policy added to the Lambda execution role referencing the Account A S3 bucket
2. Account B: permission given to Account A to invoke the Lambda
3. Account A: bucket policy allowing S3 access to the Lambda execution role in Account B
4. Account A: event notification to the Account B Lambda (all ObjectCreated events)

Am I missing some steps here, and if not, how can my Lambda directly reference the individual files captured by the event?
From the question above, I'm not sure I understand the setup, but here's how I would approach this from an architectural perspective:
A Lambda function inside account A gets triggered by the S3 event when an object is uploaded.
The Lambda function retrieves the uploaded object from the source bucket.
The Lambda function assumes a role in account B, which grants permission to write into the target bucket.
The Lambda function writes the object into the target bucket.
The permissions you need are:
An execution role for the Lambda function in account A that (a) grants permission to read from the source bucket and (b) grants permission to assume the role in account B (see next item below)
A cross-account role in account B, (a) trusting the above execution role and (b) granting permission to write into the target bucket
Note: Make sure to save the object with the bucket-owner-full-control ACL so that Account B has permission to use the copied object.
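Put together, the Lambda in Account A might look roughly like this; the role ARN and target bucket name are placeholders:

```python
def handler(event, context):
    """Copy each object from the S3 event notification into the
    target bucket in Account B via an assumed cross-account role."""
    import boto3  # AWS SDK for Python, preinstalled in the Lambda runtime
    s3_src = boto3.client("s3")  # uses the execution role: read on the source bucket
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::222222222222:role/cross-account-s3-write",  # placeholder
        RoleSessionName="s3-copy",
    )["Credentials"]
    s3_dst = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3_src.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3_dst.put_object(
            Bucket="target-bucket-in-account-b",  # placeholder
            Key=key,
            Body=body,
            ACL="bucket-owner-full-control",  # per the note above
        )
```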
If you want to replicate the objects to a bucket in a different AWS account and don't care about the fact that it can take up to 15 minutes for the replication to be done, you don’t need to build anything yourself. Simply use the Amazon S3 Replication feature.
Replication enables automatic, asynchronous copying of objects across
Amazon S3 buckets. Buckets that are configured for object replication
can be owned by the same AWS account or by different accounts. You can
copy objects between different AWS Regions or within the same Region.
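A replication configuration of that kind, as accepted by `aws s3api put-bucket-replication`, could look roughly like the sketch below; the role ARN, bucket name, and account ID are placeholders, and both buckets must have versioning enabled:

```json
{
  "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
  "Rules": [
    {
      "ID": "replicate-to-account-b",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::target-bucket-in-account-b",
        "Account": "222222222222",
        "AccessControlTranslation": { "Owner": "Destination" }
      }
    }
  ]
}
```

The AccessControlTranslation block hands ownership of the replicas to the destination account, which replaces the manual bucket-owner-full-control step.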
I have a Lambda which fetches data from a Kinesis stream. When assigning the permissions, we give the Lambda execution role a policy to access the Kinesis stream. But we don't give Kinesis any permission that allows that Lambda to get data from it. Why is that?
It is a similar case for Lambda with DynamoDB. But when we integrate Lambda with API Gateway, we add a permission to the Lambda so that API Gateway can invoke it.
I wanted to understand the basic concept of IAM permissions and roles, which would define which resources we should give permissions to and which we shouldn't. I am quite new to these IAM concepts. Any explanation you can give would be really helpful.
The Lambda execution role grants the function permission to access the necessary AWS services and resources; Lambda assumes the role during execution.
That is why, as you mentioned, you give the role Kinesis or DynamoDB permissions: you perform operations on these services from within the Lambda.
However, the permission you add for API Gateway is a resource-based policy, which allows API Gateway (or any AWS service) to invoke your function.
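Such a resource-based policy statement, of the kind `aws lambda add-permission` adds (the region, account ID, API ID, route, and function name below are placeholders), looks roughly like:

```json
{
  "Sid": "allow-apigateway-invoke",
  "Effect": "Allow",
  "Principal": { "Service": "apigateway.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:111111111111:function:my-function",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:execute-api:us-east-1:111111111111:abcdef1234/*/GET/items"
    }
  }
}
```

Note the direction: the principal here is the calling service, whereas an execution-role policy names the resources the function itself calls.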
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html