Step functions cross account access to DynamoDB - amazon-web-services

Just wondering if anyone has encountered the following use case:
Account A has a step functions state machine
Account B has a DynamoDB table
Allow the state machine from Account A to PutItem into DynamoDB table in Account B
I know if we use Lambda with step functions, it allows resource based policies and we can allow "Principal" in Lambda as the state machine arn from another account and execute the lambda function in Account B from a state machine in Account A.
But DynamoDB does not support resource based policies, is there a way to deploy a CloudFormation template where we create a DynamoDB table with a policy/permission that allows a state machine from another Account PutItem in it?

You have the gist of it, but are missing a small element that makes it possible.
Account A - contains:
Lambda that is part of a State Machine
Role A
Account B - Contains:
DynamoDb
Role B
You set up the Lambda with Role A. You give Role A a policy to assume Role B - you are not giving Role A any DynamoDB permissions, nor setting any resource-based permissions on the DynamoDB table.
You set up Role B with the ability to be assumed by Role A, and with DynamoDB access permissions.
You can now assume Role B using your SDK of choice (via STS), resolve the temporary security credentials, store them, and use them for your DynamoDB SDK calls inside your Lambda in Account A.
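A minimal sketch of that flow in Python with boto3 (the role ARN, table name, and item below are placeholders, not values from the question):

```python
def client_kwargs_from_sts(creds, region):
    """Map the Credentials dict returned by sts.assume_role onto
    boto3 client keyword arguments."""
    return {
        "region_name": region,
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

def put_item_cross_account(role_b_arn, table_name, item, region="us-east-1"):
    """Assume Role B in Account B, then PutItem using the temporary
    credentials. Runs inside the Lambda in Account A (Role A must be
    allowed to sts:AssumeRole on role_b_arn)."""
    # boto3 is available in the Lambda runtime; imported lazily here so
    # the pure helper above can be used without it.
    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_b_arn,
        RoleSessionName="cross-account-putitem",
    )["Credentials"]
    dynamodb = boto3.client("dynamodb", **client_kwargs_from_sts(creds, region))
    # item uses the low-level DynamoDB attribute-value format,
    # e.g. {"pk": {"S": "some-id"}}
    return dynamodb.put_item(TableName=table_name, Item=item)
```

Note that the temporary credentials expire (one hour by default), so long-running code should re-assume the role when they do.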
This is entirely possible, but one of the major drawbacks is that you have to be pretty explicit with cross-account role ARNs - and if one side changes its ARNs, the system breaks. It is safer (and better in some ways) to set up an API with some basic CRUD operations against the DynamoDB table and have the other account call it - unless you're trying to shave milliseconds, this is generally good enough.

Related

Cross account CW query from python lambda

I have two accounts, A and B. I have a service in Account B that writes its logs to CloudWatch. In Account A I have an AWS Lambda function that periodically needs to run a CloudWatch Logs Insights query to retrieve logs that match a pattern.
I can't seem to find a way to set up permissions for this, or how to make a cross-account CloudWatch query from the Lambda in Account A against CloudWatch Logs in Account B. Is this even possible? If so, how?
You can do it using a cross-account access IAM role, assuming the role from the B account.
A good detailed explanation with examples can be found here.
Essentially, you have to assume a role from Account B which allows your Lambda function in Account A to access certain resources in Account B. In the trust policy of that IAM role, your AWS Account A ID (or the Lambda's execution role) has to be set, so your Lambda can access resources based on what the Account B policy allows.
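As a sketch, the trust policy on the role in Account B could be built like this, after which the Lambda in Account A assumes that role and runs the Insights query. All ARNs, account IDs, and the role names here are illustrative placeholders:

```python
import json

def trust_policy(trusted_arn):
    """Trust policy for the log-reader role in Account B; trusted_arn
    would be the Lambda execution role (or account root) in Account A."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": trusted_arn},
            "Action": "sts:AssumeRole",
        }],
    }

print(json.dumps(trust_policy("arn:aws:iam::111111111111:role/query-lambda-role")))

# Inside the Lambda in Account A (boto3 calls shown for illustration only):
# creds = boto3.client("sts").assume_role(
#     RoleArn="arn:aws:iam::222222222222:role/log-reader-role",
#     RoleSessionName="cw-insights")["Credentials"]
# logs = boto3.client("logs",
#     aws_access_key_id=creds["AccessKeyId"],
#     aws_secret_access_key=creds["SecretAccessKey"],
#     aws_session_token=creds["SessionToken"])
# query_id = logs.start_query(logGroupName="/aws/my-service",
#     startTime=start_epoch, endTime=end_epoch,
#     queryString="fields @timestamp, @message | filter @message like /ERROR/")["queryId"]
# results = logs.get_query_results(queryId=query_id)  # poll until status is Complete
```

The role in Account B also needs an identity policy granting `logs:StartQuery` and `logs:GetQueryResults` on the relevant log groups.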

dynamic trust relationship / assume_role_policy with terraform and AWS IAM roles

I am busy experimenting with a way to authenticate + authorize multiple principals for actions in AWS. The most direct approach is to create an IAM role per principal, once the principal exists, with corresponding policies.
The principal type is a federated identity - an OIDC provider created during EKS deployments. I am looking for a way to create a single role in an account (via Terraform), and then (if possible) have a way to add in a principal for each EKS cluster that is provisioned - similarly to how a security group can be created, and then rules can be added to it using aws_security_group_rule from a different module / root.
One approach I can think of is to use an aws_iam_role data object and reference the assume_role_policy, and then somehow (maybe in locals) modify it to add in the OIDC provider - maybe with some sort of condition to check that it isn't there. This feels like a messy approach though (especially when it comes to destroys). The state for the IAM role itself will never be aligned either.... and there's probably a security concern around many principals assuming the same role (e.g. which EKS cluster was responsible for certain actions).
Has anyone gone down this path and come up with any patterns that worked? Or would it be more appropriate to just create a new role for each cluster, with that cluster's provider as the single principal?
Thanks in advance.

What is the right iam role to execute lambda in the console

I want to provide a freelancer the ability to test, debug and deploy lambda functions in the console.
However, the roles I've seen so far are either very restrictive (logging only) or very broad, like AWSLambdaFullAccess with its full S3 access(?).
What is the right role here, or do I have to create a custom one?
There are two sets of permissions here.
First, there are the permissions that you are giving the freelancer. These should be sufficient to test, debug and deploy the Lambda function. You might want to limit which functions they are allowed to edit (eg based on a prefix in the function name).
Second, there is the IAM Role that is associated with the Lambda function. This will need sufficient permission to perform whatever task the Lambda function is doing (eg accessing Amazon S3).
The freelancer will probably need iam:PassRole permission to be able to select an IAM Role for the Lambda functions (or I wonder if you can set that, and they simply cannot edit the role?).
Be very careful when you assign the freelancer iam:PassRole permission because if you do not limit which roles they can pass to Lambda, then they can effectively gain access to any IAM Role in your system (including those for Admins). You should limit which Roles they can pass.
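One way to express that limit is a policy that only allows passing roles whose names match a prefix, and only to the Lambda service. A sketch (the account ID and the `lambda-exec-` naming convention are made up for illustration):

```python
import json

def passrole_policy(account_id, role_prefix="lambda-exec-"):
    """Policy for the freelancer's IAM user/group: iam:PassRole is
    allowed only for roles whose name starts with role_prefix, and only
    when passed to the Lambda service."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": f"arn:aws:iam::{account_id}:role/{role_prefix}*",
            "Condition": {
                "StringEquals": {"iam:PassedToService": "lambda.amazonaws.com"}
            },
        }],
    }

print(json.dumps(passrole_policy("111111111111"), indent=2))
```

Name the Lambda execution roles the freelancer may use with that prefix, and keep admin roles outside it.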

Connect to resources of 2 different AWS accounts via roles

Consider 2 AWS accounts: A (another team's) and B (mine).
For my use case, I have to poll queues in A's AWS account for payloads and perform database operations in B.
To do this, I have a role created in B with access to my databases, and A's account ID added as a trusted relationship.
Likewise, A has created a role granting access to the queue in A, with B added as a trusted relationship.
Code for doing all the polling + database action will reside on EC2 in my account ( B ).
Now, how do I consume payloads and perform operations from a role's perspective? Is my understanding correct?
Assume role A
Poll to provisioning queue, get the JSON payload from SQS
Assume role B
Perform database operations in Oracle RDS
Assume role A
Return back the response to response SQS
Start polling again on provisioning queue SQS
FYI : I am performing the above operations using Python + Boto3
Instead of assuming roles, you can grant cross-account access to the queues in Account A to the user/role in Account B.
In your case, grant it to the EC2 instance role, and then you'll be able to do all the required operations from within the instance without any "assume role" tricks.
You don't say which programming language or SDK you are using, but essentially you can create two client/service objects, one leveraging credentials from role A and the other from role B. Then simply make API calls using the appropriate client/service object.
Using boto3, for example:
sqs_accounta = boto3.client(
    'sqs',
    region_name='us-east-1',
    aws_access_key_id=xxx,
    aws_secret_access_key=yyy,
    aws_session_token=zzz
)

rds_accountb = boto3.client(
    'rds',
    region_name='us-west-2',
    aws_access_key_id=aaa,
    aws_secret_access_key=bbb,
    aws_session_token=ccc
)
Pretend for a moment that everything was happening in your own account (Account-B).
You would give a set of credentials to your code (either an IAM User or, if the code is running on an Amazon EC2 instance, you would assign an IAM Role to the instance) that it can use to access the necessary resources in Account B. So, no problem there.
The only additional requirement is that you wish to access Amazon SQS in Account-A. It so happens that you can add permissions directly to an Amazon SQS queue that grants cross-account access.
See: Basic Examples of IAM Policies for Amazon SQS - Amazon Simple Queue Service
So, you do not actually need to assume any roles. Just use the normal credentials that are assigned to your code, and add permissions to the SQS queue to allow that particular IAM User or IAM Role to use the queue.
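The queue-side grant described above could be built like this (queue and role ARNs are placeholders; the actual apply step needs credentials in Account A):

```python
import json

def cross_account_queue_policy(queue_arn, consumer_role_arn):
    """SQS queue policy (a resource-based policy) allowing a role in
    another account to send, receive, and delete messages."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": consumer_role_arn},
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
            ],
            "Resource": queue_arn,
        }],
    }

# Applying it from Account A, for illustration:
# import boto3
# sqs = boto3.client("sqs", region_name="us-east-1")
# sqs.set_queue_attributes(
#     QueueUrl=queue_url,
#     Attributes={"Policy": json.dumps(cross_account_queue_policy(
#         "arn:aws:sqs:us-east-1:111111111111:provisioning-queue",
#         "arn:aws:iam::222222222222:role/ec2-worker-role"))},
# )
```

The EC2 instance role in Account B then polls the queue with its normal instance credentials; no STS calls are involved.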

AWS access to resources with cross account and IAC setup

I have the following scenario:
We are building infrastructure on AWS with infrastructure as code (IaC) across multiple accounts.
I have Account A and Account B in the beginning, and I want to create the infrastructure in both using CloudFormation and Terraform. When Account A is created, I want to allow a role in Account B to have access to an S3 bucket that is created in Account A. The role in Account B is not yet created; however, I do know what its name is eventually going to be.
My question: can I grant access to non-existing resources between both accounts if I know how they are eventually going to be named?
OR: Do I have to create the resources before I can grant the access?
Stack Sets would allow you to manage stacks in multiple accounts and regions with CloudFormation. CloudFormation is pretty smart about dependencies, but you can explicitly use the DependsOn attribute to have resources wait for the dependent resource to be ready, like the IAM Role for cross-account access in this case.