I have a simple SAM template, and the following snippet grants access to an S3 bucket:
Policies:
  - S3CrudPolicy:
      BucketName: "bucket-a"
But I need to allow access to two buckets, bucket-a and bucket-b. How should I do it? The docs say that BucketName is a string. Does it accept an array or something?
Policies is an array. Thus the following should theoretically work:
Policies:
  - S3CrudPolicy:
      BucketName: "bucket-a"
  - S3CrudPolicy:
      BucketName: "bucket-b"
Related
After reading the S3WritePolicy documentation, it's not clear whether it allows multiple buckets.
I'm currently doing this:
SampleLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - S3WritePolicy:
          BucketName: bucket-1
but if I wanted to include multiple buckets, i.e.:
SampleLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - S3WritePolicy:
          BucketName:
            - bucket-1
            - bucket-2
would this be allowed?
Does S3WritePolicy allow multiple buckets in AWS SAM template?
Yes.
would this be allowed?
No, but the below would be allowed.
This is because S3WritePolicy is a SAM policy template, and it essentially generates a policy for a single bucket. You can, however, use it as many times as needed.
SampleLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - S3WritePolicy:
          BucketName: bucket-1
      - S3WritePolicy:
          BucketName: bucket-2
I need to frequently download terabytes of data from S3 buckets to EC2 instances. I would like to avoid unnecessary cross-region data transfer.
I am aware of "Example 1: Granting a user permission to create a bucket only in a specific Region". I tried the following:
InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
      Version: "2012-10-17"
    Policies:
      - PolicyDocument:
          Statement:
            - Action: s3:*
              Condition:
                StringLike:
                  s3:LocationConstraint: sa-east-1
              Effect: Allow
              Resource: arn:aws:s3:::*
          Version: "2012-10-17"
        PolicyName: s3
InstanceInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - Ref: InstanceRole
Instance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile:
      Ref: InstanceInstanceProfile
    ......
  DependsOn:
    - InstanceRole
However, all S3 buckets deny access from the EC2 instance launched in sa-east-1, regardless of whether the buckets are in sa-east-1 or not.
Is there a complete and working example for my case?
LocationConstraint only works for CreateBucket and CreateAccessPoint. See: Actions, resources, and condition keys for Amazon S3 - Service Authorization Reference
The easiest approach would probably be:
Add an Allow policy that grants all relevant access to S3, then
Add a Deny policy specifically for s3:CreateBucket where s3:LocationConstraint is NOT sa-east-1 (see the sketch below)
Try to avoid granting s3:*, because this also grants permission to delete every bucket and all objects in the account!
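A minimal sketch of that Allow/Deny pair, written in the shape of the InstanceRole policy above (the Allow action list is an illustrative read-only subset, not part of the original answer; scope it to what your instances actually need):
Policies:
  - PolicyName: s3
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        # Allow only the access the instances need (illustrative subset, not s3:*)
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:ListBucket
          Resource: arn:aws:s3:::*
        # Deny bucket creation anywhere other than sa-east-1
        - Effect: Deny
          Action: s3:CreateBucket
          Condition:
            StringNotEquals:
              s3:LocationConstraint: sa-east-1
          Resource: arn:aws:s3:::*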
Thanks a lot to bgdnlp for the suggestion. Following the aws:RequestedRegion link therein, the problem is solved by replacing the Condition with:
"Condition": {
"StringEquals": {
"aws:RequestedRegion": ["sa-east-1"]
}
}
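Translated into the YAML of the template above, that replacement condition on the role's policy statement would read roughly:
Condition:
  StringEquals:
    aws:RequestedRegion: sa-east-1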
I am trying to create a Lambda function with its function code in an S3 bucket. Below is my template.
This template creates the Lambda, but not the S3 bucket mentioned. I am looking for assistance to create the S3 bucket through this template.
Resources:
  ProducerLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: !Sub '${ApplicationId}-${env}-Producer-Lambda-${AWS::AccountId}-${AWS::Region}'
      Handler: index.handler
      Runtime: nodejs14.x
      CodeUri:
        Bucket: s3bucket
        Key: s3bucketref.zip
      Role: 'arn:aws:iam::${AWS::AccountId}:role/Producer-lambda-trigger-role'
      VpcConfig:
        SecurityGroupIds: !Ref SecurityGroup
        SubnetIds: !Ref VPCSubnetId
      Environment:
        Variables:
          Region: !Sub '${AWS::Region}'
CodeUri is used to specify the path to the function's code - this can be an Amazon S3 URI, the path to a local folder, or a FunctionCode object.
It is not used to create S3 buckets.
If the packaged function does not exist at s3bucket/s3bucketref.zip, then you will have to create the bucket yourself and upload the package.
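For example, with the AWS CLI (a sketch; the bucket and key names are taken from the template above):
# create the bucket, then upload the deployment package to the expected key
aws s3 mb s3://s3bucket
aws s3 cp s3bucketref.zip s3://s3bucket/s3bucketref.zip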
Alternatively, running sam build will build the Lambda for you, and sam deploy will then automatically create an S3 bucket for you:
Deploying AWS Lambda functions through AWS CloudFormation requires an Amazon Simple Storage Service (Amazon S3) bucket for the Lambda deployment package. The AWS SAM CLI creates and manages this Amazon S3 bucket for you.
The latter is much simpler to manage.
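A typical first deployment with the SAM CLI then looks like this; the --guided flag prompts for a stack name and saves your answers to samconfig.toml for subsequent deploys:
# build the function locally, then deploy via a SAM-managed S3 bucket
sam build
sam deploy --guided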
To preface this, I'm very new to CloudFormation. I'm trying to build a template that will deploy a fairly simple environment, with two services.
I need to have an S3 bucket that triggers a message to SQS whenever an object is created. When creating these assets, the S3 configuration must include a pointer to the SQS queue. But the SQS queue must have a policy that specifically grants the S3 bucket permission. This creates a circular dependency. In order to break this circle, I would like to do the following:
1. Create the S3 bucket
2. Create the SQS queue, referencing the S3 bucket
3. Modify the S3 bucket to reference the SQS queue
When I try this, I get an error telling me it can't find the SQS queue. When I put a DependsOn in step #3, it errors out with a circular dependency.
Can you declare a resource, then re-declare it with new parameters later in the template? If so, how would you do that? Am I approaching this wrong?
What leads to circular dependencies in such scenarios is the use of intrinsic functions like Ref or Fn::GetAtt, which require the referenced resources to be available. To avoid this, you can specify a resource ARN without referring to the resource itself. Here is an example template where CloudFormation does the following:
1. Create a queue
2. Add a queue policy granting permissions to the not-yet-existing bucket
3. Create the bucket
Template:
Parameters:
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: mynewshinybucket

Resources:
  Queue:
    Type: AWS::SQS::Queue

  QueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref Queue
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: SQS:SendMessage
            Resource: !GetAtt Queue.Arn
            Principal:
              AWS: '*'
            Condition:
              ArnLike:
                # Specify the bucket ARN by referring to a parameter instead of
                # the actual bucket resource, which does not yet exist
                aws:SourceArn: !Sub arn:aws:s3:::${BucketName}

  Bucket:
    Type: AWS::S3::Bucket
    # Create the bucket after the queue policy to avoid
    # "Unable to validate the following destination configurations" errors
    DependsOn: QueuePolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:Put'
            Queue: !GetAtt Queue.Arn
Edit:
When using Ref/GetAtt/Sub to retrieve values from another resource, all of them require that resource to be available.
CloudFormation will make sure that a resource that uses such a function is always created after the referenced resource. This is also how circular dependencies are detected.
Sub is used for string substitution, but works exactly like a Ref when used with parameters or resources (Source).
The point is that we are referring to a parameter (and not a resource), and parameters are always available.
Using Sub is a bit simpler in this case, because using Ref would require an additional Join. For example, this would give you the same result:
aws:SourceArn: !Join
  - ''
  - - 'arn:aws:s3:::'
    - !Ref BucketName
Another way would be to hard-code the bucket ARN without using any intrinsic functions. The important thing is not to reference the bucket itself to avoid the circular dependency.
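With the default parameter value from the template above, that hard-coded form would simply be:
aws:SourceArn: arn:aws:s3:::mynewshinybucket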
I want to create a deployment script for some Lambda functions using AWS SAM. Two of those functions will be deployed into one account (account A) but will be triggered by an S3 bucket object-creation event in a second account (account B). From what I know, the only way to do this is by adding a resource-based policy to my Lambda. But I don't know how to do that in AWS SAM. My current YAML file looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  deploy-test-s3-triggered-lambda

Parameters:
  AppBucketName:
    Type: String
    Description: "REQUIRED: Unique S3 bucket name to use for the app."

Resources:
  S3TriggeredLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: arn:aws:iam::************:role/lambda-s3-role
      Handler: src/handlers/s3-triggered-lambda.invokeAPI
      CodeUri: src/handlers/s3-triggered-lambda.js.zip
      Runtime: nodejs10.x
      MemorySize: 128
      Timeout: 60
      Policies:
        S3ReadPolicy:
          BucketName: !Ref AppBucketName
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref AppBucket
            Events: s3:ObjectCreated:*

  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref AppBucketName
What do I need to add to this YAML file in order to attach a resource-based policy that allows cross-account access to my Lambda function?
This can be achieved with the help of AWS::Lambda::Permission, for example via aws_cdk.aws_lambda.CfnPermission in the CDK.
For example, to allow your lambda to be called from a role in another account, add the following to your CDK:
from aws_cdk import aws_lambda

aws_lambda.CfnPermission(
    scope,
    "CrossAccountInvocationPermission",
    action="lambda:InvokeFunction",
    function_name="FunctionName",
    principal="arn:aws:iam::111111111111:role/rolename",
)
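Since the question uses SAM rather than the CDK, a rough equivalent as a raw template resource might look like the following sketch (the S3 service principal, the source account ID, and the bucket name are assumptions for illustration, not from the original answer):
CrossAccountInvocationPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref S3TriggeredLambda
    Principal: s3.amazonaws.com
    SourceAccount: '111111111111'  # hypothetical ID of account B
    SourceArn: arn:aws:s3:::bucket-in-account-b  # hypothetical bucket in account B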
If your bucket and your Lambda function exist in separate accounts, I don't know if it's possible to modify both of them from SAM / a single CloudFormation template.
I don't think a cross-account S3 event is possible with SAM; you may need to go back to plain CloudFormation.