I am attempting to create a CloudFormation template for an AWS lambda service and I'm running into a "chicken or the egg" scenario between the s3 bucket holding my lambda code, and the lambda function calling said bucket.
The intent is for our lambda code to be built into a jar, which will be hosted in an S3 bucket, and our lambda function will reference that bucket. However, when I run the template (using the CLI: aws cloudformation create-stack --template-body "file://template.yaml"), I run into the following error creating the lambda function:
CREATE_FAILED Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ...; Proxy: null)
I believe this is happening because CloudFormation builds both the bucket and the lambda function in the same transaction, and I can't pause it in the middle to push content into the brand-new bucket.
I can't be the only one that has this problem, so I'm wondering if there's a common practice for tackling it? I'd like to keep all my configuration in a single template file if possible, but the only solutions I'm coming up with would require splitting the stack creation into multiple steps. (e.g. build the bucket first, deploy my code to it, then create the rest of the stack.) Is there a better way to do this?
template.yaml (the relevant bits)
...
  myS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
  myLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-dec"
      Handler: "lambda.Handler"
      Role: !GetAtt myLambdaExecutionRole.Arn
      Code:
        S3Bucket: !Ref myS3Bucket
        S3Key: "emy-lambda-fn.jar"
      Runtime: "java8"
      Timeout: 90
      MemorySize: 384
      Environment:
        Variables:
          stackName: !Sub "${AWS::StackName}"
...
I'm coming up with would require splitting the stack creation into multiple steps. [...] Is there a better way to do this?
Splitting the template into two is the most logical and easiest way of doing what you are trying to do.
There are some alternatives that would allow you to keep everything in one template, but they are more difficult to implement, manage, and use. One alternative is to develop a custom resource in the form of a Lambda function that gets invoked after the bucket is created. The Lambda waits and checks for the existence of your emy-lambda-fn.jar in the bucket; once the key is uploaded (within the 15-minute maximum), the function returns and your stack creation continues. This means your myLambdaFunction would be created only after the custom resource returns, ensuring that emy-lambda-fn.jar exists.
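For illustration, a rough sketch of that custom-resource wiring might look like the following. Everything here is hypothetical scaffolding: the Custom::S3ObjectWaiter type name, the KeyWaiterFunction resource, and its inline code are assumptions, and the role needs s3:GetObject/HeadObject on the bucket plus CloudWatch Logs permissions. It is not a drop-in solution:

```yaml
  # Hypothetical waiter: polls the bucket until the key appears, then signals CloudFormation.
  KeyWaiterFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Timeout: 900  # give the upload up to the 15-minute Lambda maximum
      Role: !GetAtt myLambdaExecutionRole.Arn  # assumed to have S3 read + Logs permissions
      Code:
        ZipFile: |
          import time
          import boto3
          import cfnresponse  # provided by AWS for inline (ZipFile) custom resources

          def handler(event, context):
              if event['RequestType'] == 'Create':
                  s3 = boto3.client('s3')
                  props = event['ResourceProperties']
                  while True:
                      try:
                          s3.head_object(Bucket=props['Bucket'], Key=props['Key'])
                          break  # key exists, let the stack proceed
                      except s3.exceptions.ClientError:
                          time.sleep(15)
              cfnresponse.send(event, context, cfnresponse.SUCCESS, {})

  JarExists:
    Type: Custom::S3ObjectWaiter
    Properties:
      ServiceToken: !GetAtt KeyWaiterFunction.Arn
      Bucket: !Ref myS3Bucket
      Key: emy-lambda-fn.jar

  # Then gate the real function on the waiter:
  # myLambdaFunction:
  #   DependsOn: JarExists
```

One caveat with this sketch: if the key never shows up, the waiter times out without sending a response, and the stack will hang until CloudFormation gives up; a production version should send a FAILED response shortly before the timeout so the stack fails cleanly.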
Related
I have a CFN template in which I create two S3 buckets for image resizing using CloudFront.
The issue is that I want to use an already existing S3 bucket for these functions, but I get an error that the S3 bucket already exists when I provide the resource ARN and other data.
How can I resolve this?
I tried supplying the details (ARN, name, etc.) and deploying, but it doesn't work.
Something like this would help you:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CFN template example for referencing existing S3 bucket to lambda'
Parameters:
  myS3Bucket:
    Type: String
    Description: Provide the S3 bucket you want to reference in your lambda.
Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: A lambda function
      Handler: index.handler
      Runtime: python3.7
      Environment:
        Variables:
          S3_BUCKET: !Ref myS3Bucket
I am trying to set up the Inventory configuration for an S3 bucket with CloudFormation. I want to get daily inventories of data in one subfolder, and have the inventories written to a different subfolder in the same bucket. I have defined the bucket as follows:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      # ...other properties...
      InventoryConfigurations:
        - Id: runs
          Enabled: true
          Destination:
            BucketAccountId: !Ref AWS::AccountId
            BucketArn: !GetAtt S3Bucket.Arn
            Format: CSV
            Prefix: inventory/runs/
          IncludedObjectVersions: Current
          OptionalFields: [ETag, Size, BucketKeyStatus]
          Prefix: runs/
          ScheduleFrequency: Daily
Unfortunately, the !GetAtt S3Bucket.Arn line seems to be failing, causing an error message like "Error: Failed to create changeset for the stack: , ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Circular dependency between resource". If I use the actual ARN of the bucket in place of !GetAtt S3Bucket.Arn (it already exists from a previous version of the stack), then the deploy succeeds, so I know buckets can write Inventories to themselves.
So I guess my question is, is there a way to let Cfn resources call !GetAtt on themselves, so I don't have to hard-code the bucket ARN in InventoryConfigurations? Thanks in advance!
Can AWS CloudFormation resources call !GetAtt on themselves?
Unfortunately no: !GetAtt is used to reference other resources in the stack, as you've experienced (other as in resources that have already been created).
However, in your case, considering you know the bucket name, you could just construct the bucket ARN yourself directly.
Format:
arn:aws:s3:::bucket_name
e.g. if the name is test, you can use arn:aws:s3:::test
Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: 'arn:aws:s3:::test'
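Alternatively, since the template already knows the bucket name, the ARN can be assembled with !Sub instead of hard-coding it (the BucketName parameter referenced here is illustrative, not from the original template):

```yaml
Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: !Sub 'arn:aws:s3:::${BucketName}'  # BucketName: an assumed String parameter holding the bucket name
```

This keeps the template portable across environments while still avoiding the circular self-reference.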
I'm new to SAM templates. I have the following snippet of my SAM template, where I pass the name of the bucket as a parameter from outside of this SAM YAML file:
SAM Template:-
  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./functions/test/dist/
      Handler: index.lambdaHandler
      Runtime: nodejs12.x
      Events:
        S3PutObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref S3BucketName
            Events: s3:ObjectCreated:*
Parameter.YAML:-
DeploymentEnvironment:
  default:
    S3BucketName: my-awesome-s3-bucket
Now, I do not create any S3 bucket using the SAM template or infrastructure as code (IaC). Bucket creation is done by the Lambda code itself, hence there is no S3 bucket resource declared in my SAM template.
When I run sam validate to validate the SAM template, I get this error:
/template.yaml' was invalid SAM Template.
Error: [InvalidResourceException('MyLambda', 'Event with id [S3PutObjectEvent] is invalid. S3 events must reference an S3 bucket in the same template.')] ('MyLambda', 'Event with id [S3PutObjectEvent] is invalid. S3 events must reference an S3 bucket in the same template.')
I really need your help with this, as I've tried hard to solve it. I've read various forums, but I'm not sure whether the bucket name can be passed in from outside the SAM template.
Is there any workaround? This is a really critical issue for me. I appreciate your help on this, thanks.
Bucket creation is done by Lambda code itself
I'd recommend against this pattern, as your Lambda event source won't get created if the bucket doesn't already exist.
Try creating the bucket in your SAM template, and pass the bucket name to your function as an environment variable.
Optionally you can set different environment names on your bucket name (addressing comment) using Parameters.
Parameters:
  Env:
    Type: String
    AllowedValues:
      - dev
      - qa
      - prod
    Default: dev
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'my-unique-bucket-name-${Env}'  # bucket names must be lowercase
  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./functions/test/dist/
      Handler: index.lambdaHandler
      Runtime: nodejs12.x
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyBucket # passed to Lambda as environment variable
      Events:
        S3PutObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref MyBucket
            Events: s3:ObjectCreated:*
And get the bucket name in your function
const bucket = process.env.BUCKET_NAME
I want to create a deployment script for some lambda functions using AWS SAM. Two of those functions will be deployed into one account (account A) but will be triggered by an S3 bucket object-creation event in a second account (account B). From what I know, the only way to do this is by adding a resource-based policy to my lambda, but I don't know how to do that in AWS SAM. My current yaml file looks like this.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  deploy-test-s3-triggered-lambda
Parameters:
  AppBucketName:
    Type: String
    Description: "REQUIRED: Unique S3 bucket name to use for the app."
Resources:
  S3TriggeredLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: arn:aws:iam::************:role/lambda-s3-role
      Handler: src/handlers/s3-triggered-lambda.invokeAPI
      CodeUri: src/handlers/s3-triggered-lambda.js.zip
      Runtime: nodejs10.x
      MemorySize: 128
      Timeout: 60
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref AppBucketName
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref AppBucket
            Events: s3:ObjectCreated:*
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref AppBucketName
What do I need to add to this yaml file in order to tie a resource based policy that allows for cross account access to my lambda function?
This can be achieved with the help of AWS::Lambda::Permission, e.g. using aws_cdk.aws_lambda.CfnPermission in the CDK.
For example, to allow your lambda to be called from a role in another account, add the following to your CDK:
from aws_cdk import aws_lambda

aws_lambda.CfnPermission(
    scope,
    "CrossAccountInvocationPermission",
    action="lambda:InvokeFunction",
    function_name="FunctionName",
    principal="arn:aws:iam::111111111111:role/rolename",
)
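Since the question uses SAM/CloudFormation rather than CDK, the equivalent raw resource in the template would look roughly like this (the logical ID is illustrative and the principal ARN is a placeholder; adjust both to your accounts):

```yaml
  CrossAccountInvocationPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref S3TriggeredLambda
      Principal: arn:aws:iam::111111111111:role/rolename
```

This grants the named principal in the other account permission to invoke the function; the S3 event wiring on the account B side still has to be configured in that account.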
If your bucket and your Lambda function exist in separate accounts I don't know if it's possible to modify both of them from SAM / a single CloudFormation template.
I don't think a cross-account S3 event is possible with SAM; you may need to go back to plain CFN.
Can an S3 bucket and its triggered Lambda be created in separate CloudFormation templates? I want to keep long-running resources in a separate stack from the likes of Lambda, which gets updated quite frequently.
When I tried to create the Lambda separately, it said that the bucket defined in the Lambda event must be defined in the same template and cannot be referenced.
S3 events must reference an S3 bucket in the same template.
GetFileMetadata:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub '${targetenv}-lambdaname'
    CodeUri: target-file-0.0.1-SNAPSHOT.jar
    Handler: LambdaFunctionHandler::handleRequest
    Runtime: java8
    Timeout: 30
    MemorySize: 512
    Environment:
      Variables:
        STAGE: !Sub '${targetenv}'
    Events:
      S3Event:
        Type: S3
        Properties:
          Bucket:
            Ref: MyS3Bucket
          Events:
            - 's3:ObjectCreated:*'

MyS3Bucket:
  Type: 'AWS::S3::Bucket'
  DependsOn: BucketPermission
  Properties:
    BucketName: !Sub 'bucketname-${targetenv}'
On November 21 2021, AWS announced S3 Event Notifications with Amazon EventBridge. Consequently, you can deploy one stack with an S3 bucket with EventBridge integration enabled and then a second stack with a Lambda function that is triggered by EventBridge events for the specific bucket.
Persistence Stack:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Stack with S3 bucket with EventBridge event notification enabled'
Parameters:
  BucketName:
    Type: String
    Description: 'Name of the bucket to be created'
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        EventBridgeConfiguration:
          EventBridgeEnabled: true
        # Alternatively, shorthand config:
        # EventBridgeConfiguration: {}
Application Stack:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Stack with Lambda for processing S3 events via EventBridge
Parameters:
  BucketName:
    Type: String
    Description: Name of the bucket to listen events from
Resources:
  S3EventProcessor:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: S3EventListener
      Architectures:
        - arm64
      Runtime: nodejs14.x
      Handler: index.handler
      InlineCode: |
        exports.handler = (event, context) => {
          console.log('event:', JSON.stringify(event));
        }
      Events:
        S3EventBridgeRule:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - aws.s3
              detail:
                bucket:
                  name:
                    - !Ref BucketName
By configuring the Pattern, you can filter the event stream for more specific events such as object creation or deletion, file names, file extensions, etc. You can find more info in the EventBridge user guide.
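As a sketch, a pattern narrowed to object-creation events under a given key prefix could look like this (the detail-type value follows the S3 EventBridge event schema; the uploads/ prefix is an illustrative assumption):

```yaml
Pattern:
  source:
    - aws.s3
  detail-type:
    - Object Created
  detail:
    bucket:
      name:
        - !Ref BucketName
    object:
      key:
        - prefix: uploads/
```

This rule would only fire for newly created objects whose keys start with uploads/ in the named bucket.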
This could not be done when this answer was originally written, but there has been progress in this area. Since then, S3 has added support for SNS and SQS events in the AWS::S3::Bucket NotificationConfiguration, which could be declared in one stack and then imported into the other stack. More recently, AWS has also added EventBridge as yet another option; please see my other answer.
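As a sketch of the SNS variant (the topic, export name, and handler wiring are illustrative assumptions), the bucket stack publishes object-created notifications to a topic and exports its ARN:

```yaml
# Bucket stack
Resources:
  UploadTopic:
    Type: AWS::SNS::Topic
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*
            Topic: !Ref UploadTopic
Outputs:
  UploadTopicArn:
    Value: !Ref UploadTopic
    Export:
      Name: UploadTopicArn
```

and the function stack subscribes to it via SAM's SNS event type:

```yaml
# Function stack, under the AWS::Serverless::Function Events section
OnUpload:
  Type: SNS
  Properties:
    Topic: !ImportValue UploadTopicArn
```

Note that the topic also needs a topic policy allowing s3.amazonaws.com to publish to it, otherwise S3 rejects the notification configuration when the bucket stack is created.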
This is not possible in SAM version 2016-10-31. Copied from the S3 event source type in the SAM documentation:
NOTE: To specify an S3 bucket as an event source for a Lambda function, both resources have to be declared in the same template. AWS SAM does not support specifying an existing bucket as an event source.
The template is creating a bucket (MyS3Bucket).
Then, the serverless function is referencing it:
Bucket:
  Ref: MyS3Bucket
If you want to refer to that bucket from another template, you can export the bucket name from the first stack:
Outputs:
  S3Bucket:
    Description: Bucket that was created
    Value: !Ref MyS3Bucket
    Export:
      Name: Stack1-Bucket
Then, import it into the second stack:
Bucket:
  Fn::ImportValue: Stack1-Bucket
See: Exporting Stack Output Values - AWS CloudFormation