I'm trying to create multiple S3 buckets with the same properties, but I'm not able to create them. I found the following in http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html:
if you have multiple resources of the same type, you can declare them together by separating them with commas
But I didn't find any example and I'm not sure how to do it. I tried debugging but I'm not getting anywhere.
Please suggest.
Below is my YAML file:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  myS3Bucketlo:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: AuthenticatedRead
Outputs:
  WebsiteURL:
    Value: !GetAtt myS3Bucketlo.WebsiteURL
    Description: URL for the website hosted on S3
In a CloudFormation template, each resource must be declared separately. So, even if your buckets have identical properties, they still must be individually declared:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  bucket1:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: AuthenticatedRead
  bucket2:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: AuthenticatedRead
  bucket3:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: AuthenticatedRead
Outputs:
  WebsiteURL1:
    Value: !GetAtt bucket1.WebsiteURL
    Description: URL for the website 1 hosted on S3
  WebsiteURL2:
    Value: !GetAtt bucket2.WebsiteURL
    Description: URL for the website 2 hosted on S3
  WebsiteURL3:
    Value: !GetAtt bucket3.WebsiteURL
    Description: URL for the website 3 hosted on S3
However, the documentation does say:
You must declare each resource separately; however, if you have multiple resources of the same type, you can declare them together by separating them with commas.
The wording implies there is a shortcut to avoid duplication, but I have never seen such a working example. Most likely the sentence is only describing JSON syntax: in a JSON template, sibling resources inside the Resources object are separated by commas, which is not a deduplication mechanism.
I have a CFN template in which I am creating two S3 buckets for image resizing using CloudFront.
The issue is that I want to use an already existing S3 bucket for these functions,
but I get an error that the S3 bucket already exists when I provide the resource ARN and other data.
How can I resolve this?
I tried giving the details (ARN name, etc.) and deploying, but it doesn't work.
Something like this would help you:
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CFN template example for referencing existing S3 bucket to lambda'
Parameters:
  myS3Bucket:
    Type: String
    Description: Provide the S3 bucket you want to reference in your lambda.
Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: A lambda function
      Handler: index.handler
      Runtime: python3.7
      # NOTE: Code and Role are also required for AWS::Lambda::Function; omitted here for brevity
      Environment:
        Variables:
          S3_BUCKET: !Ref myS3Bucket
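If the function should also read from the existing bucket, an execution role along these lines could be attached via the Role property (a minimal sketch; the role and policy names are illustrative):
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: ReadExistingBucket  # illustrative name
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                # builds the ARN from the bucket name passed in as a parameter
                Resource: !Sub 'arn:aws:s3:::${myS3Bucket}/*'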
I have a YAML CloudFormation template which requires a value stored in an SSM parameter. Below is the sample code:
AWSTemplateFormatVersion: 2010-09-09
Description: 'Fully Automated OT Archival Data Migration'
Parameters:
  Environment:
    Description: 'Stage type (for Tags)'
    Type: String
    Default: dev
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'
When I upload the template to CloudFormation in the AWS console, it results in an error. I'm wondering whether the SSM parameter reference is correct or not.
Please let me know if you find any issues here.
Thanks
You are missing the !Sub function for your ${Environment} variable:
BucketName: !Sub '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'
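With that in place, the full resource looks like this; the {{resolve:ssm:...}} dynamic reference is handled by CloudFormation on its own, but the ${Environment} substitution only happens inside Fn::Sub:
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      # dynamic references are resolved after Fn::Sub substitutes ${Environment}
      BucketName: !Sub '{{resolve:ssm:/opentext/config/automated-ot-archival-data-migration/migration.bucket.name:1}}-${Environment}'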
I want to create a deployment script for some Lambda functions using AWS SAM. Two of those functions will be deployed into one account (account A) but will be triggered by an S3 bucket object-creation event in a second account (account B). From what I know, the only way to do this is by adding a resource-based policy to my Lambda, but I don't know how to do that in AWS SAM. My current YAML file looks like this.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  deploy-test-s3-triggered-lambda
Parameters:
  AppBucketName:
    Type: String
    Description: "REQUIRED: Unique S3 bucket name to use for the app."
Resources:
  S3TriggeredLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: arn:aws:iam::************:role/lambda-s3-role
      Handler: src/handlers/s3-triggered-lambda.invokeAPI
      CodeUri: src/handlers/s3-triggered-lambda.js.zip
      Runtime: nodejs10.x
      MemorySize: 128
      Timeout: 60
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref AppBucketName
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref AppBucket
            Events: s3:ObjectCreated:*
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref AppBucketName
What do I need to add to this YAML file in order to attach a resource-based policy that allows cross-account access to my Lambda function?
This can be achieved with the help of AWS::Lambda::Permission, using aws_cdk.aws_lambda.CfnPermission.
For example, to allow your lambda to be called from a role in another account, add the following to your CDK:
from aws_cdk import aws_lambda

aws_lambda.CfnPermission(
    scope,
    "CrossAccountInvocationPermission",
    action="lambda:InvokeFunction",
    function_name="FunctionName",
    principal="arn:aws:iam::111111111111:role/rolename",
)
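If you are working in plain SAM/CloudFormation rather than CDK, the equivalent resource can be declared directly in the template. A sketch for the S3-triggered case above, where the account ID and bucket ARN are placeholders for account B's values:
  S3InvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt S3TriggeredLambda.Arn
      # allow the S3 service to invoke the function...
      Principal: s3.amazonaws.com
      # ...but only on behalf of this bucket in account B (placeholders)
      SourceAccount: '222222222222'
      SourceArn: arn:aws:s3:::bucket-in-account-b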
If your bucket and your Lambda function exist in separate accounts, I don't know if it's possible to modify both of them from SAM / a single CloudFormation template.
I don't think a cross-account S3 event is possible with SAM; you may need to go back to plain CloudFormation.
Can an S3 bucket and the Lambda it triggers be created in separate CloudFormation templates? I want to keep long-running resources in a separate stack from the likes of Lambda functions, which get updated quite frequently.
When I tried to create the Lambda separately, it says that the bucket defined in the Lambda event should be defined in the same template and cannot be referenced.
S3 events must reference an S3 bucket in the same template.
GetFileMetadata:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub '${targetenv}-lambdaname'
    CodeUri: target-file-0.0.1-SNAPSHOT.jar
    Handler: LambdaFunctionHandler::handleRequest
    Runtime: java8
    Timeout: 30
    MemorySize: 512
    Environment:
      Variables:
        STAGE: !Sub '${targetenv}'
    Events:
      S3Event:
        Type: S3
        Properties:
          Bucket:
            Ref: MyS3Bucket
          Events:
            - 's3:ObjectCreated:*'
MyS3Bucket:
  Type: 'AWS::S3::Bucket'
  # BucketPermission (an AWS::Lambda::Permission) is not shown in this snippet
  DependsOn: BucketPermission
  Properties:
    BucketName: !Sub 'bucketname-${targetenv}'
On November 21, 2021, AWS announced S3 Event Notifications with Amazon EventBridge. Consequently, you can deploy one stack containing an S3 bucket with the EventBridge integration enabled, and then a second stack with a Lambda function that is triggered by EventBridge events for that specific bucket.
Persistence Stack:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Stack with S3 bucket with EventBridge event notification enabled'
Parameters:
  BucketName:
    Type: String
    Description: 'Name of the bucket to be created'
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        EventBridgeConfiguration:
          EventBridgeEnabled: true
        # Alternatively, the shorthand config:
        # EventBridgeConfiguration: {}
Application Stack:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Stack with Lambda for processing S3 events via EventBridge
Parameters:
  BucketName:
    Type: String
    Description: Name of the bucket to listen for events from
Resources:
  S3EventProcessor:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: S3EventListener
      Architectures:
        - arm64
      Runtime: nodejs14.x
      Handler: index.handler
      InlineCode: |
        exports.handler = (event, context) => {
          console.log('event:', JSON.stringify(event));
        }
      Events:
        S3EventBridgeRule:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - aws.s3
              detail:
                bucket:
                  name:
                    - !Ref BucketName
By configuring the Pattern, you can filter the event stream for more specific events such as Object Created or Object Deleted, file names, file extensions, etc. Please find more info in the EventBridge user guide.
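For example, a pattern along these lines (a sketch; the suffix filter relies on EventBridge content filtering) would match only newly created .jpg objects:
Pattern:
  source:
    - aws.s3
  # S3's EventBridge events use detail-types like "Object Created" / "Object Deleted"
  detail-type:
    - Object Created
  detail:
    bucket:
      name:
        - !Ref BucketName
    object:
      key:
        - suffix: .jpg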
This could not be done when this answer was originally written, but there has since been progress in this area. S3 has added support for SNS and SQS events in the AWS::S3::Bucket NotificationConfiguration, and the topic or queue can be declared in one stack and then imported into the other stack. More recently, AWS has also added EventBridge as yet another option; please see my other answer.
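As a sketch of the SNS variant (resource names are illustrative), the bucket stack publishes object-created events to a topic, which the Lambda in the other stack can subscribe to; note that S3 requires a topic policy allowing it to publish:
Resources:
  MyTopic:
    Type: AWS::SNS::Topic
  MyTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref MyTopic
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          # let the S3 service publish notifications to this topic
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sns:Publish
            Resource: !Ref MyTopic
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*
            Topic: !Ref MyTopic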
This is not possible in SAM version 2016-10-31. Copied from the S3 event source type in the SAM documentation:
NOTE: To specify an S3 bucket as an event source for a Lambda function, both resources have to be declared in the same template. AWS SAM does not support specifying an existing bucket as an event source.
The template is creating a bucket (MyS3Bucket).
Then, the serverless function is referencing it:
Bucket:
  Ref: MyS3Bucket
If you want to refer to that bucket from another template, you can export the bucket name from the first stack:
Outputs:
  S3Bucket:
    Description: Bucket that was created
    Value: !Ref MyS3Bucket
    Export:
      Name: Stack1-Bucket
Then, import it into the second stack:
Bucket:
  Fn::ImportValue: Stack1-Bucket
See: Exporting Stack Output Values - AWS CloudFormation
We use CloudFormation to define a bunch of Lambda functions:
AWSTemplateFormatVersion: '2010-09-09'
Transform:
  - 'AWS::Serverless-2016-10-31'
Resources:
  MyLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: com.handler::MyLambda
      Runtime: java8
      CodeUri: .
      Description: Some desc
      MemorySize: 512
      Timeout: 15
      Role: !Ref LambdaRole
      FunctionName: MyLambda
      Events:
        MyLambdaEvt:
          Type: Api
          Properties:
            RestApiId: !Ref MyApiDef
            Path: /lambda/my
            Method: get
  MyApiDef:
    Type: AWS::Serverless::Api
    Properties:
      DefinitionUri: s3://a-bucket/api-gateway.yml
      StageName: prod
Outputs:
  ApiUrl:
    Description: URL of your API endpoint
    Value: !Join
      - ''
      - - https://
        - !Ref MyApiDef
        - '.execute-api.'
        - !Ref 'AWS::Region'
        - '.amazonaws.com/prod'
A CodePipeline generates a change set and executes it.
In this way, all the Lambda functions are correctly updated, but the API Gateway endpoint is not updated correctly, and we need to import and deploy the YAML in s3://a-bucket/api-gateway.yml manually.
Why the API doesn't update (an educated guess)
In order for a change to be added to a change set, CloudFormation has to detect a change. If the only thing that changes (for MyApiDef) between deployments is the contents of the .yml file out on S3, CloudFormation isn't going to detect a change that it needs to add to the change set.
If this API definition lived in the CF template, rather than in a file on S3, CF would (obviously) detect every change and update the API for you.
Since the definition lives in S3 and the file name hasn't changed, no change is detected, so nothing gets updated.
Possible workarounds
You have to convince CloudFormation that something has changed with your API definition. These two things worked for me:
- Updating the MyApiDef key itself each run (MyApiDefv2, MyApiDefv3, etc.)
- Updating the DefinitionUri (i.e. versioning the filename in S3)
Neither of these is great, but appending a version to the filename in S3 seems more reasonable than the other option.
There are probably other ways to convince CloudFormation a change has taken place. Notably, I could not get Variables to work for this purpose.
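A related variation on the second workaround, assuming the S3 bucket has object versioning enabled: DefinitionUri also accepts an object form with a Version key, so bumping the pinned object version makes the change visible to CloudFormation without renaming the file. A sketch:
  MyApiDef:
    Type: AWS::Serverless::Api
    Properties:
      DefinitionUri:
        Bucket: a-bucket
        Key: api-gateway.yml
        Version: EXAMPLE-OBJECT-VERSION-ID  # placeholder: S3 object version ID
      StageName: prod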