I am creating a CloudFormation template with a few resources: a couple of Lambda functions and an S3 bucket. See the code below; it is a work in progress, and so far I have an S3 bucket and a Lambda function triggered by S3. Our team has a VPC defined that we are supposed to use. I would like to add a private subnet under that VPC for my Lambda function and assign a public subnet for the S3 bucket. How do I get a reference to the VPC, pass it to my template, and use it? Sample code would be helpful.
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31 # required because the template uses AWS::Serverless::Function
Resources:
# S3 Bucket
S3Bucket:
Type: AWS::S3::Bucket
# Functions
S3LambdaTrigger: # logical IDs must be alphanumeric, so no hyphens
Type: AWS::Serverless::Function
Properties:
CodeUri: .
Handler: lambda.handler
Description: s3 object creation triggers lambda
Runtime: nodejs12.x
Events:
S3Bucket:
Type: S3
Properties:
Bucket: !Ref S3Bucket
Events: 's3:ObjectCreated:*'
# Permissions
AllowLambdaInvocationS3:
Type: AWS::Lambda::Permission
Properties:
Action: 'lambda:InvokeFunction'
FunctionName: !Ref S3LambdaTrigger
Principal: s3.amazonaws.com
SourceArn: !GetAtt S3Bucket.Arn
How do I get a reference to the VPC, pass it to my template, and use it?
One way is to use an AWS-specific parameter type, specifically AWS::EC2::VPC::Id, in a Parameters section.
For example:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
VPCId:
Type: AWS::EC2::VPC::Id
Resources:
MySubnet:
Type: AWS::EC2::Subnet
Properties:
# other properties
VpcId: !Ref VPCId
With this in place, when creating the stack in the AWS Console you can choose an existing VPC ID to pass to the template.
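Building on that, here is a minimal sketch of how the Lambda side could use the shared VPC. It reuses the VPCId parameter from the example above; PrivateSubnetCidr, PrivateSubnet, LambdaSecurityGroup and S3TriggerFunction are placeholder names for illustration, and the CIDR must be adjusted to a free range in your team's VPC. Note that S3 buckets are regional and are not placed in subnets, so only the Lambda function gets a VpcConfig.
Parameters:
  VPCId:
    Type: AWS::EC2::VPC::Id
  PrivateSubnetCidr:
    Type: String
    Default: 10.0.1.0/24   # placeholder, pick an unused range in the shared VPC
Resources:
  PrivateSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPCId
      CidrBlock: !Ref PrivateSubnetCidr
  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the S3-triggered Lambda function
      VpcId: !Ref VPCId
  S3TriggerFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: lambda.handler
      Runtime: nodejs12.x
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
        SubnetIds:
          - !Ref PrivateSubnet
With this, you pass only the VPC ID when creating the stack, and the subnet and security group are created inside it.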
Related
I am trying to get my Lambda function to run when an image is added to a "folder" in an S3 bucket. Here is the template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 1. Creates the S3 bucket that stores the images from the camera.
2. Resizes the images when a new image shows up from a camera.
3. Adds a record of the image in the DB.
Globals:
Function:
Timeout: 10
Parameters:
DeploymentStage:
Type: String
Default: production
Resources:
CameraImagesBucket:
Type: 'AWS::S3::Bucket'
Properties:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
CreateThumbnailFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: image_resize/
Handler: app.lambda_handler
Runtime: python3.8
Description: Creates a thumbnail of images in the camera_images bucket
Policies:
- S3ReadPolicy:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
- S3WritePolicy:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
Events:
CameraImageEvent:
Type: S3
Properties:
Bucket:
Ref: CameraImagesBucket
Events:
- 's3:ObjectCreated:*'
Filter:
S3Key:
Rules:
- Name: prefix
Value: camera_images
When I look at the Lambda function created in the AWS Console, I do not see the trigger, even in the Lambda visualiser. The function doesn't even have the S3 read and write policies attached to it.
The S3 bucket and the Lambda function are created, but the policies and triggers that are supposed to connect them are not.
I did not get any errors when I ran sam deploy.
Question: why did it not attach the S3 trigger event or the S3 access policies to the Lambda function?
Regarding the S3 policies: the template itself is straightforward. If you deploy the full template as-is, does it work? If that also fails, check the permissions of the identity you are running SAM as. There is also an open ticket on GitHub that appears to match your issue; see the comments there.
I'm stuck on a weird issue. I have created an AWS S3 bucket using the following CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
License: Unlicensed
Description: >
This template creates a globally unique S3 bucket in a specific region.
The bucket name is formed from the environment, the account ID and the region.
Parameters:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
Environment:
Description: This parameter accepts the environment details from the user
Type: String
Default: sbx
AllowedValues:
- sbx
- dev
- qa
- e2e
- prod
ConstraintDescription: Invalid environment. Please select one of the given environments only
Resources:
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html
MyS3Bucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
Properties:
BucketName: !Sub 'global-bucket-${Environment}-${AWS::Region}-${AWS::AccountId}' # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
AccessControl: Private
LoggingConfiguration:
DestinationBucketName: !Ref 'LoggingBucket'
LogFilePrefix: 'access-logs'
Tags:
- Key: name
Value: globalbucket
- Key: department
Value: engineering
LoggingBucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
Properties:
BucketName: !Sub 'global-loggings-${Environment}-${AWS::Region}-${AWS::AccountId}'
AccessControl: LogDeliveryWrite
Outputs:
GlobalS3Bucket:
Description: A private S3 bucket with deletion policy as retain and logging configuration
Value: !Ref MyS3Bucket
Export:
Name: global-bucket
Note that in the template above I'm exporting this S3 bucket in the Outputs section under the export name global-bucket.
Now, my intention is to refer to this existing bucket going forward in my AWS account whenever a new resource, like a Lambda function, needs an S3 bucket. Here is an example using AWS SAM (Serverless Application Model): I'm creating an AWS Lambda function and trying to refer to this existing S3 bucket using !ImportValue with the export name global-bucket, as shown below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
hellolambda
Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
Function:
Timeout: 3
Resources:
HelloWorldFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
CodeUri: hello-world/
Handler: app.lambdaHandler
Runtime: nodejs12.x
Events:
HelloLambdaEvent:
Type: S3
Properties:
Bucket: !Ref SrcBucket
Events: s3:ObjectCreated:*
SrcBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !ImportValue global-bucket
Now, the problem is that when I run sam build and then sam deploy --guided and select the same region (where my previous CloudFormation stack output is present), I get the following error:
global-bucket-sbx-ap-southeast-1-088853283839 already exists in stack arn:aws:cloudformation:ap-southeast-1:088853283839:stack/my-s3-global-bucket/aabd20e0-f57d-11ea-80bf-06f1487f6a64
The problem is that AWS CloudFormation is trying to create the S3 bucket rather than referring to the existing one.
But if I try to update this SAM template to reference the existing bucket instead and then run sam deploy, I get the following error:
Waiting for changeset to be created..
Error: Failed to create changeset for the stack: my-lambda-stack, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [HelloWorldFunction] is invalid. Event with id [HelloLambdaEvent] is invalid. S3 events must reference an S3 bucket in the same template.
I'm blocked at both ends. I would really appreciate it if someone could guide me in writing the SAM template correctly so that I can refer to the existing bucket instead of creating a new one.
Thank you
Any item listed under the Resources section is a resource the stack is responsible for creating and maintaining.
When you list SrcBucket, you are asking CloudFormation to create a new S3 bucket whose name is the value of !ImportValue global-bucket, which is the name of an S3 bucket you have already created.
Assuming that export holds the bucket name, you can simply reference it in your template as shown below.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
hellolambda
Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
Function:
Timeout: 3
Resources:
HelloWorldFunction:
Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
Properties:
CodeUri: hello-world/
Handler: app.lambdaHandler
Runtime: nodejs12.x
Events:
HelloLambdaEvent:
Type: S3
Properties:
Bucket: !ImportValue global-bucket
Events: s3:ObjectCreated:*
I have created a CloudFormation template that configures an S3 bucket with an event notification that calls a Lambda function. The Lambda function is triggered whenever a new object is created in the bucket.
The problem I have is that when I delete the stack, the bucket is also deleted. For debugging and testing purposes, I had to delete the stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
Body:
Description: Stack to create s3 bucket and the lambda trigger
Type: String
Default: Test
BucketName:
Description: S3 Bucket name
Type: String
Default: image-process-bucket
Resources:
ImageProcessorExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: 'sts:AssumeRole'
Path: /
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
Policies:
- PolicyName: S3Policy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 's3:PutObject'
- 'S3:DeleteObject'
Resource: !Sub "arn:aws:s3:::${BucketName}/*"
ImageProcessor:
Type: AWS::Lambda::Function
Properties:
Description: Prints the filename
Handler: imageProcessor.handler
Role: !GetAtt ImageProcessorExecutionRole.Arn
Code: .
Runtime: nodejs12.x
Environment:
Variables:
BucketName:
Ref: BucketName
Bucket:
Type: AWS::S3::Bucket
DependsOn: BucketPermission
Properties:
BucketName: !Ref BucketName
NotificationConfiguration:
LambdaConfigurations:
- Event: 's3:ObjectCreated:*'
Function: !GetAtt ImageProcessor.Arn
BucketPermission:
Type: AWS::Lambda::Permission
Properties:
Action: 'lambda:InvokeFunction'
FunctionName: !Ref ImageProcessor
Principal: s3.amazonaws.com
SourceAccount: !Ref "AWS::AccountId"
SourceArn: !Sub "arn:aws:s3:::${BucketName}"
To resolve this, I separated the two resources into separate templates using Outputs. The problem with this is that I cannot delete the Lambda function stack, because it is referenced by the bucket stack.
I want to know what the right approach is. Is it really required to separate these two resources? I expect the Lambda function to change frequently.
If yes, what is the correct way to do it?
If not, how should I handle the need to make changes?
The approach using Outputs and Imports will always create dependencies and will not allow deletion; this is generic behavior for any resource. How do we deal with deletion in this case? Is it good to use this approach?
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
Body:
Description: Stack to create s3 bucket and the lambda trigger
Type: String
Default: Test
BucketName:
Description: S3 Bucket name
Type: String
Default: image-process-bucket
Resources:
ImageProcessorExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: 'sts:AssumeRole'
Path: /
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
Policies:
- PolicyName: S3Policy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 's3:PutObject'
- 'S3:DeleteObject'
Resource: !Sub "arn:aws:s3:::${BucketName}/*"
ImageProcessor:
Type: AWS::Lambda::Function
Properties:
Description: Prints the filename
Handler: imageProcessor.handler
Role: !GetAtt ImageProcessorExecutionRole.Arn
Code: .
Runtime: nodejs12.x
Environment:
Variables:
BucketName:
Ref: BucketName
Outputs:
ImageProcessingARN:
Description: ARN of the function
Value:
Fn::Sub: ${ImageProcessor.Arn}
Export:
Name: ImageProcessingARN
ImageProcessingName:
Description: Name of the function
Value: !Ref ImageProcessor
Export:
Name: ImageProcessingName
AWSTemplateFormatVersion: '2010-09-09'
Description: Test
Parameters:
BucketName:
Description: Name of the bucket
Type: String
Default: imageprocess-bucket
Resources:
Bucket:
Type: AWS::S3::Bucket
DependsOn: BucketPermission
Properties:
BucketName: !Ref BucketName
NotificationConfiguration:
LambdaConfigurations:
- Event: 's3:ObjectCreated:*'
Function:
Fn::ImportValue: ImageProcessingARN
BucketPermission:
Type: AWS::Lambda::Permission
Properties:
Action: 'lambda:InvokeFunction'
FunctionName:
Fn::ImportValue: ImageProcessingName
Principal: s3.amazonaws.com
SourceAccount: !Ref "AWS::AccountId"
SourceArn: !Sub "arn:aws:s3:::${BucketName}"
There is no such thing as the right approach; it almost always depends on your unique situation. Strictly speaking, it is not required to separate the resources into different CloudFormation templates, and a Lambda function that changes a lot is not by itself a sufficient reason to separate them.
You seem to be separating the resources correctly into two different stacks. You just do not like that you have to delete the S3 bucket first, because it makes debugging more difficult.
If my assumption is correct that you want to delete or update the Lambda CloudFormation stack frequently while keeping the S3 bucket, then there are at least two solutions to this problem:
Put a DeletionPolicy and an UpdateReplacePolicy on your S3 bucket. With these attributes you can delete the CloudFormation stack while keeping the S3 bucket, which lets you keep the S3 bucket and the Lambda function in one CloudFormation template. To create the stack again later, remove the S3 bucket resource from the template and manually import the bucket back into the CloudFormation stack. (See the first sketch after this list.)
Use a Queue Configuration as the Notification Configuration. This is a good approach if you plan on splitting the template into an S3 bucket template and a Lambda function template (a decision based on how often each changes and the dependencies between them). Put an SQS queue in the S3 bucket template and create that stack first. Then use the SQS queue ARN (as a template parameter or via the ImportValue intrinsic function) in the Lambda function stack and let SQS trigger the Lambda function. I think this is the best approach, since you can now remove the Lambda function stack without having to delete the S3 bucket stack. This effectively reduces the coupling between the two stacks, because the queue in the S3 bucket stack is unaware of any Lambda function listeners. (See the second sketch after this list.)
Finally, I think it is still possible to delete the S3 bucket CloudFormation stack first and the image-processing Lambda CloudFormation stack second, although I assume this is not something you typically want to do.
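A minimal sketch of the first option, using the BucketName parameter from your template:
# Under Resources in the combined template:
Bucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  UpdateReplacePolicy: Retain
  Properties:
    BucketName: !Ref BucketName
And a rough sketch of the second option, splitting the stacks and letting S3 notify an SQS queue instead of the function directly. The export name ImageProcessingQueueArn and the resource names are assumptions for illustration; the Lambda function's role would additionally need sqs:ReceiveMessage, sqs:DeleteMessage and sqs:GetQueueAttributes on the queue.
# S3 bucket stack
Resources:
  ImageQueue:
    Type: AWS::SQS::Queue
  ImageQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref ImageQueue
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow                     # allow S3 to deliver notifications to the queue
            Principal:
              Service: s3.amazonaws.com
            Action: 'sqs:SendMessage'
            Resource: !GetAtt ImageQueue.Arn
            Condition:
              ArnLike:
                aws:SourceArn: !Sub "arn:aws:s3:::${BucketName}"
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: ImageQueuePolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:*'
            Queue: !GetAtt ImageQueue.Arn
Outputs:
  ImageQueueArn:
    Value: !GetAtt ImageQueue.Arn
    Export:
      Name: ImageProcessingQueueArn
# In the Lambda function stack, under Resources:
  QueueEventSource:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !ImportValue ImageProcessingQueueArn
      FunctionName: !Ref ImageProcessor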
I'm trying to deploy a parent stack with nested stacks to AWS with CloudFormation. The parent stack looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
VPC:
Description: Choose which VPC the Lambda-functions should be deployed to
Type: AWS::EC2::VPC::Id
Default: vpc-sdjkfnsdjklfn
Subnets:
Description: Choose which subnets Lambda-functions should be deployed to
Type: CommaDelimitedList
Default: "subnet-sdoifno, subnet-sdofjnsdo"
SecurityGroup:
Description: Select the Security Group to use for the Lambda-functions
Type: AWS::EC2::SecurityGroup::Id
Default: sg-sdklfnsdkl
Role:
Description: Role for Lambda functions
Type: String
Default: arn:aws:iam::dlfksd:role/ssdfnsdo
Resources:
RestApi:
Type: AWS::ApiGateway::RestApi
Properties:
Name: "my-api"
Description: "SPP Lambda API"
Stack1:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: 'https://s3.amazonaws.com/bucket/template1.yml'
Parameters:
VPC: !Ref VPC
Subnets: !Join
- ','
- !Ref Subnets
SecurityGroup: !Ref SecurityGroup
Role: !Ref Role
RestApi: !Ref RestApi
ApiResourceParent: !GetAtt "RestApi.RootResourceId"
The child stack looks like this
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
VPC:
Type: AWS::EC2::VPC::Id
Subnets:
Type: CommaDelimitedList
SecurityGroup:
Type: AWS::EC2::SecurityGroup::Id
Role:
Type: String
RestApi:
Type: AWS::ApiGateway::RestApi
ApiResourceParent:
Type: AWS::ApiGateway::Resource
Resources:
Function:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket: bucket
S3Key: node_lambdas.zip
Handler: Function.handler
Role: !Ref Role
Runtime: nodejs6.10
Timeout: 300
VpcConfig:
SecurityGroupIds:
- !Ref SecurityGroup
SubnetIds: !Ref Subnets
#Policies: AWSLambdaDynamoDBExecutionRole
Permission:
Type: AWS::Lambda::Permission
Properties:
Action: lambda:InvokeFunction
FunctionName: !GetAtt "Function.Arn"
Principal: "apigateway.amazonaws.com"
SourceArn: !Sub "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${RestApi}/*/*/*"
Resource:
Type: AWS::ApiGateway::Resource
Properties:
RestApiId: !Ref RestApi
ParentId: !Ref ApiResourceParent
PathPart: addadjustments
When I run aws cloudformation deploy --template-file parent-stack.yml --stack-name spp-lambda --region us-east-1 --capabilities CAPABILITY_IAM, I get the following error:
Embedded stack
arn:aws:cloudformation:us-east-1:771653148224:stack/spp-lambda-Stack1-97M9BLBUM3A5/4a454a50-c274-11e8-b49c-500c28903236
was not successfully created: Parameter validation failed: parameter
type AWS::ApiGateway::RestApi for parameter name RestApi does not
exist, parameter type AWS::ApiGateway::Resource for parameter name
ApiResourceParent does not exist
It doesn't complain about the parameters that are explicitly defined in the parent template. I want the parameters it is complaining about to be created and passed dynamically, as I won't know the values beforehand. What am I doing wrong?
Although some AWS resource types are supported as CloudFormation parameter types, that doesn't mean all resource types are supported.
You are trying to reference an API Gateway value as an AWS-specific parameter type, but it is not supported: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-specific-parameter-types
I believe using String as the type is sufficient.
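For example, a minimal sketch of the child stack's Parameters section with plain String types; the values are the ones your parent stack already passes (!Ref RestApi and !GetAtt RestApi.RootResourceId), so nothing else needs to change there:
Parameters:
  RestApi:
    Type: String   # receives the RestApi ID from the parent's !Ref RestApi
  ApiResourceParent:
    Type: String   # receives the root resource ID from !GetAtt RestApi.RootResourceId
Resources:
  Resource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref RestApi
      ParentId: !Ref ApiResourceParent
      PathPart: addadjustments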
I am having some trouble writing the CloudFormation script for a CloudWatch Events rule to trigger my Lambda function. I know I can do it through the console, but my requirement is to provision everything in CloudFormation. I followed the documentation and it still hasn't worked for me; I keep getting the error:
Template contains errors.: Invalid template property or properties
[rPermissionForEventsToInvokeLambda, rLambdaScheduledRule]
Can someone point out the issue with this part of my CloudFormation script? I followed the documentation almost to the letter and am still getting errors; even the example in the documentation gives the same error when I try to validate it. My CloudFormation code is below; any help is appreciated!
rLambdaScheduledRule:
Type: AWS::Events::Rule
Properties:
ScheduleExpression: rate(1 hour)
State: ENABLED
Targets:
Ref:
Fn::ImportValue:
Fn::Sub: rUploadLambda
Action: lambda:InvokeFunction
rPermissionForEventsToInvokeLambda:
Type: AWS::Lambda::Permission
Properties:
FunctionName:
Ref:
Fn::ImportValue:
Fn::Sub: rUploadLambda
Action: lambda:InvokeFunction
Principal: events.amazonaws.com
SourceArn:
Fn::GetAtt:
- rLambdaScheduledRule
- Arn
1) You must export the Lambda function ARN from the CloudFormation template in which you create the Lambda function. You need to pass the Lambda function ARN as input to the CloudWatch Events rule (the AWS::Events::Rule Targets attribute requires a resource ARN).
See a sample script below:
Resources:
# Create Controlled Lambda Function
myLambda:
Type: "AWS::Lambda::Function"
Properties:
Code:
S3Bucket: "lambda-bucket"
S3Key: "myhandler.zip"
Description: "Lambda handler"
FunctionName: "myhandler"
Handler: myhandler.myhandler
MemorySize: 128
Role: "arn:aws:iam::xxxxxxxxxxx:role/myLambdaExecutionRole-NC7FA7TUSZ5B"
Runtime: "python3.6"
Timeout: 10
# Output of the cf template
Outputs:
myLambdaArn:
Description: Arn of the my_lambda_function
Value: !GetAtt myLambda.Arn
Export:
Name: !Sub "${AWS::StackName}-LambdaArn"
2) When you create the CloudWatch Events rule, pass the ARN of the Lambda function created in step 1 as the target.
See a sample script below:
Resources:
# Cloudwatch event to trigger lambda periodically
rLambdaScheduledRule:
Type: "AWS::Events::Rule"
Properties:
Description: "CloudWatch Event to trigger lambda fn"
ScheduleExpression: "rate(1 hour)"
State: "ENABLED"
Targets:
-
Arn:
Fn::ImportValue:
!Sub "${NetworkStackName}-LambdaArn"
Id: "targetevent_v1"
PermissionForEventsToInvokeLambda:
Type: "AWS::Lambda::Permission"
Properties:
FunctionName:
Fn::ImportValue:
!Sub "${NetworkStackName}-LambdaArn"
Action: "lambda:InvokeFunction"
Principal: "events.amazonaws.com"
SourceArn:
Fn::GetAtt:
- rLambdaScheduledRule
- Arn
The value of ${NetworkStackName} should be the stack name from step 1.
Some of the issues you need to correct in your template:
correct the Targets property of resource rLambdaScheduledRule (it must be a list of Arn/Id pairs);
remove the Action property from resource rLambdaScheduledRule (Action is not a valid property of AWS::Events::Rule);
correct the FunctionName property of resource rPermissionForEventsToInvokeLambda.
Using the samples above as a reference, correct your template and try again.
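For illustration only, here is a sketch of your two resources with those corrections applied, assuming the rUploadLambda export holds the Lambda function ARN (if it exports the function name instead, the Targets entry would need the ARN built from it):
rLambdaScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 hour)
    State: ENABLED
    Targets:
      - Arn: !ImportValue rUploadLambda    # Targets is a list of Arn/Id pairs
        Id: upload-lambda-target           # placeholder target id
rPermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !ImportValue rUploadLambda   # accepts a function name or ARN
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt rLambdaScheduledRule.Arn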