I have created a CloudFormation template that configures an S3 bucket with an event notification that calls a Lambda function. The Lambda is triggered whenever a new object is created in the bucket.
The problem I have is that when I delete the stack, the bucket is also deleted. For debugging and testing purposes I have had to delete the stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ImageProcessor.Arn
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref ImageProcessor
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
To resolve this, I separated the two resources into separate templates using Outputs. The problem with this is that I cannot delete the Lambda function stack, because it is being referenced by the Bucket stack.
I want to know what the right approach is. Is it really required to separate these two resources? I believe the Lambda function will need to change frequently.
If yes, what is the correct way to do it?
If not, how should I handle the need to make changes?
The approach using Outputs and Imports will always create the dependency and will not allow deletion. This is generic behavior for any resource. How do we deal with deletion in this case? Is it good to use this approach?
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
Outputs:
  ImageProcessingARN:
    Description: ARN of the function
    Value:
      Fn::Sub: ${ImageProcessor.Arn}
    Export:
      Name: ImageProcessingARN
  ImageProcessingName:
    Description: Name of the function
    Value: !Ref ImageProcessor
    Export:
      Name: ImageProcessingName
AWSTemplateFormatVersion: '2010-09-09'
Description: Test
Parameters:
  BucketName:
    Description: Name of the bucket
    Type: String
    Default: imageprocess-bucket
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function:
              Fn::ImportValue: ImageProcessingARN
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName:
        Fn::ImportValue: ImageProcessingName
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
There is no such thing as the right approach; it almost always depends on your unique situation. Strictly speaking, it is not required to separate the resources into different CloudFormation templates. A Lambda function that changes a lot is also not, on its own, a sufficient reason for separating the resources.
You seem to be separating the resources correctly into two different stacks. You just do not like that you have to delete the S3 bucket first, because it makes debugging more difficult.
If my assumption is correct that you want to delete or update the Lambda CloudFormation stack frequently while not wanting to delete the S3 bucket, then there are at least two solutions to this problem:
Put a DeletionPolicy and an UpdateReplacePolicy on your S3 bucket. By adding these policies you can delete the CloudFormation stack while keeping the S3 bucket, which allows you to keep the S3 bucket and the Lambda function in one CloudFormation template (sketched below). To create the stack again, remove the S3 bucket resource from the template and manually import the resource back into the CloudFormation stack later.
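A minimal sketch of what the bucket resource could look like with these policies, reusing the BucketName parameter and Bucket logical ID from the template above:
  Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketName: !Ref BucketName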
Use a QueueConfiguration as the NotificationConfiguration. This is a good approach if you plan on separating the CloudFormation template into an S3 bucket template and a Lambda function template (a decision based on frequency of change and dependencies between the two templates). Put an SQS queue in the S3 bucket template, create the CloudFormation stack based on the S3 bucket template, then use the SQS ARN (as a CloudFormation template parameter, or via the ImportValue intrinsic function) in the Lambda function stack and let SQS trigger the Lambda function. I think this is the best approach, since you can now remove the Lambda function stack without having to delete the S3 bucket stack. This way you effectively reduce the coupling between the two CloudFormation stacks, because the SQS queue in the S3 bucket stack is unaware of potential Lambda function listeners. A sketch of the bucket-stack side follows.
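A rough sketch of that bucket stack, assuming a BucketName parameter as before; the UploadQueue, UploadQueuePolicy, and UploadQueueArn names are made up for illustration:
Resources:
  UploadQueue:
    Type: AWS::SQS::Queue
  UploadQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref UploadQueue
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: 'sqs:SendMessage'
            Resource: !GetAtt UploadQueue.Arn
            Condition:
              ArnLike:
                'aws:SourceArn': !Sub "arn:aws:s3:::${BucketName}"
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: UploadQueuePolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:*'
            Queue: !GetAtt UploadQueue.Arn
Outputs:
  UploadQueueArn:
    Value: !GetAtt UploadQueue.Arn
    Export:
      Name: UploadQueueArn
In the Lambda function stack you could then consume the queue with an AWS::Lambda::EventSourceMapping whose EventSourceArn imports UploadQueueArn, and give the execution role the sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes permissions on the queue.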
It is also still possible to delete the S3 bucket CloudFormation stack first and the Image Processing Lambda CloudFormation stack second, although I assume this is not something you typically want to do.
Related
How do I create an S3 bucket and a Lambda in the same CloudFormation template?
The Lambda has a lot of lines of code, so it can't be written inline. Usually I upload the Lambda zip to an S3 bucket and then specify the S3 key of the zip to create the Lambda in my CloudFormation template. How can I do this without having to manually create an S3 bucket beforehand? Basically what I'm asking is whether there is a temporary storage option in AWS that files can be uploaded to without needing to create an S3 bucket manually.
I tried searching online, but all the results point to uploading the zip file to an S3 bucket and using that in the CloudFormation template to create the Lambda. That doesn't work here because the S3 bucket also gets created in the same CloudFormation template.
You could do something like the template below, which creates an S3 bucket and a Lambda function, zips the inline code, and creates an event notification that triggers the Lambda function when an object is uploaded into the specified bucket. I've included the event notification as well; you can keep or remove it accordingly.
Make sure to replace my code snippet with your own inside the Lambda function.
As far as I know, you either have to create the S3 bucket and upload the file into it beforehand and use those details to point your Lambda function at the zip file, or create the S3 bucket through CloudFormation first and then upload the file into it manually once the resources are provisioned.
In my Lambda function you can see that I have provided inline code via ZipFile, but you can still give the S3 bucket and key if you already have the bucket.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
You can also check this approach, where an S3 object is created on the fly and pointed at the bucket that was created. I haven't personally tested it, so you may have to test whether you can upload a zip file that way too.
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  LambdaFunctionName:
    Type: String
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9_-]*'
    Description: The name of the Lambda function to be deployed
    Default: convert_csv_to_parquet_v2
  LambdaRoleName:
    Type: String
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[\w+=,.#-]+'
    Description: The name of the IAM role used as the Lambda execution role
    Default: Lambda-Role-CFNExample
  LambdaPolicyName:
    Type: String
    MinLength: '1'
    MaxLength: '128'
    AllowedPattern: '[\w+=,.#-]+'
    Default: Lambda-Policy-CFNExample
  NotificationBucket:
    Type: String
    Description: S3 bucket that's used for the Lambda event notification
Resources:
  ExampleS3:
    Type: AWS::S3::Bucket
    DependsOn: LambdaInvokePermission
    Properties:
      BucketName: !Ref NotificationBucket
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:Put
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: txt
            Function: !GetAtt LambdaFunction.Arn
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Ref LambdaRoleName
      Description: An execution role for a Lambda function launched by CloudFormation
      ManagedPolicyArns:
        - !Ref LambdaPolicy
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
  LambdaPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: !Ref LambdaPolicyName
      Description: Managed policy for a Lambda function launched by CloudFormation
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource: !Join ['',['arn:', !Ref AWS::Partition, ':logs:', !Ref AWS::Region, ':', !Ref AWS::AccountId, ':log-group:/aws/lambda/', !Ref LambdaFunctionName, ':*']]
          - Effect: Allow
            Action:
              - 'logs:CreateLogGroup'
            Resource: !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:*'
          - Effect: Allow
            Action:
              - 's3:*'
            Resource: '*'
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['',['/aws/lambda/', !Ref LambdaFunctionName]]
      RetentionInDays: 30
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: Read CSV files from a S3 location and convert them into Parquet
      FunctionName: !Ref LambdaFunctionName
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Runtime: python3.9
      Role: !GetAtt 'LambdaRole.Arn'
      Timeout: 60
      Code:
        ZipFile: |
          # Imports
          import pandas
          from urllib.parse import unquote_plus
          import boto3
          import os

          def lambda_handler(event, context):
              print(f'event >> {event}')
              s3 = boto3.client('s3', region_name='us-east-1')
              for record in event['Records']:
                  key = unquote_plus(record['s3']['object']['key'])
                  print(f'key >> {key}')
                  bucket = unquote_plus(record['s3']['bucket']['name'])
                  print(f'bucket >> {bucket}')
                  get_file = s3.get_object(Bucket=bucket, Key=key)
                  get = get_file['Body']
                  print(f'get >> {get}')
                  # Read the CSV body into a DataFrame (assumes the uploaded object is a CSV file)
                  df = pandas.read_csv(get)
                  print('updating columns..')
                  df.columns = df.columns.astype(str)
                  print('saving file to s3 location...')
                  df.to_parquet(f's3://csvtoparquetconverted/{key}.parquet')
                  print('file converted to parquet')
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt LambdaFunction.Arn
      Action: 'lambda:InvokeFunction'
      Principal: s3.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub 'arn:aws:s3:::${NotificationBucket}'
Outputs:
  CLI:
    Description: Use this command to invoke the Lambda function
    Value: !Sub |
      aws lambda invoke --function-name ${LambdaFunction} --payload '{"null": "null"}' lambda-output.txt --cli-binary-format raw-in-base64-out
I am trying to delete an AWS Lambda application, but the delete continuously fails with the following message:
"Role arn:aws:iam::070781698397:role/CloudFormationRole-demo-app-us-east-1 is invalid or cannot be assumed"
I previously deleted the above-mentioned CloudFormation role and the Lambda function "helloFromLambda" (seen below). I have attempted to recreate the CloudFormation role, hoping this would facilitate the deletion process (as suggested in another post).
Here is my Lambda function template:
AWSTemplateFormatVersion: 2010-09-09
Description: Start from scratch starter project
Parameters:
  AppId:
    Type: String
Resources:
  helloFromLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Code:
        S3Bucket: aws-us-east-1-070781698397-demo-app-pipe
        S3Key: b42fa7cf7096fa092274e9524d6cadda
      Description: A Lambda function that returns a static string.
      Handler: src/handlers/hello-from-lambda.helloFromLambdaHandler
      MemorySize: 128
      Role: !GetAtt
        - helloFromLambdaFunctionRole
        - Arn
      Runtime: nodejs16.x
      Timeout: 60
      Tags:
        - Key: 'lambda:createdBy'
          Value: SAM
  helloFromLambdaFunctionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Action:
              - 'sts:AssumeRole'
            Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      PermissionsBoundary: !Sub >-
        arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/${AppId}-${AWS::Region}-PermissionsBoundary
      Tags:
        - Key: 'lambda:createdBy'
          Value: SAM
I have tried deleting through the AWS Management Console and through the AWS CLI.
For a while now, AWS Lambdas can be exposed as web services directly, without an API Gateway.
This works fine when setting it up through the UI console, but I can't seem to get it done with CloudFormation, because the resource policy is not attached when AuthType: NONE is used. And without the policy, I get "message": "Forbidden" from AWS when trying to access the Lambda through the function URL.
My Lambda is the following:
exports.handler = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify("Hello World")
  }
}
and here’s the CFN template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
stackName:
Type: String
lambdaFile:
Type: String
lambdaBucket:
Type: String
Resources:
lambdaRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Action:
- "sts:AssumeRole"
Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Policies:
- PolicyDocument:
Version: "2012-10-17"
Statement:
- Action:
- "logs:CreateLogGroup"
- "logs:CreateLogStream"
- "logs:PutLogEvents"
Effect: "Allow"
Resource:
- !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${stackName}:*"
PolicyName: "lambda"
runtimeLambdaFunction:
Type: "AWS::Lambda::Function"
Properties:
Code:
S3Bucket: !Ref lambdaBucket
S3Key: !Ref lambdaFile
Environment:
Variables:
NODE_ENV: production
FunctionName: !Sub "${stackName}-runtime"
Handler: runtime.handler
MemorySize: 128
Role: !GetAtt lambdaRole.Arn
Runtime: "nodejs14.x"
Timeout: 5
lambdaLogGroup:
Type: "AWS::Logs::LogGroup"
Properties:
LogGroupName: !Sub "/aws/${stackName}"
RetentionInDays: 30
runtimeLambdaUrl:
Type: "AWS::Lambda::Url"
Properties:
AuthType: NONE
TargetFunctionArn: !Ref runtimeLambdaFunction
Outputs:
runtimeLambdaUrl:
Value: !GetAtt runtimeLambdaUrl.FunctionUrl
The interesting thing is that I can add the policy through the UI console, and then it works.
(Screenshots omitted.) The function URL configuration right after the CFN deployment shows no resource-based policy. After opening "Edit" and clicking "Save" in the console without changing anything, the console indicates that the required resource-based policy has been added, and it is visible when entering "Edit" mode again.
After that, the function can be accessed via its URL.
I tried to add the policy to my CFN stack, either standalone as AWS::IAM::Policy (but then it is not a resource-based policy) or as an additional statement on the lambdaRole. In either case I cannot add a Principal, and the policy has no effect.
Does anybody know how I can make a pure CloudFormation deployment for a Lambda with a function URL? Or is this a bug in CloudFormation and/or Lambda?
Your template is missing an AWS::Lambda::Permission resource, which is why it does not work. You already know what the permissions should be based on your AWS console inspection, so you have to recreate those permissions using AWS::Lambda::Permission, which lets you specify FunctionUrlAuthType. A sketch is shown below.
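A minimal sketch of such a permission for the template above; the logical ID runtimeLambdaUrlPermission is made up, and Principal "*" together with FunctionUrlAuthType NONE makes the URL publicly invokable:
  runtimeLambdaUrlPermission:
    Type: "AWS::Lambda::Permission"
    Properties:
      Action: "lambda:InvokeFunctionUrl"
      FunctionName: !Ref runtimeLambdaFunction
      FunctionUrlAuthType: NONE
      Principal: "*"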
I am trying to follow this workshop: https://gitflow-codetools.workshop.aws/en/. Everything went well until I tried to create the Lambda using CloudFormation, at which point I got an error:
Resource handler returned message: "Error occurred while GetObject. S3 Error Code:
PermanentRedirect. S3 Error Message: The bucket is in this region:
us-east-1. Please use this region to retry the request (Service: Lambda,
Status Code: 400, Request ID: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx,
Extended Request ID: null)" (RequestToken: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx, HandlerErrorCode: InvalidRequest)
I'm using eu-west-1 for this workshop, but I don't understand why the bucket is in us-east-1.
When I deploy the CloudFormation stack in us-east-1 I don't get this error.
Any idea how I should avoid this error?
The template looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/IAMFullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitReadOnly
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/CloudWatchEventsFullAccess
        - arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
  PipelineCreateLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-create-pipeline'
      Description: 'Lambda Function to create pipelines on branch creation'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-create.zip'
      Handler: 'pipeline-create.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineCreateLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineCreateLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-create-pipeline'
  PipelineDeleteLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-delete-pipeline'
      Description: 'Lambda Function to delete pipelines on branch deletion'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-delete.zip'
      Handler: 'pipeline-delete.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineDeleteLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineDeleteLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-delete-pipeline'
First things first: the Lambda function and the S3 bucket holding its deployment package need to be in the same region.
Secondly, it looks like you're not the bucket owner (judging by the template, you haven't created the bucket yourself).
This means the bucket you're retrieving the Lambda source code from (I suppose it comes from the workshop) was created by its owners in the region us-east-1, which forces you to also deploy your stack in us-east-1 if you want to follow the workshop.
But what if you really want to deploy this stack to eu-west-1?
You would need to create a bucket in eu-west-1, copy the objects from the workshop bucket into your newly created bucket, and update your CloudFormation template to retrieve the Lambda source code from that new bucket (note that you might need to name the bucket differently, since bucket names are globally shared). A rough sketch of those steps is shown below.
I hope this is a bit clear.
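A rough sketch of those steps with the AWS CLI; the target bucket name is made up, and it assumes the workshop bucket allows you to read its objects:
# Create your own bucket in eu-west-1 (the name must be globally unique)
aws s3 mb s3://my-gitflow-workshop-eu-west-1 --region eu-west-1
# Copy the workshop Lambda packages into your bucket
aws s3 cp s3://aws-workshop-gitflow/pipeline-create.zip s3://my-gitflow-workshop-eu-west-1/pipeline-create.zip
aws s3 cp s3://aws-workshop-gitflow/pipeline-delete.zip s3://my-gitflow-workshop-eu-west-1/pipeline-delete.zip
Then update the S3Bucket value in both Code sections of the template to point at your bucket.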
This is my CFT for a Lambda function. I upload the jar file to S3 and then create the Lambda from S3. I have completed the LambdaRole and LambdaFunction sections; in the permission section, what should the SourceArn be? I went through the official Lambda docs but didn't find an example.
Also, can anyone take a look to see whether this CFT is correct? Thanks!
ConfigurationLambdaRole:
  Type: "AWS::IAM::Role"
  Properties:
    RoleName: 'configuration-sqs-lambda'
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
              - events.amazonaws.com
              - s3.amazonaws.com
          Action:
            - sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSQSFullAccess
      - arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
ConfigurationLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Description: 'configuration service with lambda'
    FunctionName: 'configuration-lambda'
    Handler: com.lambda.handler.EventHandler::handleRequest
    Runtime: java11
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: configurationlambda
      S3Key: lambda-service-1.0.0-SNAPSHOT.jar
    Role: !GetAtt ConfigurationLambdaRole.Arn
ConfigurationLambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName:
      Fn::GetAtt:
        - ConfigurationLambdaFunction
        - Arn
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    SourceArn: ''
You are creating a role to run your Lambda, a Lambda function, and a permission for something to invoke that Lambda. The SourceArn identifies the thing that will invoke the Lambda. In your case it sounds like you want an S3 bucket to invoke the Lambda, so the SourceArn would be the ARN of that S3 bucket (see the sketch below).
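A minimal sketch of the permission with a placeholder bucket ARN; replace your-notification-bucket-name with the bucket that will actually send the events:
ConfigurationLambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt ConfigurationLambdaFunction.Arn
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    # Placeholder: the ARN of the S3 bucket that will invoke the Lambda
    SourceArn: 'arn:aws:s3:::your-notification-bucket-name'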
This tutorial relates to what you are doing--specifically step 8 under the "Create the Lambda function" section.
Your CF template generally looks correct. The only service I see assuming this role is lambda.amazonaws.com, so the role may not need to list the following in the AssumeRolePolicyDocument section (a trimmed example follows the list):
- events.amazonaws.com
- s3.amazonaws.com
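A trimmed sketch of the trust policy, keeping only the Lambda service principal:
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        Service:
          - lambda.amazonaws.com
      Action:
        - sts:AssumeRole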
Also see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-permission.html