AWS Lambda Template doesn't see my own bucket

I was following this guide to deploy my Lambda. The solution ships its own .template file for deployment. However, I needed to make some code changes to that Lambda, so I uploaded my modified Lambda code to my own bucket and changed the .template to work with my bucket.
The original template
CustomResource:
  S3Bucket: solutions
  S3Key: >-
    serverless-image-handler/v3.0.0/serverless-image-handler-custom-resource.zip
  Name: serverless-image-handler-custom-resource
  Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
  Description: >-
    Serverless Image Handler: CloudFormation custom resource function
    invoked during CloudFormation create, update, and delete stack
    operations.
  Runtime: python2.7
  Timeout: '60'
  MemorySize: '128'
My Customized Template (uses my bucket)
CustomResource:
  S3Bucket: my-bucket
  S3Key: >-
    serverless-image-handler/serverless-image-handler-custom-resource.zip
  Name: serverless-image-handler-custom-resource
  Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
  Description: >-
    Serverless Image Handler: CloudFormation custom resource function
    invoked during CloudFormation create, update, and delete stack
    operations.
  Runtime: python2.7
  Timeout: '60'
  MemorySize: '128'
Of course, in my bucket I put the package under the correct path serverless-image-handler/serverless-image-handler-custom-resource.zip. However, when trying to deploy, I'm getting the following error.
Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 6b666b56-dc62-11e8-acb0-8df0d82e071b)
It's like it can't "see" my own bucket, but it sees the bucket solutions. How can I make it see my bucket?
EDIT
Part of the template where the bucket is defined.
Parameters:
  OriginS3Bucket:
    Description: S3 bucket that will source your images.
    Default: original-images-bucket-name
    Type: String
    ConstraintDescription: Must be a valid S3 Bucket.
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9-.]*'
  OriginS3BucketRegion:
    Description: S3 bucket Region that will source your images.
    Default: eu-central-1
    Type: String
    AllowedValues:
      - ap-south-1
      - ap-northeast-1
      - ap-northeast-2
      - ap-southeast-1
      - ap-southeast-2
      - ca-central-1
      - eu-central-1
      - eu-west-1
      - eu-west-2
      - eu-west-3
      - sa-east-1
      - us-east-1
      - us-east-2
      - us-west-1
      - us-west-2
Part of the template that threw an error.
CustomResource:
  Type: 'AWS::Lambda::Function'
  DependsOn:
    - CustomResourceLoggingPolicy
    - CustomResourceDeployPolicy
  Properties:
    Code:
      S3Bucket: !Join
        - ''
        - - !FindInMap
            - Function
            - CustomResource
            - S3Bucket
          - '-'
          - !Ref 'AWS::Region'
      S3Key: !FindInMap
        - Function
        - CustomResource
        - S3Key
    MemorySize: !FindInMap
      - Function
      - CustomResource
      - MemorySize
    Handler: !FindInMap
      - Function
      - CustomResource
      - Handler
    Role: !GetAtt
      - CustomResourceRole
      - Arn
    Timeout: !FindInMap
      - Function
      - CustomResource
      - Timeout
    Runtime: !FindInMap
      - Function
      - CustomResource
      - Runtime
    Description: !FindInMap
      - Function
      - CustomResource
      - Description
    Environment:
      Variables:
        LOG_LEVEL: INFO
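Note what the Code block above actually resolves to: the S3Bucket value is not taken from the mapping verbatim. The !Join appends '-' plus the stack's region to the mapped name, so the original template fetches from a bucket such as solutions-eu-central-1, and with the edited mapping CloudFormation looks for my-bucket-eu-central-1, which does not exist, hence the NoSuchBucket error. A minimal sketch of the two ways to make the names line up (bucket names here are illustrative, assuming the stack is deployed in eu-central-1 as the parameter defaults suggest):

# Option 1: keep the !Join and create a bucket whose name already carries the
# region suffix the template appends, e.g. my-bucket-eu-central-1, while the
# mapping stays S3Bucket: my-bucket.
#
# Option 2: drop the !Join and point Code at the bucket directly:
Code:
  S3Bucket: my-bucket        # the literal bucket that holds the zip
  S3Key: serverless-image-handler/serverless-image-handler-custom-resource.zip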

Related

Create a lambda from a zip file and an S3 bucket using one CloudFormation template

How do I create an S3 bucket and a lambda in the same CloudFormation template?
The lambda has a lot of lines of code, so it can't be defined inline. Usually I upload the Lambda zip to an S3 bucket and then specify the S3 key of the zip to create the Lambda in my CloudFormation template. How can I do this without having to manually create an S3 bucket beforehand? Basically, what I'm asking is whether there is a temporary storage option in AWS that can be used to upload files to without needing to create an S3 bucket manually.
I tried searching online, but all the results point to uploading the zip file to an S3 bucket and using that in the CloudFormation template to create the Lambda. That doesn't work here, because the S3 bucket also gets created in the same CloudFormation template.
You could do something like the template below, which creates an S3 bucket and a Lambda function (with its code inlined via ZipFile), and adds an event notification that triggers the Lambda function when an object is uploaded into the specified bucket. You can ignore or remove the event notification as needed.
Make sure to swap my code snippet for your own inside the Lambda function.
As far as I know, you either have to create the S3 bucket and upload the file into it beforehand, then use those details to point at your zip file in the Lambda function; or create the S3 bucket through the CloudFormation stack first and then upload the file into it manually once the resources are provisioned.
In my Lambda function, you can see I have provided inline code to zip, but you can still give the S3 bucket and key if you already have the bucket.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
You can also check this example, where they create an S3 object on the fly and point to the bucket that was created. I haven't personally tested it, though, so you may have to check whether you can upload a zip file that way too.
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  LambdaFunctionName:
    Type: String
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9_-]*'
    Description: The name of the Lambda function to be deployed
    Default: convert_csv_to_parquet_v2
  LambdaRoleName:
    Type: String
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[\w+=,.#-]+'
    Description: The name of the IAM role used as the Lambda execution role
    Default: Lambda-Role-CFNExample
  LambdaPolicyName:
    Type: String
    MinLength: '1'
    MaxLength: '128'
    AllowedPattern: '[\w+=,.#-]+'
    Default: Lambda-Policy-CFNExample
  NotificationBucket:
    Type: String
    Description: S3 bucket that's used for the Lambda event notification
Resources:
  ExampleS3:
    Type: AWS::S3::Bucket
    DependsOn: LambdaInvokePermission
    Properties:
      BucketName: !Ref NotificationBucket
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:Put
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: txt
            Function: !GetAtt LambdaFunction.Arn
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Ref LambdaRoleName
      Description: An execution role for a Lambda function launched by CloudFormation
      ManagedPolicyArns:
        - !Ref LambdaPolicy
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
  LambdaPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: !Ref LambdaPolicyName
      Description: Managed policy for a Lambda function launched by CloudFormation
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource: !Join ['', ['arn:', !Ref AWS::Partition, ':logs:', !Ref AWS::Region, ':', !Ref AWS::AccountId, ':log-group:/aws/lambda/', !Ref LambdaFunctionName, ':*']]
          - Effect: Allow
            Action:
              - 'logs:CreateLogGroup'
            Resource: !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:*'
          - Effect: Allow
            Action:
              - 's3:*'
            Resource: '*'
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['', ['/aws/lambda/', !Ref LambdaFunctionName]]
      RetentionInDays: 30
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: Read CSV files from a S3 location and convert them into Parquet
      FunctionName: !Ref LambdaFunctionName
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Runtime: python3.9
      Role: !GetAtt 'LambdaRole.Arn'
      Timeout: 60
      Code:
        ZipFile: |
          # Imports
          # NOTE: pandas is not included in the default python3.9 runtime, so it has to be
          # provided separately (e.g. via a Lambda layer); writing Parquet straight to an
          # s3:// path also needs a filesystem library such as s3fs.
          import pandas
          from urllib.parse import unquote_plus
          import boto3
          import os

          def lambda_handler(event, context):
              print(f'event >> {event}')
              s3 = boto3.client('s3', region_name='us-east-1')
              for record in event['Records']:
                  # Work out which object triggered the notification
                  key = unquote_plus(record['s3']['object']['key'])
                  print(f'key >> {key}')
                  bucket = unquote_plus(record['s3']['bucket']['name'])
                  print(f'bucket >> {bucket}')
                  get_file = s3.get_object(Bucket=bucket, Key=key)
                  get = get_file['Body']
                  print(f'get >> {get}')
                  # Parse the CSV stream into a DataFrame
                  df = pandas.read_csv(get)
                  print('updating columns..')
                  df.columns = df.columns.astype(str)
                  print('saving file to s3 location...')
                  df.to_parquet(f's3://csvtoparquetconverted/{key}.parquet')
                  print('file converted to parquet')
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt LambdaFunction.Arn
      Action: 'lambda:InvokeFunction'
      Principal: s3.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub 'arn:aws:s3:::${NotificationBucket}'
Outputs:
  CLI:
    Description: Use this command to invoke the Lambda function
    Value: !Sub |
      aws lambda invoke --function-name ${LambdaFunction} --payload '{"null": "null"}' lambda-output.txt --cli-binary-format raw-in-base64-out

"Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: us-east-1

I'm trying to follow this workshop, https://gitflow-codetools.workshop.aws/en/. Everything goes well, but when I try to create the Lambda using CloudFormation I get an error:
Resource handler returned message: "Error occurred while GetObject. S3 Error Code:
PermanentRedirect. S3 Error Message: The bucket is in this region:
us-east-1. Please use this region to retry the request (Service: Lambda,
Status Code: 400, Request ID: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx,
Extended Request ID: null)" (RequestToken: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx, HandlerErrorCode: InvalidRequest)
I'm using eu-west-1 for this workshop, but I don't understand why CloudFormation expects the bucket to be in us-east-1.
When I deploy the CloudFormation stack in us-east-1 I don't get this error.
Any idea how I should avoid this error?
The template looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/IAMFullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitReadOnly
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/CloudWatchEventsFullAccess
        - arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
  PipelineCreateLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-create-pipeline'
      Description: 'Lambda Function to create pipelines on branch creation'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-create.zip'
      Handler: 'pipeline-create.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineCreateLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineCreateLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-create-pipeline'
  PipelineDeleteLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-delete-pipeline'
      Description: 'Lambda Function to delete pipelines on branch deletion'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-delete.zip'
      Handler: 'pipeline-delete.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineDeleteLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineDeleteLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-delete-pipeline'
First things first: the S3 bucket holding the deployment package and the Lambda function need to be in the same region.
Secondly, it looks like you're not the bucket owner (judging by the template, you haven't created the bucket yourself).
This means the bucket you're retrieving the Lambda source code from (I assume it comes from the workshop) was created in us-east-1, which forces you to also deploy your stack in us-east-1 if you want to follow the workshop as-is.
But what if you really wanted to deploy this stack to eu-west-1?
You would need to create a bucket in eu-west-1, copy the objects from the workshop bucket into your newly created bucket, and update your CloudFormation template to retrieve the Lambda source code from that bucket, as sketched below (note that you may need to name the bucket differently, since bucket names are globally unique).
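For example, after copying pipeline-create.zip (and likewise pipeline-delete.zip) into a bucket you own in eu-west-1, the Code block would simply point at it; the bucket name below is a placeholder for whatever you call yours:

PipelineCreateLambdaFunction:
  Type: 'AWS::Lambda::Function'
  Properties:
    FunctionName: 'gitflow-workshop-create-pipeline'
    Description: 'Lambda Function to create pipelines on branch creation'
    Code:
      S3Bucket: 'my-gitflow-workshop-artifacts'   # your own bucket, created in eu-west-1
      S3Key: 'pipeline-create.zip'
    Handler: 'pipeline-create.lambda_handler'
    Runtime: 'python3.7'
    Role: !GetAtt LambdaRole.Arn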
I hope that makes it a bit clearer.

Create bucket and lambda trigger in same serverless framework

I want to create an S3 bucket and trigger a Lambda function whenever a file is uploaded to the 'uploads' folder in that bucket. I want to create those resources using the Serverless Framework in AWS.
I have defined my S3 bucket configuration under provider.s3, and I am referencing that bucket under functions.hello.events.bucket.
However, I am getting the following error when I run sls package:
Serverless Error ----------------------------------------
MyS3Bucket - Bucket name must conform to pattern (?!^(\d{1,3}\.){3}\d{1,3}$)(^(([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])\.)*([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])$). Please check provider.s3.MyS3Bucket and/or s3 events of function "hello".
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  s3:
    MyS3Bucket:
      bucketName: ${env:MY_BUCKET_NAME}
      accessControl: Private
      lifecycleConfiguration:
        Rules:
          - Id: ExpireRule
            Status: Enabled
            ExpirationInDays: '7'
package:
  individually: true
functions:
  hello:
    name: my-lambda-function
    handler: function.handler
    memorySize: 128
    timeout: 900
    events:
      - s3:
          bucket: MyS3Bucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
My next try was defining the S3 bucket under resources and referencing that bucket in the Lambda trigger. I am still getting a warning message:
Serverless: Configuration warning at 'functions.hello.events[0].s3.bucket': should be string
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
package:
  individually: true
functions:
  hello:
    name: my-lambda-function
    handler: handler.handler
    memorySize: 128
    timeout: 900
    events:
      - s3:
          bucket:
            Ref: MyS3Bucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
          existing: true
resources:
  Resources:
    MyS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketName: 'test.bucket'
        OwnershipControls:
          Rules:
            - ObjectOwnership: ObjectWriter
        LifecycleConfiguration:
          Rules:
            - Id: ExpireRule
              Status: Enabled
              ExpirationInDays: '7'
You should use your bucket name, not MyS3Bucket:
events:
  - s3:
      bucket: ${env:MY_BUCKET_NAME}
Alternatively, create a custom S3 bucket name variable, e.g.:
custom:
  bucket: foo-thumbnails
And use it in the events section:
events:
  - s3:
      bucket: ${self:custom.bucket}
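Putting that together with the original event configuration, a minimal sketch of the relevant sections might look like this (assuming MY_BUCKET_NAME resolves to an all-lowercase name that satisfies the S3 bucket naming pattern):

custom:
  bucket: ${env:MY_BUCKET_NAME}      # lowercase letters, digits, dots and hyphens only

functions:
  hello:
    name: my-lambda-function
    handler: function.handler
    memorySize: 128
    timeout: 900
    events:
      - s3:
          bucket: ${self:custom.bucket}   # a plain string, not a Ref to a resource
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/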

AWS SAM Unable to call Rekognition and access S3 from Lambda

I am trying to call the detectText method from the Rekognition framework, and it fails when accessing the S3 bucket. I am not sure how to grant the roles in the SAM template. Below is my SAM template:
GetTextFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: gettextfn/
    Handler: text.handler
    Runtime: nodejs12.x
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        imagebucket: !Ref s3bucket
    Events:
      TextApiEvent:
        Type: HttpApi
        Properties:
          Path: /gettext
          Method: get
          ApiId: !Ref myapi
Looks like your Lambda needs the RekognitionDetectOnlyPolicy, and it also looks like you're missing the policy to read/write data from the S3 bucket. Have a look at the Policies: section below, added after Environment:
Environment:
  Variables:
    imagebucket: !Ref s3bucket
Policies:
  - S3ReadPolicy:
      BucketName: !Ref s3bucket
  - RekognitionDetectOnlyPolicy: {}
Events:
You can refer to the complete list of AWS SAM policy templates here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
Also have a look at a sample template here
https://github.com/rollendxavier/serverless_computing/blob/main/template.yaml
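Slotted into the function from the question, the result would look roughly like this (resource names kept exactly as in the question's template):

GetTextFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: gettextfn/
    Handler: text.handler
    Runtime: nodejs12.x
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        imagebucket: !Ref s3bucket
    Policies:
      - S3ReadPolicy:
          BucketName: !Ref s3bucket
      - RekognitionDetectOnlyPolicy: {}
    Events:
      TextApiEvent:
        Type: HttpApi
        Properties:
          Path: /gettext
          Method: get
          ApiId: !Ref myapi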

Trouble adding an s3 event trigger to my lambda function with SAM

I am trying to get my Lambda to run when an image is added to a "folder" in an S3 bucket. Here is the template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  1. Creates the S3 bucket that stores the images from the camera.
  2. Resizes the images when a new image shows up from a camera.
  3. Adds a record of the image in the DB.
Globals:
  Function:
    Timeout: 10
Parameters:
  DeploymentStage:
    Type: String
    Default: production
Resources:
  CameraImagesBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub
        - com.wastack.camera.images.${stage}
        - { stage: !Ref DeploymentStage }
  CreateThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: image_resize/
      Handler: app.lambda_handler
      Runtime: python3.8
      Description: Creates a thumbnail of images in the camera_images bucket
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub
              - com.wastack.camera.images.${stage}
              - { stage: !Ref DeploymentStage }
        - S3WritePolicy:
            BucketName: !Sub
              - com.wastack.camera.images.${stage}
              - { stage: !Ref DeploymentStage }
      Events:
        CameraImageEvent:
          Type: S3
          Properties:
            Bucket:
              Ref: CameraImagesBucket
            Events:
              - 's3:ObjectCreated:*'
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: camera_images
When I look at the Lambda created in the AWS console, I do not see the trigger, even in the Lambda visualiser. The Lambda doesn't even have the S3 read and write policies attached to it.
The S3 bucket and the Lambda are created, but the policies and triggers that are supposed to connect them are not.
I did not get any errors when I ran sam deploy.
Question: why did it not attach the S3 trigger event or the S3 access policies to the Lambda function?
Policies for S3: the template itself is straightforward. If you paste in the full template, does it work? If that is also failing, check the permissions of the identity you're running SAM as. There's also an open ticket on GitHub that appears to describe your issue; see the comments there.