Trouble adding an s3 event trigger to my lambda function with SAM

I am trying to get my Lambda function to run when an image is added to a "folder" in an S3 bucket. Here is the template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: |
  1. Creates the S3 bucket that stores the images from the camera.
  2. Resizes the images when a new image shows up from a camera.
  3. Adds a record of the image in the DB.
Globals:
  Function:
    Timeout: 10
Parameters:
  DeploymentStage:
    Type: String
    Default: production
Resources:
  CameraImagesBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub
        - com.wastack.camera.images.${stage}
        - { stage: !Ref DeploymentStage }
  CreateThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: image_resize/
      Handler: app.lambda_handler
      Runtime: python3.8
      Description: Creates a thumbnail of images in the camera_images bucket
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub
              - com.wastack.camera.images.${stage}
              - { stage: !Ref DeploymentStage }
        - S3WritePolicy:
            BucketName: !Sub
              - com.wastack.camera.images.${stage}
              - { stage: !Ref DeploymentStage }
      Events:
        CameraImageEvent:
          Type: S3
          Properties:
            Bucket:
              Ref: CameraImagesBucket
            Events:
              - 's3:ObjectCreated:*'
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: camera_images
When I look at the Lambda created in the AWS console, I do not see the trigger, even in the Lambda visualiser. The Lambda doesn't even have the S3 read and write policies attached to it.
The S3 bucket and the Lambda are created, but the policies and triggers that are supposed to connect them are not.
I did not get any error when I ran sam deploy.
Question: why did it not attach the S3 trigger event or the S3 access policies to the Lambda function?

The template itself is straightforward. If you deploy the full template exactly as posted, does it work? If that also fails, check the permissions of the identity you're running SAM as. There is also an open ticket on GitHub that appears to match your issue; see the comments there.
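For comparison, here is a minimal sketch of the usual SAM S3-trigger pattern: the bucket is defined in the same template, the event references it with Ref, and the policy templates use a plain parameter for the bucket name rather than a Ref to the bucket resource (a Ref there can create a circular dependency between the bucket notification and the function's role). All names below are placeholders.

Parameters:
  ImagesBucketName:
    Type: String
    Default: com.example.camera.images   # placeholder name
Resources:
  CameraImagesBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref ImagesBucketName
  CreateThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: image_resize/
      Handler: app.lambda_handler
      Runtime: python3.8
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref ImagesBucketName   # the parameter, not a Ref to the bucket resource
      Events:
        CameraImageEvent:
          Type: S3
          Properties:
            Bucket: !Ref CameraImagesBucket     # must be a bucket defined in this template
            Events: s3:ObjectCreated:*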

Related

"Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: us-east-1

I am trying to follow this workshop: https://gitflow-codetools.workshop.aws/en/. Everything went well until I tried to create the Lambda using CloudFormation, when I got an error:
Resource handler returned message: "Error occurred while GetObject. S3 Error Code:
PermanentRedirect. S3 Error Message: The bucket is in this region:
us-east-1. Please use this region to retry the request (Service: Lambda,
Status Code: 400, Request ID: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx,
Extended Request ID: null)" (RequestToken: xxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxx, HandlerErrorCode: InvalidRequest)
I'm using eu-west-1 for this workshop, but I don't understand why CloudFormation expects the bucket to be in us-east-1.
When I deploy the stack in us-east-1 I don't get this error.
Any idea how I should avoid this error?
The template looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/IAMFullAccess
        - arn:aws:iam::aws:policy/AWSLambda_FullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitReadOnly
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/CloudWatchEventsFullAccess
        - arn:aws:iam::aws:policy/AWSCloudFormationFullAccess
  PipelineCreateLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-create-pipeline'
      Description: 'Lambda Function to create pipelines on branch creation'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-create.zip'
      Handler: 'pipeline-create.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineCreateLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineCreateLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-create-pipeline'
  PipelineDeleteLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: 'gitflow-workshop-delete-pipeline'
      Description: 'Lambda Function to delete pipelines on branch deletion'
      Code:
        S3Bucket: 'aws-workshop-gitflow'
        S3Key: 'pipeline-delete.zip'
      Handler: 'pipeline-delete.lambda_handler'
      Runtime: 'python3.7'
      Role:
        Fn::GetAtt:
          - LambdaRole
          - Arn
  PipelineDeleteLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: PipelineDeleteLambdaFunction
    Properties:
      Action: 'lambda:InvokeFunction'
      Principal: "codecommit.amazonaws.com"
      FunctionName: 'gitflow-workshop-delete-pipeline'
First things first, Lambda and S3 need to be in the same region.
Secondly, it looks like you're not the bucket owner: judging by the template, you didn't create the bucket yourself.
That means the bucket you're retrieving the Lambda source code from comes from the workshop, and its authors created it in the region us-east-1, which forces you to also deploy your stack in us-east-1 (if you want to follow the workshop as-is).
But what if you really wanted to deploy this stack to eu-west-1?
That would mean you need to create a bucket in eu-west-1, copy the objects from the workshop bucket into your newly created bucket, and update your CloudFormation template to retrieve the Lambda source code from it (note you might need to name the bucket differently, as bucket names are globally unique).
I hope this makes it a bit clearer.
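Once the artifacts are copied over, each Code section would simply point at your own bucket, for example (the bucket name here is a hypothetical placeholder):

Code:
  S3Bucket: 'my-gitflow-workshop-eu-west-1'   # hypothetical bucket you created in eu-west-1
  S3Key: 'pipeline-create.zip'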

How to dynamically pass codeUri in SAM template

I am trying to deploy a Lambda from a zip that contains a jar file. If a static artifact name is provided in CodeUri, it works fine, but the artifact is not static: the version in the jar file's name (e.g. abc-<1.x.x>-prod.jar) changes with every new build.
So I want to pass the artifact name in CodeUri as a dynamic value rather than a static one.
I tried splitting it into Bucket and Key and passing the value as a parameter, but deployment fails with NoSuchKey.
Edit: Adding Sample Template
Transform: AWS::Serverless-2016-10-31
Description: engine-service
Parameters:
  Environment:
    Type: String
    Default: ""
  SecurityGroupIds:
    Type: String
    Default: ""
  SubnetIds1:
    Type: String
    Default: ""
  SubnetIds2:
    Type: String
    Default: ""
  DBSubnetGroupName:
    Type: String
    Default: ""
  RDSSecret:
    Type: String
    Default: ""
  RDSInstance:
    Type: String
    Default: ""
  API:
    Type: String
    Default: ""
Globals:
  Function:
    Timeout: 120
Resources:
  TranslationEngineLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub "translation-engine-service-${Environment}"
      CodeUri: target/abc-1.0.0-SNAPSHOT-prod.jar
      Handler: com.abc.Main
      Runtime: java11
      MemorySize: 1024
      Environment:
        Variables:
          BUCKET_NAME: "abc-dummy"
          DB_SECRET: "abc-dummy"
          FUNCTION_NAME: TranslateFunction
          SPRING_PROFILES_ACTIVE: db
          TEXT_EXTRACT_LAMBDA: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:text-extract-service-${Environment}
          TRANSLATE_OPTION: AWS
      VpcConfig:
        SecurityGroupIds:
          - !Ref SecurityGroupIds
        SubnetIds:
          - !Ref SubnetIds1
          - !Ref SubnetIds2
      Policies:
        - AWSLambda_FullAccess
        - AmazonEC2FullAccess
        - SecretsManagerReadWrite
        - AmazonS3ReadOnlyAccess
        - AmazonRDSFullAccess
  TranslationEngineLambdaInvoke:
    Type: "AWS::Lambda::Permission"
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName: !GetAtt "TranslationEngineLambda.Arn"
      Principal: "apigateway.amazonaws.com"
      SourceArn: !Join ['', ['arn:aws:execute-api:MyRegion:MyAccountNumber:', Fn::ImportValue: !Ref API, '/*/POST/language-translator/v1/translate']]
Outputs:
  TranslationEngineLambda:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt TranslationEngineLambda.Arn
  TranslationEngineLambdaIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt TranslationEngineLambdaRole.Arn
Your question encompasses a few things.
First, if you use CodeUri with a relative path, as you do in the template, AWS SAM resolves that path relative to the directory in which the template resides to find the required files. If you use Bucket/Key instead, AWS SAM looks in S3 for the specified Key in the given Bucket. This is of course an entirely different way of working, and it assumes that you've already uploaded the artefact to that location yourself. You've presumably not done this, which results in the NoSuchKey error.
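To illustrate the two forms (the bucket and key names below are hypothetical placeholders):

# Form 1: a local path, resolved relative to the template's directory
CodeUri: target/abc-1.0.0-prod.jar

# Form 2: an S3 location; the artefact must already exist at this key,
# otherwise deployment fails with NoSuchKey
CodeUri:
  Bucket: my-artifact-bucket
  Key: abc-1.0.0-prod.jar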
One of the more useful things about AWS SAM is that you can not only use it to deploy your code, but also to build the artefacts themselves. In that case, you point CodeUri at the root of the folder in which your Lambda function code resides. AWS SAM will then, in the build step, create the necessary artefact (be it a jar or a zip). During deployment, it uploads those artefacts to S3, updates the CodeUri values to match, and deploys the CloudFormation stack.
I don't think you can use CloudFormation parameters (with !Sub, !Join or similar) in a relative CodeUri path, since parameters are only interpreted in the cloud, not during the AWS SAM build or package steps. So if you do not want to rely on AWS SAM to build your artefacts, you're probably better off also uploading them yourself.
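A sketch of the sam build approach, assuming a standard Maven layout: CodeUri points at the source directory, so the changing jar name never appears in the template.

TranslationEngineLambda:
  Type: AWS::Serverless::Function
  Properties:
    # Point at the directory containing pom.xml; `sam build` compiles the
    # function and rewrites CodeUri to the built artefact before deployment.
    CodeUri: .
    Handler: com.abc.Main
    Runtime: java11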

AWS SAM Unable to call Rekognition and access S3 from Lambda

I am trying to call the detectText method from the Rekognition framework, and it fails when accessing the S3 bucket. I am not sure how to grant the roles in the SAM template. Below is my SAM template:
GetTextFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: gettextfn/
    Handler: text.handler
    Runtime: nodejs12.x
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        imagebucket: !Ref s3bucket
    Events:
      TextApiEvent:
        Type: HttpApi
        Properties:
          Path: /gettext
          Method: get
          ApiId: !Ref myapi
Looks like your Lambda needs the RekognitionDetectOnlyPolicy, and it also looks like you're missing the policy to read data from the S3 bucket. Have a look at the Policies: block below, added after Environment:
Environment:
  Variables:
    imagebucket: !Ref s3bucket
Policies:
  - S3ReadPolicy:
      BucketName: !Ref s3bucket
  - RekognitionDetectOnlyPolicy: {}
Events:
You can refer to the complete list of AWS SAM policy templates here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
Also have a look at a sample template here:
https://github.com/rollendxavier/serverless_computing/blob/main/template.yaml
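Put together, the function resource from the question would look roughly like this (a sketch combining the question's resource with the suggested policies; s3bucket and myapi are the resources already defined in your template):

GetTextFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: gettextfn/
    Handler: text.handler
    Runtime: nodejs12.x
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        imagebucket: !Ref s3bucket
    Policies:
      - S3ReadPolicy:
          BucketName: !Ref s3bucket
      - RekognitionDetectOnlyPolicy: {}   # SAM policy template; takes no parameters, hence {}
    Events:
      TextApiEvent:
        Type: HttpApi
        Properties:
          Path: /gettext
          Method: get
          ApiId: !Ref myapi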

Instead of referring to an existing AWS S3 bucket, CloudFormation is trying to create the bucket

I'm stuck on a weird issue. I created an AWS S3 bucket using the following CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  License: Unlicensed
Description: >
  This template creates a globally unique S3 bucket in a specific region.
  The bucket name is formed from the environment, account ID and region
Parameters:
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
  Environment:
    Description: This parameter will accept the environment details from the user
    Type: String
    Default: sbx
    AllowedValues:
      - sbx
      - dev
      - qa
      - e2e
      - prod
    ConstraintDescription: Invalid environment. Please select one of the given environments only
Resources:
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html
  MyS3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub 'global-bucket-${Environment}-${AWS::Region}-${AWS::AccountId}' # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
      AccessControl: Private
      LoggingConfiguration:
        DestinationBucketName: !Ref 'LoggingBucket'
        LogFilePrefix: 'access-logs'
      Tags:
        - Key: name
          Value: globalbucket
        - Key: department
          Value: engineering
  LoggingBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub 'global-loggings-${Environment}-${AWS::Region}-${AWS::AccountId}'
      AccessControl: LogDeliveryWrite
Outputs:
  GlobalS3Bucket:
    Description: A private S3 bucket with deletion policy as retain and logging configuration
    Value: !Ref MyS3Bucket
    Export:
      Name: global-bucket
Note that in the template above I'm exporting this S3 bucket in the Outputs section under the name global-bucket.
Now, my intention is to refer to this existing bucket going forward in my AWS account whenever any new resource, like a Lambda, needs an S3 bucket. Here is an example using AWS SAM (Serverless Application Model), where I'm trying to create an AWS Lambda function and refer to this existing bucket using !ImportValue with the export name global-bucket, as shown below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  hellolambda
  Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloLambdaEvent:
          Type: S3
          Properties:
            Bucket: !Ref SrcBucket
            Events: s3:ObjectCreated:*
  SrcBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !ImportValue global-bucket
Now, the problem is that when I run sam build and then sam deploy --guided, selecting the same region (where my previous CloudFormation stack's output is present), I get the following error:
global-bucket-sbx-ap-southeast-1-088853283839 already exists in stack arn:aws:cloudformation:ap-southeast-1:088853283839:stack/my-s3-global-bucket/aabd20e0-f57d-11ea-80bf-06f1487f6a64
The problem is AWS CloudFormation is trying to create the S3 bucket rather than referring to the existing one.
But if I try to update this SAM template and then execute sam deploy, I get the following error:
Waiting for changeset to be created..
Error: Failed to create changeset for the stack: my-lambda-stack, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [HelloWorldFunction] is invalid. Event with id [HelloLambdaEvent] is invalid. S3 events must reference an S3 bucket in the same template.
I'm blocked at both ends. I would really appreciate it if someone could guide me in writing the SAM template correctly, so that I refer to the existing bucket instead of creating a new one.
Thank you.
Any items listed under the Resources section refer to the resources the stack is responsible for maintaining.
When you list SrcBucket, you are asking CloudFormation to create a new S3 bucket whose name is the value of !ImportValue global-bucket, which is the name of an S3 bucket you have already created.
Assuming this is the bucket you want, you can simply reference its exported name in your template, as shown below.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  hellolambda
  Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloLambdaEvent:
          Type: S3
          Properties:
            Bucket: !ImportValue global-bucket
            Events: s3:ObjectCreated:*

AWS Lambda Template doesn't see my own bucket

I was following this guide to deploy my Lambda. The solution uses its own .template file for deployment. However, I needed to make some code changes to that Lambda, so I uploaded my changed Lambda code to my own bucket and changed the .template to work with my own bucket.
The original template
CustomResource:
  S3Bucket: solutions
  S3Key: >-
    serverless-image-handler/v3.0.0/serverless-image-handler-custom-resource.zip
  Name: serverless-image-handler-custom-resource
  Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
  Description: >-
    Serverless Image Handler: CloudFormation custom resource function
    invoked during CloudFormation create, update, and delete stack
    operations.
  Runtime: python2.7
  Timeout: '60'
  MemorySize: '128'
My Customized Template (uses my bucket)
CustomResource:
  S3Bucket: my-bucket
  S3Key: >-
    serverless-image-handler/serverless-image-handler-custom-resource.zip
  Name: serverless-image-handler-custom-resource
  Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
  Description: >-
    Serverless Image Handler: CloudFormation custom resource function
    invoked during CloudFormation create, update, and delete stack
    operations.
  Runtime: python2.7
  Timeout: '60'
  MemorySize: '128'
Of course, in my bucket I put the package under the correct path, serverless-image-handler/serverless-image-handler-custom-resource.zip. However, when trying to deploy, I get the following error:
Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 6b666b56-dc62-11e8-acb0-8df0d82e071b)
It's like it can't "see" my own bucket, but it sees the bucket solutions. How can I make it see my bucket?
EDIT
Part of the template where the bucket is defined:
Parameters:
  OriginS3Bucket:
    Description: S3 bucket that will source your images.
    Default: original-images-bucket-name
    Type: String
    ConstraintDescription: Must be a valid S3 Bucket.
    MinLength: '1'
    MaxLength: '64'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9-.]*'
  OriginS3BucketRegion:
    Description: S3 bucket Region that will source your images.
    Default: eu-central-1
    Type: String
    AllowedValues:
      - ap-south-1
      - ap-northeast-1
      - ap-northeast-2
      - ap-southeast-1
      - ap-southeast-2
      - ca-central-1
      - eu-central-1
      - eu-west-1
      - eu-west-2
      - eu-west-3
      - sa-east-1
      - us-east-1
      - us-east-2
      - us-west-1
      - us-west-2
Part of the template that threw the error:
CustomResource:
  Type: 'AWS::Lambda::Function'
  DependsOn:
    - CustomResourceLoggingPolicy
    - CustomResourceDeployPolicy
  Properties:
    Code:
      S3Bucket: !Join
        - ''
        - - !FindInMap
            - Function
            - CustomResource
            - S3Bucket
          - '-'
          - !Ref 'AWS::Region'
      S3Key: !FindInMap
        - Function
        - CustomResource
        - S3Key
    MemorySize: !FindInMap
      - Function
      - CustomResource
      - MemorySize
    Handler: !FindInMap
      - Function
      - CustomResource
      - Handler
    Role: !GetAtt
      - CustomResourceRole
      - Arn
    Timeout: !FindInMap
      - Function
      - CustomResource
      - Timeout
    Runtime: !FindInMap
      - Function
      - CustomResource
      - Runtime
    Description: !FindInMap
      - Function
      - CustomResource
      - Description
    Environment:
      Variables:
        LOG_LEVEL: INFO
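Note what the Code block above does: it joins the mapped S3Bucket value, a '-', and !Ref 'AWS::Region'. With the original mapping this resolves to a regional bucket name such as solutions-eu-central-1, but with the customized mapping it looks for my-bucket-eu-central-1, which does not exist, hence the NoSuchBucket error. A sketch of a Code block that uses the mapped name as-is (assuming your bucket lives in the stack's region and dropping the region suffix is acceptable for your copy of the template):

Code:
  # Use the mapped bucket name directly; no region suffix is appended
  S3Bucket: !FindInMap
    - Function
    - CustomResource
    - S3Bucket
  S3Key: !FindInMap
    - Function
    - CustomResource
    - S3Key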