I am following a serverless tutorial and am trying to send a notification every time an image is uploaded to an S3 bucket. I've created a sendUploadNotifications function under functions and, instead of adding an event to the function, I've set up the NotificationConfiguration under the AttachmentsBucket, as well as created a new sendUploadNotificationsPermission resource under resources.
But I get the following error when I try to deploy my serverless app:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource sendUploadNotificationsLambdaFunction
The error seems to stem from the way I am referencing the FunctionName under the sendUploadNotificationsPermission resource.
I've tried different ways of referencing the function name, but to no avail; I still get the same error.
My serverless.yml file:
service: serverless-udagram2
frameworkVersion: '2'
provider:
name: aws
runtime: nodejs12.x
lambdaHashingVersion: 20201221
stage: ${opt:stage, 'dev'}
region: ${opt:region, 'ap-southeast-1'}
environment:
GROUPS_TABLE: groups-${self:provider.stage}
IMAGES_TABLE: images-${self:provider.stage}
IMAGE_ID_INDEX: ImageIdIndex
IMAGES_S3_BUCKET: branded-serverless-udagram-images-${self:provider.stage}
SIGNED_URL_EXPIRATION: 300
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Scan
- dynamodb:PutItem
- dynamodb:GetItem
- dynamodb:Query
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.GROUPS_TABLE}
- Effect: Allow
Action:
- dynamodb:PutItem
- dynamodb:Query
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:PutItem
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}/index/${self:provider.environment.IMAGE_ID_INDEX}
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
Resource: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*
functions:
getGroups:
handler: src/lambda/http/getGroups.handler
events:
- http:
path: groups
method: get
cors: true
createGroup:
handler: src/lambda/http/createGroup.handler
events:
- http:
path: groups
method: post
cors: true
request:
schema:
application/json: ${file(models/create-group-request.json)}
getImages:
handler: src/lambda/http/getImages.handler
events:
- http:
path: groups/{groupId}/images
method: get
cors: true
getImage:
handler: src/lambda/http/getImage.handler
events:
- http:
path: images/{imageId}
method: get
cors: true
createImage:
handler: src/lambda/http/createImage.handler
events:
- http:
path: groups/{groupId}/images
method: post
cors: true
request:
schema:
application/json: ${file(models/create-image-request.json)}
sendUploadNotifications:
handler: src/lambda/s3/sendNotifications.handler
resources:
Resources:
# API gateway validates the request in accordance with json schemas that are identified in the function section under schema
RequestBodyValidator:
Type: AWS::ApiGateway::RequestValidator
Properties:
Name: 'request-body-validator'
RestApiId:
Ref: ApiGatewayRestApi
ValidateRequestBody: true
ValidateRequestParameters: true
GroupsDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
BillingMode: PAY_PER_REQUEST
TableName: ${self:provider.environment.GROUPS_TABLE}
ImagesDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: groupId
AttributeType: S
- AttributeName: timestamp
AttributeType: S
- AttributeName: imageId
AttributeType: S
KeySchema:
- AttributeName: groupId
KeyType: HASH #partition key
- AttributeName: timestamp
KeyType: RANGE #sort key
GlobalSecondaryIndexes:
- IndexName: ${self:provider.environment.IMAGE_ID_INDEX}
KeySchema:
- AttributeName: imageId
KeyType: HASH
Projection:
ProjectionType: ALL
BillingMode: PAY_PER_REQUEST
TableName: ${self:provider.environment.IMAGES_TABLE}
# Bucket for file uploads
AttachmentsBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:provider.environment.IMAGES_S3_BUCKET}
NotificationConfiguration: # Sends notification when image has been uploaded
LambdaConfigurations: #
- Event: s3:ObjectCreated:*
Function: !GetAtt sendUploadNotificationsLambdaFunction.Arn
CorsConfiguration:
CorsRules:
-
AllowedOrigins:
- "*"
AllowedHeaders:
- "*"
AllowedMethods:
- 'GET'
- 'PUT'
- 'POST'
- 'DELETE'
- 'HEAD'
MaxAge: 3000
sendUploadNotificationsPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !GetAtt sendUploadNotificationsLambdaFunction.Arn
Action: lambda:InvokeFunction
Principal: s3.amazonaws.com
SourceAccount: !Ref AWS::AccountId #!Ref
SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
BucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
PolicyDocument:
Id: MyPolicy
Version: "2012-10-17"
Statement:
- Sid: PublicReadForGetBucketObjects
Effect: Allow
Principal: '*'
Action: 's3:GetObject'
Resource: 'arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*'
Bucket:
Ref: AttachmentsBucket
I've tried changing the name of the function in both the sendUploadNotificationsPermission and the AttachmentsBucket by appending LambdaFunction to the end of the function name, but I still get the same error.
Any help with this error would be appreciated.
You are trying to reference something that doesn't exist in the template's Resources section:
sendUploadNotificationsLambdaFunction
To reference the function you have defined, named
sendUploadNotifications
you need to use the logical ID the framework generates for it (or construct its ARN) inside the Resources section.
To generate a logical ID for CloudFormation, the framework transforms the name specified in serverless.yml according to the following scheme:
Transform the leading character into uppercase
Transform - into Dash
Transform _ into Underscore
Append LambdaFunction at the end
That yields SendUploadNotificationsLambdaFunction in your case.
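As a sketch of that naming scheme (every function name here except sendUploadNotifications is hypothetical):

```yaml
functions:
  sendUploadNotifications:   # logical ID: SendUploadNotificationsLambdaFunction
    handler: src/lambda/s3/sendNotifications.handler
  resize-image:              # hypothetical; logical ID: ResizeDashimageLambdaFunction
    handler: src/lambda/s3/resizeImage.handler
  clean_up:                  # hypothetical; logical ID: CleanUnderscoreupLambdaFunction
    handler: src/lambda/s3/cleanUp.handler
```

You can confirm the generated logical IDs by running serverless package and inspecting the compiled template in .serverless/cloudformation-template-update-stack.json.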
There are two ways:
Reference this logical ID inside the Resources section of the template:
sendUploadNotificationsPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !GetAtt SendUploadNotificationsLambdaFunction.Arn
Action: lambda:InvokeFunction
Principal: s3.amazonaws.com
SourceAccount: !Ref AWS::AccountId #!Ref
SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Or construct the ARN yourself using Fn::Join:
sendUploadNotificationsPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !Join [":", ['arn:aws:lambda', !Ref 'AWS::Region', !Ref AWS::AccountId, '${self:service}-${self:provider.stage}-sendUploadNotifications']]
Action: lambda:InvokeFunction
Principal: s3.amazonaws.com
SourceAccount: !Ref AWS::AccountId #!Ref
SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Solved it. The 'references undefined resource' error was caused by the fact that when serverless.yml is compiled to CloudFormation, the function name is capitalized.
sendUploadNotifications becomes SendUploadNotificationsLambdaFunction
changed:
FunctionName: !Ref sendUploadNotificationsLambdaFunction
to:
FunctionName: !Ref SendUploadNotificationsLambdaFunction
It now deploys without an issue.
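Note that in the original template the AttachmentsBucket notification also references the lowercase name, so the same logical-ID fix applies there. A sketch of the corrected fragment:

```yaml
AttachmentsBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: ${self:provider.environment.IMAGES_S3_BUCKET}
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: s3:ObjectCreated:*
          # Use the generated logical ID, exactly as in the permission resource
          Function: !GetAtt SendUploadNotificationsLambdaFunction.Arn
```

If deployment then fails while S3 validates the notification destination, adding DependsOn: sendUploadNotificationsPermission to the bucket resource ensures the invoke permission exists before S3 checks it.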
Related
I have the following SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
test lambda
Globals:
Function:
Timeout: 3
Tracing: Active
Api:
TracingEnabled: True
Resources:
NotesFunction:
Type: AWS::Serverless::Function
Properties:
PackageType: Zip
CodeUri: notes/
Handler: app.lambdaHandler
Runtime: nodejs18.x
Policies:
- AmazonDynamoDBFullAccess
Architectures:
- x86_64
Events:
FetchNotes:
Type: Api
Properties:
Path: /notes
Method: get
GiveNotes:
Type: Api
Properties:
Path: /notes
Method: post
Users:
Type: Api
Properties:
Path: /notes/users
Method: get
Metadata:
BuildMethod: esbuild
BuildProperties:
Minify: true
Target: "es2020"
Sourcemap: true
EntryPoints:
- app.ts
KmsKey:
Type: AWS::KMS::Key
Properties:
Description: CMK for encrypting and decrypting
KeyPolicy:
Version: '2012-10-17'
Id: key-default-1
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
Action: kms:*
Resource: '*'
- Sid: Allow administration of the key
Effect: Allow
Principal:
AWS: arn:aws:iam::<MY_ACCOUNT>:role/aws-service-role/cks.kms.amazonaws.com/KMSKeyAdminRole
Action:
- kms:Create*
- kms:Describe*
- kms:Enable*
- kms:List*
- kms:Put*
- kms:Update*
- kms:Revoke*
- kms:Disable*
- kms:Get*
- kms:Delete*
- kms:ScheduleKeyDeletion
- kms:CancelKeyDeletion
Resource: '*'
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS: !Ref NotesFunctionRole
Action:
- kms:DescribeKey
- kms:Encrypt
- kms:Decrypt
- kms:ReEncrypt*
- kms:GenerateDataKey
- kms:GenerateDataKeyWithoutPlaintext
Resource: '*'
NotesDynamoDB:
Type: AWS::DynamoDB::Table
Properties:
TableName: experimental-notes
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 5
WriteCapacityUnits: 5
StreamSpecification:
StreamViewType: NEW_IMAGE
Outputs:
NotesApi:
Description: "API Gateway endpoint URL for dev stage for Notes function"
Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/dev/notes/"
NotesFunction:
Description: "Notes Lambda Function ARN"
Value: !GetAtt NotesFunction.Arn
NotesFunctionIamRole:
Description: "Implicit IAM Role created for Notes function"
Value: !GetAtt NotesFunctionRole.Arn
NotesDynamoDB:
Description: "DynamoDB table backing the Lambda"
Value: !GetAtt NotesDynamoDB.Arn
When I build + deploy this template I get the following CloudFormation errors:
Resource handler returned message: "Policy contains a statement with one or more invalid principals....
Obviously I have redacted my actual account ID and replaced it with <MY_ACCOUNT> (!).
But it doesn't say what's "invalid" about which principals. The idea is that the 2nd policy statement gets applied/hardcoded to an existing role (KMSKeyAdminRole), and that the 3rd statement gets applied to the role of the NotesFunction Lambda created above.
Can anyone spot where I'm going awry?
This ended up working perfectly and fixing the CF error:
- Sid: Allow use of the key
Effect: Allow
Principal:
AWS: !GetAtt FeedbackFunctionRole.Arn
Action:
- kms:DescribeKey
- kms:Encrypt
- kms:Decrypt
- kms:ReEncrypt*
- kms:GenerateDataKey
- kms:GenerateDataKeyWithoutPlaintext
Resource: '*'
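The likely root cause: !Ref on an AWS::IAM::Role returns the role name, while a KMS key-policy principal must be an ARN (or an account ID), so the original !Ref NotesFunctionRole produced an invalid principal. A minimal sketch of the contrast, using the role from the question's template:

```yaml
# Invalid principal: Ref on an AWS::IAM::Role returns the role *name*
Principal:
  AWS: !Ref NotesFunctionRole

# Valid principal: GetAtt returns the full role ARN
Principal:
  AWS: !GetAtt NotesFunctionRole.Arn
```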
Use https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html to test whether your policies are valid.
I am trying to deploy the stack below using a SAM template. It is supposed to deploy a Lambda function and add an S3 trigger, but I am getting the following error:
Getting ValidationError when calling the CreateChangeSet operation: Template error: instance of Fn::GetAtt references undefined resource
I am not sure what went wrong here to cause such an error.
YAML template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
Environment:
Type: String
S3:
Type: String
Key:
Type: String
SecretMgr:
Type: String
Resources:
LambdaS3ToKinesis:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "my_s3_to_kinesis"
CodeUri: ./test/src
Role: !GetAtt testKinesisRole.Arn
Description: "My lambda"
Environment:
Variables:
KINESIS_STREAM: !Sub "test_post_kinesis"
DDB_TRACKER_TABLE: my_tracker_table
ENVIRONMENT: !Sub "${Environment}"
BUCKET_NAME: !Sub "${S3}"
Events:
FileUpload:
Type: S3
Properties:
Bucket: !Sub "${S3}"
Events: s3:ObjectCreated:*
Filter:
S3Key:
Rules:
- Name: prefix
Value: "${Environment}/test1/INPUT/"
- Name: suffix
Value: ".json"
- Name: prefix
Value: "${Environment}/test2/INPUT/"
- Name: suffix
Value: ".json"
LambdaTest1KinesisToDDB:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "${Environment}_test1_to_ddb"
CodeUri: test1_kinesis_to_ddb/src/
Role: !GetAtt testKinesisToDDBRole.Arn
Description: "test post kinesis"
Layers:
- !Ref LambdaLayertest1
Environment:
Variables:
BUCKET_NAME: !Sub "${S3}"
DDB_ACC_PLCY_TABLE:test1
DDB_TRACKER_TABLE: test_tracker
ENVIRONMENT: !Sub "${Environment}"
S3_INVALID_FOLDER_PATH: invalid_payload/
S3_RAW_FOLDER_PATH: raw_payload/
S3_UPLOAD_FLAG: false
Events:
KinesisEvent:
Type: Kinesis
Properties:
Stream: !GetAtt Kinesistest1.Arn
StartingPosition: LATEST
BatchSize: 1
Enabled: true
MaximumRetryAttempts: 0
LambdaLayerTest1KinesisToDDB:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub "${Environment}_test1_kinesis_to_ddb_layer"
ContentUri: test1_kinesis_to_ddb/dependencies/
CompatibleRuntimes:
- python3.7
Metadata:
BuildMethod: python3.7
testKinesisRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub "${Environment}_s3_to_kinesis_role"
Description: Role for first lambda
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- s3.amazonaws.com
- lambda.amazonaws.com
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Sub "${Environment}_s3_to_kinesis_policy"
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
Resource:
- !Sub "arn:aws:s3:::${S3}/*"
- !Sub "arn:aws:s3:::${S3}"
- Effect: Allow
Action:
- kinesis:PutRecord
Resource:
- !Sub "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:mystream1/${Environment}_test1"
- !Sub "arn:aws:kinesis:${AWS::Region}:${AWS::AccountId}:mystream2/${Environment}_test2"
- Effect: Allow
Action:
- lambda:*
- cloudwatch:*
Resource: "*"
- Effect: Allow
Action:
- dynamodb:Put*
- dynamodb:Get*
- dynamodb:Update*
- dynamodb:Query
Resource:
- !GetAtt Dynamomytracker.Arn
- Effect: Allow
Action:
- kms:*
Resource:
- !Sub "${Key}"
testKinesisToDDBRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub "${Environment}_test1_to_ddb_role"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- kinesis.amazonaws.com
- lambda.amazonaws.com
Action:
- "sts:AssumeRole"
ManagedPolicyArns:
- "arn:aws:iam::aws:test/service-role/AWSLambdaBasicExecutionRole"
Policies:
- PolicyName: !Sub "${Environment}_test1_kinesis_to_ddb_policy"
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
Resource:
- !Sub "arn:aws:s3:::${S3}/*"
- !Sub "arn:aws:s3:::${S3}"
- Effect: Allow
Action:
- kinesis:Get*
- kinesis:List*
- kinesis:Describe*
Resource:
- !GetAtt KinesisTest1.Arn
- !GetAtt KinesisTest2.Arn
- Effect: Allow
Action:
- dynamodb:Put*
- dynamodb:Get*
- dynamodb:Describe*
- dynamodb:List*
- dynamodb:Update*
- dynamodb:Query
- dynamodb:DeleteItem
- dynamodb:BatchGetItem
- dynamodb:BatchWriteItem
- dynamodb:Scan
Resource:
- !Sub
- "${Table}*"
- { Table: !GetAtt "Dynamotest.Arn" }
- !Sub
- "${Table}*"
- { Table: !GetAtt "Dynamotest.Arn" }
- Effect: Allow
Action:
- kms:*
Resource:
- !Sub "${Key}"
######################################
# Update for TEst2
######################################
KinesisTest2:
Type: AWS::Kinesis::Stream
Properties:
Name: !Sub ${Environment}_test2_kinesis
StreamEncryption:
EncryptionType: KMS
KeyId: !Sub "${Key}"
RetentionPeriodHours: 24
ShardCount: 1
LambdaLayerTest2KinesisToDDB:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: !Sub "${Environment}_test2_kinesis_to_ddb_layer"
ContentUri: test2_kinesis_to_ddb/dependencies/
CompatibleRuntimes:
- python3.7
Metadata:
BuildMethod: python3.7
LambdaTest2KinesisToDDB:
Type: AWS::Serverless::Function
Properties:
Handler: lambda_function.lambda_handler
Runtime: python3.7
Timeout: 60
FunctionName: !Sub "${Environment}_Test2_kinesis_to_ddb"
CodeUri: Test2_kinesis_to_ddb/src/
Role: !GetAtt testKinesisToDDBRole.Arn
Description: "Test2"
Layers:
- !Ref LambdaLayerTest2KinesisToDDB
Environment:
Variables:
BUCKET_NAME: !Sub "${S3}"
DDB_ACC_PLCY_TABLE: my_table2
DDB_TRACKER_TABLE: my_log
ENVIRONMENT: !Sub "${Environment}"
S3_INVALID_FOLDER_PATH: invalid_payload/
S3_RAW_FOLDER_PATH: raw_payload/
S3_UPLOAD_FLAG: false
Events:
KinesisEvent:
Type: Kinesis
Properties:
Stream: !GetAtt KinesisTest2.Arn
StartingPosition: LATEST
BatchSize: 1
Enabled: true
MaximumRetryAttempts: 0
Can anybody help me resolve this? I am not sure what exactly is missing in the template or how to resolve this error.
You are using the AWS Serverless Application Model, and your template does not conform to its format. For example, it's missing the required Transform statement:
Transform: AWS::Serverless-2016-10-31
There could be many other things wrong, as your template is neither plain CloudFormation nor valid SAM at this point.
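The "undefined resource" part of the error is also diagnosable directly: every !Ref and !GetAtt target must exactly match a logical ID under Resources, and logical IDs are case-sensitive. In the template above several targets have no matching definition; a sketch of the mismatches:

```yaml
Role: !GetAtt testKinesisRole.Arn    # OK: testKinesisRole is defined
Stream: !GetAtt Kinesistest1.Arn     # undefined: only KinesisTest2 is declared
Layers:
  - !Ref LambdaLayertest1            # undefined: declared as LambdaLayerTest1KinesisToDDB
Resource:
  - !GetAtt Dynamomytracker.Arn      # undefined: no DynamoDB table resource exists
```

Each of these needs a resource declared with that exact logical ID (or the reference corrected) before the change set can be created.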
I am facing the error IamRoleLambdaExecution - Syntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: ********-****-****-****-************)
for the serverless.yml file below.
plugins:
- serverless-pseudo-parameters
provider:
name: aws
runtime: nodejs8.10
iamRoleStatements:
- Effect: Allow
Action:
- "dynamodb:PutItem"
- "dynamodb:GetItem"
Resource:
- arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/ordersTable
- Effect: Allow
Action:
- kinesis: "PutRecord"
Resource:
- arn:aws:kinesis:#{AWS::Region}:#{AWS::AccountId}:stream/order-events
functions:
createOrder:
handler: handler.createOrder
events:
- http:
path: /order
method: post
environment:
orderTableName: ordersTable
orderStreamName: order-events
resources:
Resources:
orderEventsStream:
Type: AWS::Kinesis::Stream
Properties:
Name: order-events
ShardCount: 1
orderTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: ordersTable
AttributeDefinitions:
- AttributeName: "orderId"
AttributeType: "S"
KeySchema:
- AttributeName: "orderId"
KeyType: "HASH"
BillingMode: PAY_PER_REQUEST
serverless details:
- Framework Core: 1.71.3
- Plugin: 3.6.12
- SDK: 2.3.0
- Components: 2.30.11
Based on OP's feedback in the comment: changing kinesis: "PutRecord" (which YAML parses as a key/value mapping rather than a string) to the single quoted string "kinesis:PutRecord" should work.
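The subtlety is YAML parsing: an IAM action must be a single string, but the original list item is parsed as a one-key mapping. A sketch of the difference:

```yaml
# Broken: parsed as the mapping {kinesis: "PutRecord"}, which IAM rejects
Action:
  - kinesis: "PutRecord"

# Fixed: quoting the whole item keeps it a plain "service:Action" string
Action:
  - "kinesis:PutRecord"
```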
The following is the template.yaml for my Lambda function. I'm trying to add permissions to access the status table. However, the policy needs the table to exist and vice versa, so I get a circular dependency error with DynamoDBIamPolicy. How can I resolve this?
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
friendTeachers:
Type: 'AWS::Serverless::Function'
Properties:
Handler: friendTeachers/index.handler
Runtime: nodejs6.10
Description: ''
MemorySize: 128
Timeout: 15
status:
Type: 'AWS::DynamoDB::Table'
Properties:
TableName: status
AttributeDefinitions:
- AttributeName: screenName
AttributeType: S
KeySchema:
- AttributeName: screenName
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
# A policy is a resource that states one or more permissions. It lists actions, resources and effects.
DynamoDBIamPolicy:
Type: 'AWS::IAM::Policy'
DependsOn: status
Properties:
PolicyName: lambda-dynamodb
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:DescribeTable
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
- dynamodb:batchWriteItem
Resource: arn:aws:dynamodb:*:*:table/status
Roles:
- Ref: IamRoleLambdaExecution
You are missing a role that allows the Lambda service to assume it (sts:AssumeRole). The role needs an associated policy that specifies the operations that can be performed on the DynamoDB table. Below is an example that shows what you are trying to accomplish:
---
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
friendTeachersFunction:
Type: AWS::Lambda::Function
Properties:
Code:
S3Bucket:
Ref: LambdaCodeBucket
S3Key:
Ref: LambdaCodePath
Handler: friendTeachers/index.handler
Runtime: "nodejs6.10"
Description: ''
MemorySize: 128
Timeout: 15
Role:
Fn::GetAtt:
- friendTeachersExecutionRole
- Arn
friendTeachersExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- sts:AssumeRole
Policies:
- PolicyName: UseDBPolicy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:DescribeTable
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
- dynamodb:batchWriteItem
Resource: arn:aws:dynamodb:*:*:table/status
APIDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: status
AttributeDefinitions:
- AttributeName: screenName
AttributeType: S
KeySchema:
- AttributeName: screenName
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
Note that Code.S3Bucket and Code.S3Key are defined as parameters. When you create the stack in AWS Console you can specify the path there.
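If you keep the AWS::Serverless::Function type from the original template, a more compact alternative (assuming SAM's built-in DynamoDBCrudPolicy policy template covers the operations you need) is to let SAM generate the role and scope it to the table:

```yaml
friendTeachers:
  Type: AWS::Serverless::Function
  Properties:
    Handler: friendTeachers/index.handler
    Runtime: nodejs6.10
    MemorySize: 128
    Timeout: 15
    Policies:
      - DynamoDBCrudPolicy:
          # Ref on a DynamoDB table returns its name, which the policy template expects
          TableName: !Ref status
```

This avoids the separate AWS::IAM::Policy resource entirely, so there is no dependency cycle to break.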
I want to use more than one DynamoDB table in my serverless project. How do I properly define multiple resources in the iamRoleStatements?
I have an example serverless.yml
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
name: aws
runtime: nodejs6.10
environment:
EXPENSES_TABLE: "${self:service}-${opt:stage, self:provider.stage}-expenses"
BUDGETS_TABLE: "${self:service}-${opt:stage, self:provider.stage}-budgets"
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.EXPENSES_TABLE}"
# what is the best way to add the other DB as a resource
functions:
create:
handler: expenseTracker/create.create
events:
- http:
path: expenses
method: post
cors: true
list:
handler: expenseTracker/list.list
events:
- http:
path: expenses
method: get
cors: true
get:
handler: expenseTracker/get.get
events:
- http:
path: expenses/{id}
method: get
cors: true
update:
handler: expenseTracker/update.update
events:
- http:
path: expenses/{id}
method: put
cors: true
delete:
handler: expenseTracker/delete.delete
events:
- http:
path: expenses/{id}
method: delete
cors: true
resources:
Resources:
DynamoDbExpenses:
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
TableName: ${self:provider.environment.EXPENSES_TABLE}
DynamoDbBudgets:
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
TableName: ${self:provider.environment.BUDGETS_TABLE}
You can see the area in question in the comments there.
I got it!
The key was just adding a list under the Resource key, but I also learned that it's better to reference the logical IDs you use when provisioning the tables. Full example to follow:
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
name: aws
runtime: nodejs6.10
environment:
EXPENSES_TABLE: { "Ref": "DynamoDbExpenses" } #DynamoDbExpenses is a logicalID also used when provisioning below
BUDGETS_TABLE: { "Ref": "DynamoDbBudgets" }
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:DescribeTable
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource:
- { "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] } #you will also see the logical IDs below where they are provisioned
- { "Fn::GetAtt": ["DynamoDbBudgets", "Arn"] }
functions:
create:
handler: expenseTracker/create.create
events:
- http:
path: expenses
method: post
cors: true
createBudget:
handler: expenseTracker/createBudget.createBudget
events:
- http:
path: budgets
method: post
cors: true
list:
handler: expenseTracker/list.list
events:
- http:
path: expenses
method: get
cors: true
listBudgets:
handler: expenseTracker/listBudgets.listBudgets
events:
- http:
path: budgets
method: get
cors: true
get:
handler: expenseTracker/get.get
events:
- http:
path: expenses/{id}
method: get
cors: true
update:
handler: expenseTracker/update.update
events:
- http:
path: expenses/{id}
method: put
cors: true
delete:
handler: expenseTracker/delete.delete
events:
- http:
path: expenses/{id}
method: delete
cors: true
resources:
Resources:
DynamoDbExpenses: #this is where the logicalID is defined
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
DynamoDbBudgets: #here too
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
I'd like to add my updates, since I spent time on and learned a lot from this question. The currently accepted answer is not fully functional.
What I added:
1) Make sure that in your handler there is an environment variable TABLE_NAME (or another name; adjust accordingly) used as below. It refers to the Lambda function's environment variables:
const params = {
TableName: process.env.TABLE_NAME,
Item: {
...
}
}
2) Update serverless.yml to assign the table name to each function:
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
or
environment:
TABLE_NAME: { "Ref": "DynamoDbBudgets" }
Depending on which table the function targets.
The full serverless.yml is updated here:
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
name: aws
runtime: nodejs6.10
environment:
EXPENSES_TABLE: { "Ref": "DynamoDbExpenses" } #DynamoDbExpenses is a logicalID also used when provisioning below
BUDGETS_TABLE: { "Ref": "DynamoDbBudgets" }
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:DescribeTable
- dynamodb:Query
- dynamodb:Scan
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource:
- { "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] } #you will also see the logical IDs below where they are provisioned
- { "Fn::GetAtt": ["DynamoDbBudgets", "Arn"] }
functions:
create:
handler: expenseTracker/create.create
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
events:
- http:
path: expenses
method: post
cors: true
createBudget:
handler: expenseTracker/createBudget.createBudget
environment:
TABLE_NAME: { "Ref": "DynamoDbBudgets" }
events:
- http:
path: budgets
method: post
cors: true
list:
handler: expenseTracker/list.list
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
events:
- http:
path: expenses
method: get
cors: true
listBudgets:
handler: expenseTracker/listBudgets.listBudgets
environment:
TABLE_NAME: { "Ref": "DynamoDbBudgets" }
events:
- http:
path: budgets
method: get
cors: true
get:
handler: expenseTracker/get.get
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
events:
- http:
path: expenses/{id}
method: get
cors: true
update:
handler: expenseTracker/update.update
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
events:
- http:
path: expenses/{id}
method: put
cors: true
delete:
handler: expenseTracker/delete.delete
environment:
TABLE_NAME: { "Ref": "DynamoDbExpenses" }
events:
- http:
path: expenses/{id}
method: delete
cors: true
resources:
Resources:
DynamoDbExpenses: #this is where the logicalID is defined
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
TableName: ${self:service}-${opt:stage, self:provider.stage}-expenses
DynamoDbBudgets: #here too
Type: 'AWS::DynamoDB::Table'
DeletionPolicy: Retain
Properties:
AttributeDefinitions:
-
AttributeName: id
AttributeType: S
KeySchema:
-
AttributeName: id
KeyType: HASH
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
TableName: ${self:service}-${opt:stage, self:provider.stage}-budgets
Refer to the Serverless Framework documentation on environment variables.
If your intention is to provide access to all tables in the stack that is being deployed, you can use:
Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${AWS::StackName}-*
This way the lambdas in your stack are limited to tables in your stack, and you won't have to update this every time you add a table.
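One caveat with the wildcard: a Query against a global secondary index is authorized against the index ARN, not the table ARN, so GSIs need their own entry. A sketch:

```yaml
Resource:
  - !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${AWS::StackName}-*
  - !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${AWS::StackName}-*/index/*
```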