AWS documentation models and response handling?

I'm trying to figure out the best way to define request/response models for my Lambda functions and API Gateway.
What am I trying to achieve?
I want to store all my request and response models in my git repo; these will be deployed to API Gateway using the serverless-aws-documentation plugin.
So let's say there is a 500/400 error: the response would be formatted using this model, not by the code. Focusing on the error response, it should look like this:
"Error": {
  "type": "object",
  "properties": {
    "message": {
      "type": "string"
    },
    "type": {
      "type": "string"
    },
    "request-id": {
      "type": "string"
    }
  }
}
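Stored in the repo, models/Error.json might look like the following sketch (the $schema and title keys are assumptions; API Gateway models use JSON Schema draft-04, and the properties mirror the snippet above):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Error",
  "type": "object",
  "properties": {
    "message": { "type": "string" },
    "type": { "type": "string" },
    "request-id": { "type": "string" }
  }
}
```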
What I currently have
Below is what I have right now, which creates the models on my API Gateway just fine. However, I'm currently formatting the response within code; ideally I would just throw an exception here and let the model handle the formatting.
unresolvedVariablesNotificationMode: error
useDotenv: true
provider:
  name: aws
  profile: ${opt:aws-profile, 'sandbox'}
  region: eu-west-2
  stage: ${opt:stage, 'dev'}
  lambdaHashingVersion: 20201221
  environment:
    TABLE_PREFIX: ${self:provider.stage}-${self:service}-
    API_ROOT: ${self:custom.domains.${self:provider.stage}}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - Fn::GetAtt:
            - enquiriesTable
            - Arn
        - Fn::GetAtt:
            - vehiclesTable
            - Arn
plugins:
  - serverless-api-gateway-throttling
  - serverless-associate-waf
  # - serverless-domain-manager
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation
  - serverless-webpack
  - serverless-dynamodb-local
  - serverless-offline
custom:
  apiGatewayThrottling:
    maxRequestsPerSecond: 10
    maxConcurrentRequests: 5
  webpack:
    includeModules: true
  dynamodb:
    stages: [ dev ]
    start:
      migrate: true
  documentation:
    models:
      - name: "ErrorResponse"
        description: "This is how an error would return"
        contentType: "application/json"
        schema: ${file(models/Error.json)}
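Note that the CreateRequest model referenced by the function below is not declared in this models list; a sketch of the missing entry (the description and file name are illustrative):

```yaml
- name: "CreateRequest"
  description: "Expected body when creating an enquiry"
  contentType: "application/json"
  schema: ${file(models/CreateRequest.json)}
```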
functions:
  createEnquiry:
    handler: src/services/enquiries/create.handler
    environment:
      API_ROOT: ${self:custom.domains.${self:provider.stage}}
    events:
      - http:
          path: /
          method: POST
          reqValidatorName: onlyParameters
          documentation:
            requestModels:
              "application/json": "CreateRequest"
            methodResponses:
              - statusCode: "400"
                responseModels:
                  "application/json": "ErrorResponse"
resources:
  Resources:
    onlyParameters:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: "only-parameters"
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: true
    enquiriesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.stage}-${self:service}-enquiries
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
    vehiclesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.stage}-${self:service}-vehicles
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
Can someone point me in the right direction to achieve what I'm attempting: to have the models defined within the repo, and have these define the response rather than the Node code?
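One direction worth checking (an assumption, not a confirmed answer): with the default Lambda proxy integration that the http event uses, method response models are documentation only, and the handler must format the body itself. For API Gateway to apply a mapping template, the event would need a non-proxy lambda integration, roughly:

```yaml
events:
  - http:
      path: /
      method: POST
      integration: lambda
      response:
        statusCodes:
          400:
            pattern: '.*"httpStatus":400.*' # matched against the thrown error message
            template:
              application/json: ${file(templates/error.vtl)} # illustrative file name
```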
Update
Just attempted the following from this link:
functions:
  createEnquiry:
    handler: src/services/enquiries/create.handler
    environment:
      API_ROOT: ${self:custom.domains.${self:provider.stage}}
    events:
      - http:
          path: /
          method: POST
          reqValidatorName: onlyParameters
          x-amazon-apigateway-integration:
            responses:
              ".*httpStatus\\\":404.*":
                statusCode: 404
                responseTemplates:
                  application/json: "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n \"type\" : \"$errorMessageObj.errorType\",\n \"message\" : \"$errorMessageObj.message\",\n \"request-id\" : \"$errorMessageObj.requestId\"\n}"
export const handler: APIGatewayProxyHandler = async (event) => {
  return {
    statusCode: 400,
    body: 'Enquiry already exists'
  }
}
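For the selection pattern above to match anything, the handler would need to throw rather than return a formatted response. A minimal sketch, assuming a non-proxy ("lambda") integration where API Gateway matches the pattern against the thrown error's message (formatErrorMessage is an illustrative helper, not from the original post):

```typescript
// Build the structured error message that the integration's selection pattern
// (e.g. '.*"httpStatus":404.*') can match against.
export const formatErrorMessage = (
  httpStatus: number,
  errorType: string,
  message: string,
): string => JSON.stringify({ httpStatus, errorType, message });

export const handler = async (): Promise<never> => {
  // Throwing (instead of returning) lets the integration response and its
  // mapping template shape the final body.
  throw new Error(formatErrorMessage(404, 'NotFound', 'Enquiry not found'));
};
```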
This still just returned the response body as a plain string:
Enquiry already exists

Related

AWS-LAMBDA: no such file or directory, open './src/graphql/schema.graphql'

I am practicing GraphQL and AWS. I used the Serverless Framework and created a simple GraphQL schema, then deployed the schema and resolvers to AWS (it seems the graphql.schema file itself does not get deployed). The deployment successfully creates a DynamoDB table and a Lambda. I can make POST/GET requests via GraphQL Playground using serverless-offline, but the deployed API endpoint does not work; it shows an internal server error. Investigating in CloudWatch, I found that the Lambda function could not find the local schema file I created. This is the error I am getting: "ENOENT: no such file or directory, open './src/graphql/schema.graphql'".
This is the Lambda error (screenshot).
This is my lambda function
import { ApolloServer } from 'apollo-server-lambda';
import { ApolloServerPluginLandingPageGraphQLPlayground } from 'apollo-server-core';
import runWarm from '../utils/run-warm';
import fs from 'fs';
import resolvers from '../resolvers';

const schema = fs.readFileSync('./src/graphql/schema.graphql', 'utf8'); // This is my local schema

const server = new ApolloServer({
  typeDefs: schema,
  resolvers,
  introspection: true,
  plugins: [ApolloServerPluginLandingPageGraphQLPlayground()],
});

export default runWarm(
  server.createHandler({
    expressGetMiddlewareOptions: {
      cors: {
        origin: '*',
        credentials: true,
        allowedHeaders: ['Content-Type', 'Origin', 'Accept'],
        optionsSuccessStatus: 200,
      },
    },
  })
);
This is my serverless YAML file
service: serverless-aws-graphql
package:
  individually: true
provider:
  name: aws
  profile: ${env:profile}
  runtime: nodejs14.x
  stage: ${env:stage}
  region: eu-north-1
  timeout: 30
  apiName: ${self:service.name}-${self:provider.stage}
  environment:
    ITEM_TABLE: ${self:service}-items-${self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: 'arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.ITEM_TABLE}'
  apiGateway:
    shouldStartNameWithService: true
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
    packager: 'npm' # Packager that will be used to package your external modules
  warmup:
    enabled: true
    events:
      - schedule: rate(5 minutes)
    prewarm: true
    concurrency: 1
  prune:
    automatic: true
    number: 5
functions:
  graphql:
    handler: src/handlers/graphql.default
    events:
      - http:
          path: ${env:api_prefix}/graphql
          method: any
          cors: true
resources:
  Resources:
    ItemsTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: PK
            AttributeType: S
          - AttributeName: SK
            AttributeType: S
          - AttributeName: GSI1PK
            AttributeType: S
          - AttributeName: GSI1SK
            AttributeType: S
        KeySchema:
          - AttributeName: PK
            KeyType: HASH
          - AttributeName: SK
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        GlobalSecondaryIndexes:
          - IndexName: GSI1
            KeySchema:
              - AttributeName: GSI1PK
                KeyType: HASH
              - AttributeName: GSI1SK
                KeyType: RANGE
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
        TableName: ${self:provider.environment.ITEM_TABLE}
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-dotenv-plugin
  - serverless-prune-plugin
This is my webpack.config.js setup
const nodeExternals = require('webpack-node-externals');
const slsw = require('serverless-webpack');

module.exports = {
  entry: slsw.lib.entries,
  target: 'node',
  mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
  externals: [nodeExternals()],
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: 'ts-loader',
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.tsx', '.ts', '.js', '.jsx'],
  },
};
This is my tsconfig setup:
{
  "compilerOptions": {
    "target": "esnext",
    "allowJs": true,
    "skipLibCheck": false,
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "noEmit": false,
    "jsx": "preserve",
    "noUnusedLocals": true,
    "noUnusedParameters": true
  },
  "include": [
    "src/**/*"
  ],
  "exclude": [
    "node_modules"
  ]
}
As you observed, the ./src/graphql/schema.graphql isn't being packaged to the final artifact Serverless builds and deploys.
You can add it by specifying the package property to your function:
graphql:
  handler: src/handlers/graphql.default
  events:
    - http:
        path: ${env:api_prefix}/graphql
        method: any
        cors: true
  package:
    include:
      - src/graphql/schema.graphql
Source: https://www.serverless.com/framework/docs/providers/aws/guide/packaging#patterns
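On newer Framework versions (an assumption to verify against the version in use), package.include is deprecated in favour of package.patterns; the equivalent sketch:

```yaml
graphql:
  handler: src/handlers/graphql.default
  package:
    patterns:
      - src/graphql/schema.graphql
```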

Runtime.ImportModuleError: Error: Cannot find module 'onCreateRadonData'

I am trying to deploy a DynamoDB stream handler as a Lambda function using AppSync and Serverless. The deployment goes well, without any errors. But when I trigger the Lambda by creating a new item in my DynamoDB table, it fails with this error:
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'onCreateRadonData'\nRequire stack:\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
  "stack": [
    "Runtime.ImportModuleError: Error: Cannot find module 'onCreateRadonData'",
    "Require stack:",
    "- /var/runtime/UserFunction.js",
    "- /var/runtime/index.js",
    "    at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
    "    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
    "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
    "    at Module._compile (internal/modules/cjs/loader.js:999:30)",
    "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)",
    "    at Module.load (internal/modules/cjs/loader.js:863:32)",
    "    at Function.Module._load (internal/modules/cjs/loader.js:708:14)",
    "    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)",
    "    at internal/main/run_main_module.js:17:47"
  ]
}
It is strange because I usually see this happen when modules/dependencies are imported incorrectly. But the module mentioned in the error, onCreateRadonData, is the name of the Lambda itself, and the deployment process clearly showed that the function deployed successfully, so I do not know what is going on...
The serverless.yaml file:
service: aws
plugins:
  - serverless-appsync-plugin
  - serverless-offline
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Scan
        - dynamodb:Query
        - dynamodb:PutItem
      Resource:
        - !GetAtt RadonDataTable.Arn
        - !Join [ '', [ !GetAtt RadonDataTable.Arn, '/*' ] ]
    - Effect: Allow
      Action:
        - appsync:GraphQL
      Resource:
        - !GetAtt GraphQlApi.Arn
        - !Join [ '/', [ !GetAtt GraphQlApi.Arn, 'types', 'Mutation', 'fields', 'createRadonData' ] ]
custom:
  appSync:
    name: ${self:service}
    authenticationType: AWS_IAM
    mappingTemplates:
      - dataSource: RadonData
        type: Query
        field: listRadonData
        request: Query.listRadonData.request.vtl
        response: Query.listRadonData.response.vtl
      - dataSource: None
        type: Mutation
        field: createRadonData
        request: Mutation.createRadonData.request.vtl
        response: Mutation.createRadonData.response.vtl
    schema: src/schema.graphql
    dataSources:
      - type: NONE
        name: None
      - type: AMAZON_DYNAMODB
        name: RadonData
        description: 'DynamoDB Radon Data table'
        config:
          tableName: !Ref RadonDataTable
functions:
  handleDynamoDbStream:
    maximumRetryAttempts: 1
    maximumRecordAgeInSeconds: 1
    handler: src/handlers/onCreateRadonData.handler
    environment:
      APP_SYNC_API_URL: !GetAtt GraphQlApi.GraphQLUrl
    events:
      - stream:
          type: dynamodb
          arn: !GetAtt RadonDataTable.StreamArn
resources:
  Resources:
    RadonDataTable:
      Type: AWS::DynamoDB::Table
      Properties:
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        StreamSpecification:
          StreamViewType: NEW_IMAGE
And the lambda function onCreateRadonData.ts:
export const handler = (event) => {
  console.log('Hello from lambda')
  return;
}
NOTE: I've also tried the exports.handler = (event) ... style, but it throws the same error.
The code structure goes like this:
- src/handlers/onCreateRadonData.ts
- mapping-templates/files with mapping templates.vtl
- serverless.yml
- package.json
As you can see, I have only one file, onCreateRadonData.ts, inside the handlers folder in src. That's all; the rest of the files are in the root directory.
Any ideas about what I am doing wrong? Thank you all!
Okay, I got it. Since I am using TypeScript, I have to add the Serverless TypeScript plugin to compile the .ts files to .js.
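Assuming the plugin meant here is serverless-plugin-typescript, the plugins section would become something like the following sketch (ordering relative to serverless-offline matters, with the TypeScript plugin listed first):

```yaml
plugins:
  - serverless-appsync-plugin
  - serverless-plugin-typescript
  - serverless-offline
```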

IamRoleLambdaExecution - Syntax errors in policy

I am facing IamRoleLambdaExecution - Syntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: ********-****-****-****-************)
for the serverless.yml file below.
plugins:
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "dynamodb:PutItem"
        - "dynamodb:GetItem"
      Resource:
        - arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/ordersTable
    - Effect: Allow
      Action:
        - kinesis: "PutRecord"
      Resource:
        - arn:aws:kinesis:#{AWS::Region}:#{AWS::AccountId}:stream/order-events
functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: /order
          method: post
    environment:
      orderTableName: ordersTable
      orderStreamName: order-events
resources:
  Resources:
    orderEventsStream:
      Type: AWS::Kinesis::Stream
      Properties:
        Name: order-events
        ShardCount: 1
    orderTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ordersTable
        AttributeDefinitions:
          - AttributeName: "orderId"
            AttributeType: "S"
        KeySchema:
          - AttributeName: "orderId"
            KeyType: "HASH"
        BillingMode: PAY_PER_REQUEST
serverless details:
- Framework Core: 1.71.3
- Plugin: 3.6.12
- SDK: 2.3.0
- Components: 2.30.11
Based on OP's feedback in the comments, changing kinesis: "PutRecord" to "kinesis:PutRecord" should work.
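The underlying issue is that kinesis: "PutRecord" parses as a YAML mapping ({ kinesis: PutRecord }) rather than the action string the policy expects, which produces the malformed policy document. The corrected statement in full, as a sketch:

```yaml
- Effect: Allow
  Action:
    - "kinesis:PutRecord"
  Resource:
    - arn:aws:kinesis:#{AWS::Region}:#{AWS::AccountId}:stream/order-events
```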

CloudFormation Template is invalid: Template format error: Every Outputs member must contain a Value object

I have an AWS IoT chat application whose UI is in React; the AWS configuration is set up by running the "serverless deploy" command. When executed, the serverless.yml is processed, and it breaks with the error:
CloudFormation Template is invalid: Template format error: Every Outputs member must contain a Value object
the serverless.yml code is given below:
resources:
  Resources:
    UserTable:
      Type: "AWS::DynamoDB::Table"
      Properties:
        TableName: "IotChatUsers"
        AttributeDefinitions:
          - AttributeName: identityId
            AttributeType: S
        KeySchema:
          - AttributeName: identityId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
    ChatTable:
      Type: "AWS::DynamoDB::Table"
      Properties:
        TableName: "IotChatChats"
        AttributeDefinitions:
          - AttributeName: name
            AttributeType: S
        KeySchema:
          - AttributeName: name
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
    ConnectPolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: IotChatConnectPolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "iot:Connect"
              Resource:
                - "*"
    PublicSubscribePolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: IotChatPublicSubscribePolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "iot:Subscribe"
              Resource: { "Fn::Join" : ["",["arn:aws:iot:",{"Ref":"AWS::Region"},":",{"Ref":"AWS::AccountId"},":topicfilter/room/public/*"]] }
    PublicReceivePolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: IotChatPublicReceivePolicy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "iot:Receive"
              Resource: { "Fn::Join" : ["",["arn:aws:iot:",{"Ref":"AWS::Region"},":",{"Ref":"AWS::AccountId"},":topic/room/public/*"]] }
    UserPool:
      Type: "AWS::Cognito::UserPool"
      Properties:
        UserPoolName: iot_chat_api_user_pool
        AutoVerifiedAttributes:
          - email
        MfaConfiguration: OFF
        Schema:
          - AttributeDataType: String
            Name: email
            Required: true
    ReactAppClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        GenerateSecret: false
        RefreshTokenValidity: 200
        UserPoolId:
          Ref: UserPool
    IdentityPool:
      Type: "AWS::Cognito::IdentityPool"
      Properties:
        IdentityPoolName: iot_chat_api_identity_pool
        AllowUnauthenticatedIdentities: false
        CognitoIdentityProviders:
          - ClientId:
              Ref: ReactAppClient
            ProviderName:
              Fn::GetAtt: UserPool.ProviderName
        SupportedLoginProviders:
          graph.facebook.com: ${self:custom.variables.facebook_app_id}
          accounts.google.com: ${self:custom.variables.google_app_id}
    IdentityPoolAuthRole:
      Type: "AWS::IAM::Role"
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Principal:
                Federated:
                  - "cognito-identity.amazonaws.com"
              Action:
                - "sts:AssumeRoleWithWebIdentity"
              Condition:
                StringEquals:
                  cognito-identity.amazonaws.com:aud:
                    Ref: IdentityPool
                ForAnyValue:StringLike:
                  cognito-identity.amazonaws.com:amr: authenticated
        ManagedPolicyArns:
          - arn:aws:iam::aws:policy/AWSIoTDataAccess
        Path: "/"
        Policies:
          - PolicyName: iot-chat-invoke-api-gateway
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - execute-api:Invoke
                  Resource: { "Fn::Join" : ["", ["arn:aws:execute-api:",{"Ref":"AWS::Region"},":",{"Ref":"AWS::AccountId"},":",{"Ref":"ApiGatewayRestApi"},"/*"]] }
    IdentityPoolRoleAttachment:
      Type: AWS::Cognito::IdentityPoolRoleAttachment
      Properties:
        IdentityPoolId:
          Ref: IdentityPool
        Roles:
          authenticated:
            Fn::GetAtt:
              - IdentityPoolAuthRole
              - Arn
    ConfirmUserInvocationPermission:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:InvokeFunction
        FunctionName:
          Fn::GetAtt: AutoConfirmUserLambdaFunction.Arn
        Principal: cognito-idp.amazonaws.com
        SourceArn:
          Fn::GetAtt: UserPool.Arn
  Outputs:
    UserPoolId:
      Description: "The ID of the user pool that is created."
      Value:
        Ref: UserPool
    ReactAppClientId:
      Description: "The ID of the user pool react app client id."
      Value:
        Ref: ReactAppClient
    IdentityPoolId:
      Description: "The ID of the identity pool that is created."
      Value:
        Ref: IdentityPool
    AutoConfirmUserFnArn:
      Description: "The ARN of the Auto Confirm User Lambda function"
      Value:
        Fn::GetAtt:
          - AutoConfirmUserLambdaFunction
          - Arn
    FacebookAppId:
      Description: "Facebook App Id"
      Value: ${self:custom.variables.facebook_app_id}
    GoogleAppId:
      Description: "Google App Id"
      Value: ${self:custom.variables.google_app_id}
I need some insight into what is wrong with this serverless.yml that triggers the validation error.
Environment Information -----------------------------
OS: win32
Node Version: 8.9.1
Serverless Version: 1.25.0
UPDATE:
On parsing the YAML, this is the resulting Outputs node:
"Outputs": {
  "IdentityPoolId": {
    "Description": "The ID of the identity pool that is created.",
    "Value": {
      "Ref": "IdentityPool"
    }
  },
  "FacebookAppId": {
    "Description": "Facebook App Id",
    "Value": "${self:custom.variables.facebook_app_id}"
  },
  "ReactAppClientId": {
    "Description": "The ID of the user pool react app client id.",
    "Value": {
      "Ref": "ReactAppClient"
    }
  },
  "GoogleAppId": {
    "Description": "Google App Id",
    "Value": "${self:custom.variables.google_app_id}"
  },
  "UserPoolId": {
    "Description": "The ID of the user pool that is created.",
    "Value": {
      "Ref": "UserPool"
    }
  },
  "AutoConfirmUserFnArn": {
    "Description": "The ARN of the Auto Confirm User Lambda function",
    "Value": {
      "Fn::GetAtt": [
        "AutoConfirmUserLambdaFunction",
        "Arn"
      ]
    }
  }
}
Update 2:
This is where the complete application comes from: aws-iot-chat-example
CloudFormation often produces vague or hard-to-trace errors, and it never reports errors with line numbers the way many interpreters/compilers/parsers do, so tracking them down is often a process of trial and error.
In your case, the error message only mentions that the problem is in the Outputs section of the template, but not which value; you have six values in that section.
A good troubleshooting technique is to remove the items one or two at a time and re-run the template. Since the output values are just that - only outputs - they are not needed by this template; they only expose data to other templates later in the creation process. Remove them as suggested, and use this technique to isolate the field with the bad value.
A good sanity check is to remove the entire Outputs section and confirm that the rest of the template creates as expected.
Once you track down the field(s) with the problem, you will need to address the primary issue: Every Outputs member must contain a Value object.
To resolve that, track down the object being referenced and trace it back to the source resource or resource attribute. For some reason, those references do not resolve to a valid object.
I will note that in your comments you identified two fields that were both causing the error. Both use a variable reference of the form self:custom.variables.google_app_id - these values are not resolving properly. Check their source as above; I suspect they are not being parsed properly. I do not recognize that construction as valid CloudFormation syntax.
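As a sketch of one way to make those two members resolve (an assumption: Serverless variable defaults can paper over custom.variables being undefined at deploy time; the placeholder values are illustrative):

```yaml
Outputs:
  FacebookAppId:
    Description: "Facebook App Id"
    Value: ${self:custom.variables.facebook_app_id, 'placeholder-facebook-app-id'}
  GoogleAppId:
    Description: "Google App Id"
    Value: ${self:custom.variables.google_app_id, 'placeholder-google-app-id'}
```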

How do I define resources for iamrolestatements for multiple dynamodb tables in serverless framework?

I want to use more than one DynamoDB table in my serverless project. How do I properly define multiple resources in iamRoleStatements?
I have an example serverless.yml
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  environment:
    EXPENSES_TABLE: "${self:service}-${opt:stage, self:provider.stage}-expenses"
    BUDGETS_TABLE: "${self:service}-${opt:stage, self:provider.stage}-budgets"
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.EXPENSES_TABLE}"
      # what is the best way to add the other DB as a resource
functions:
  create:
    handler: expenseTracker/create.create
    events:
      - http:
          path: expenses
          method: post
          cors: true
  list:
    handler: expenseTracker/list.list
    events:
      - http:
          path: expenses
          method: get
          cors: true
  get:
    handler: expenseTracker/get.get
    events:
      - http:
          path: expenses/{id}
          method: get
          cors: true
  update:
    handler: expenseTracker/update.update
    events:
      - http:
          path: expenses/{id}
          method: put
          cors: true
  delete:
    handler: expenseTracker/delete.delete
    events:
      - http:
          path: expenses/{id}
          method: delete
          cors: true
resources:
  Resources:
    DynamoDbExpenses:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.EXPENSES_TABLE}
    DynamoDbBudgets:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.BUDGETS_TABLE}
You can see the area in question in the comments there.
I got it!
The key was just adding a list under the Resource key, but I also learned that it's better to reference the logical IDs you use when provisioning the tables. Full example to follow:
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  environment:
    EXPENSES_TABLE: { "Ref": "DynamoDbExpenses" } # DynamoDbExpenses is a logical ID also used when provisioning below
    BUDGETS_TABLE: { "Ref": "DynamoDbBudgets" }
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] } # you will also see the logical IDs below where they are provisioned
        - { "Fn::GetAtt": ["DynamoDbBudgets", "Arn"] }
functions:
  create:
    handler: expenseTracker/create.create
    events:
      - http:
          path: expenses
          method: post
          cors: true
  createBudget:
    handler: expenseTracker/createBudget.createBudget
    events:
      - http:
          path: budgets
          method: post
          cors: true
  list:
    handler: expenseTracker/list.list
    events:
      - http:
          path: expenses
          method: get
          cors: true
  listBudgets:
    handler: expenseTracker/listBudgets.listBudgets
    events:
      - http:
          path: budgets
          method: get
          cors: true
  get:
    handler: expenseTracker/get.get
    events:
      - http:
          path: expenses/{id}
          method: get
          cors: true
  update:
    handler: expenseTracker/update.update
    events:
      - http:
          path: expenses/{id}
          method: put
          cors: true
  delete:
    handler: expenseTracker/delete.delete
    events:
      - http:
          path: expenses/{id}
          method: delete
          cors: true
resources:
  Resources:
    DynamoDbExpenses: # this is where the logical ID is defined
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    DynamoDbBudgets: # here too
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
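One caveat worth noting (an assumption beyond the original answer): Fn::GetAtt on a table returns only the table ARN, so if a function ever queries a global secondary index, the role also needs the index ARNs, e.g.:

```yaml
Resource:
  - { "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] }
  - { "Fn::Join": ["/", [{ "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] }, "index", "*"]] }
```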
I'd like to post my updates, since I spent time on and learned a lot from this question. The currently accepted answer is not fully functional.
What I added:
1) Make sure that in your handler there is an environment variable TABLE_NAME (or another name; adjust accordingly) as below; it refers to the Lambda function's environment variables:
const params = {
  TableName: process.env.TABLE_NAME,
  Item: {
    ...
  }
}
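The step above can be sketched as a small testable helper (buildPutParams and the Expense shape are illustrative names, not from the original answer):

```typescript
// Build DynamoDB put parameters from the function's TABLE_NAME environment
// variable, which serverless.yml injects per function below.
interface Expense {
  id: string;
  amount: number;
}

export const buildPutParams = (item: Expense) => ({
  TableName: process.env.TABLE_NAME ?? '',
  Item: item,
});
```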
2) Update serverless.yml to assign the table name to each function:
environment:
  TABLE_NAME: { "Ref": "DynamoDbExpenses" }
or
environment:
  TABLE_NAME: { "Ref": "DynamoDbBudgets" }
depending on which table the function targets.
The full serverless.yml is updated here:
service: serverless-expense-tracker
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  environment:
    EXPENSES_TABLE: { "Ref": "DynamoDbExpenses" } # DynamoDbExpenses is a logical ID also used when provisioning below
    BUDGETS_TABLE: { "Ref": "DynamoDbBudgets" }
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { "Fn::GetAtt": ["DynamoDbExpenses", "Arn"] } # you will also see the logical IDs below where they are provisioned
        - { "Fn::GetAtt": ["DynamoDbBudgets", "Arn"] }
functions:
  create:
    handler: expenseTracker/create.create
    environment:
      TABLE_NAME: { "Ref": "DynamoDbExpenses" }
    events:
      - http:
          path: expenses
          method: post
          cors: true
  createBudget:
    handler: expenseTracker/createBudget.createBudget
    environment:
      TABLE_NAME: { "Ref": "DynamoDbBudgets" }
    events:
      - http:
          path: budgets
          method: post
          cors: true
  list:
    handler: expenseTracker/list.list
    environment:
      TABLE_NAME: { "Ref": "DynamoDbExpenses" }
    events:
      - http:
          path: expenses
          method: get
          cors: true
  listBudgets:
    handler: expenseTracker/listBudgets.listBudgets
    environment:
      TABLE_NAME: { "Ref": "DynamoDbBudgets" }
    events:
      - http:
          path: budgets
          method: get
          cors: true
  get:
    handler: expenseTracker/get.get
    environment:
      TABLE_NAME: { "Ref": "DynamoDbExpenses" }
    events:
      - http:
          path: expenses/{id}
          method: get
          cors: true
  update:
    handler: expenseTracker/update.update
    environment:
      TABLE_NAME: { "Ref": "DynamoDbExpenses" }
    events:
      - http:
          path: expenses/{id}
          method: put
          cors: true
  delete:
    handler: expenseTracker/delete.delete
    environment:
      TABLE_NAME: { "Ref": "DynamoDbExpenses" }
    events:
      - http:
          path: expenses/{id}
          method: delete
          cors: true
resources:
  Resources:
    DynamoDbExpenses: # this is where the logical ID is defined
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:service}-${opt:stage, self:provider.stage}-expenses
    DynamoDbBudgets: # here too
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:service}-${opt:stage, self:provider.stage}-budgets
Refer:
serverless environment variables
If your intention is to provide access to all tables in the stack that is being deployed, you can use:
Resource: !Sub arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${AWS::StackName}-*
This way the lambdas in your stack are limited to tables in your stack, and you won't have to update this every time you add a table.