I'm learning to use the Serverless Framework to create and manage AWS services. I've stepped through deploying a Serverless project with the docs on the Serverless site, but for some reason I'm not able to see the DynamoDB tables in the AWS Management Console.
I've checked that the AWS profile I'm using is the correct one, I'm able to post and get data from the table using cURL from the terminal, and I can view data at those endpoints in a browser, but I'm not able to see any reference to the created table anywhere outside of the serverless.yml file. Why is that? Please see the code below (full demo repo at this link: https://github.com/serverless/examples/tree/master/aws-node-rest-api-with-dynamodb).
Would appreciate your help in learning the nuances here. Thanks!
org: justinbell714
app: jb-test-from-docs
service: serverless-rest-api-with-dynamodb

frameworkVersion: ">=1.1.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs10.x
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"

functions:
  create:
    handler: todos/create.create
    events:
      - http:
          path: todos
          method: post
          cors: true
  list:
    handler: todos/list.list
    events:
      - http:
          path: todos
          method: get
          cors: true
  get:
    handler: todos/get.get
    events:
      - http:
          path: todos/{id}
          method: get
          cors: true
  update:
    handler: todos/update.update
    events:
      - http:
          path: todos/{id}
          method: put
          cors: true
  delete:
    handler: todos/delete.delete
    events:
      - http:
          path: todos/{id}
          method: delete
          cors: true

resources:
  Resources:
    TodosDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
Make sure the region these tables are being created in is the same region you have selected in the AWS Management Console.
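A quick way to confirm where the tables actually landed is to list tables per region with the AWS CLI, using the same profile you deploy with (the profile name here is a placeholder; note that Serverless defaults to us-east-1 when no region is configured):

```shell
# List DynamoDB tables in the Serverless default region
aws dynamodb list-tables --profile yourProfile --region us-east-1

# Compare against the region currently selected in the console
aws dynamodb list-tables --profile yourProfile --region us-west-2
```

If the table shows up in one region but not the other, switch the console's region selector (top-right) to match.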
I have two AWS Lambda functions and three stacks: dev, test, and PROD. I want to deploy one specific Lambda function (trial) only to the dev and test stages, not to the PROD stage. How can I achieve that? Here is my serverless.yml:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST

resources:
  Resources:
    taskTokenTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:custom.stage}-tokenTable
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
After doing some research I found out this can be done using the serverless-plugin-ifelse plugin, by defining your conditions under the custom block. You can see the same in the Serverless Docs, and the plugin is available on npm:
custom:
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.<functionName>
Complete serverless.yml file:
service:
  name: demo-app

# Add the serverless-webpack plugin
plugins:
  - serverless-webpack
  - serverless-plugin-ifelse
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  profile: serverless-admin

custom:
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.trial # must match the function name below ("trail" is a common typo here)

functions:
  toggle:
    handler: src/functions/unleash-toggle/handler.main
    timeout: 900
    events:
      - http:
          path: /toggle
          method: POST
  trial:
    handler: src/functions/city/handler.main
    timeout: 900
    events:
      - http:
          path: /trial
          method: POST
The same thing can be achieved with another plugin, serverless-plugin-conditional-functions.
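With that plugin the condition lives on the function itself rather than in the custom block. Based on the plugin's README, a sketch would look something like the following; the `enabled` expression is that plugin's convention, not core Serverless syntax, so treat this as an assumption to verify against the plugin's documentation:

```yaml
plugins:
  - serverless-plugin-conditional-functions

functions:
  trial:
    handler: src/functions/city/handler.main
    # deploy this function in every stage except prod
    enabled: '"${opt:stage}" != "prod"'
```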
I am following a serverless tutorial and I am trying to send a notification every time an image is uploaded to the S3 bucket. I've created a sendUploadNotifications function under functions, and instead of adding an event to the function I've set up the NotificationConfiguration under the AttachmentsBucket, as well as created a new sendUploadNotificationsPermission resource under resources.
But when I try to deploy my serverless app, I get the following error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource sendUploadNotificationsLambdaFunction
The error seems to stem from the way I am referencing the FunctionName under the sendUploadNotificationsPermission resource. I've tried different ways of referencing the function name, but to no avail; I still get the same error.
My serverless.yml file:
service: serverless-udagram2
frameworkVersion: '2'

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'ap-southeast-1'}
  environment:
    GROUPS_TABLE: groups-${self:provider.stage}
    IMAGES_TABLE: images-${self:provider.stage}
    IMAGE_ID_INDEX: ImageIdIndex
    IMAGES_S3_BUCKET: branded-serverless-udagram-images-${self:provider.stage}
    SIGNED_URL_EXPIRATION: 300
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Scan
        - dynamodb:PutItem
        - dynamodb:GetItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.GROUPS_TABLE}
    - Effect: Allow
      Action:
        - dynamodb:PutItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}/index/${self:provider.environment.IMAGE_ID_INDEX}
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*

functions:
  getGroups:
    handler: src/lambda/http/getGroups.handler
    events:
      - http:
          path: groups
          method: get
          cors: true
  createGroup:
    handler: src/lambda/http/createGroup.handler
    events:
      - http:
          path: groups
          method: post
          cors: true
          request:
            schema:
              application/json: ${file(models/create-group-request.json)}
  getImages:
    handler: src/lambda/http/getImages.handler
    events:
      - http:
          path: groups/{groupId}/images
          method: get
          cors: true
  getImage:
    handler: src/lambda/http/getImage.handler
    events:
      - http:
          path: images/{imageId}
          method: get
          cors: true
  createImage:
    handler: src/lambda/http/createImage.handler
    events:
      - http:
          path: groups/{groupId}/images
          method: post
          cors: true
          request:
            schema:
              application/json: ${file(models/create-image-request.json)}
  sendUploadNotifications:
    handler: src/lambda/s3/sendNotifications.handler

resources:
  Resources:
    # API Gateway validates requests against the JSON schemas declared per function under schema
    RequestBodyValidator:
      Type: AWS::ApiGateway::RequestValidator
      Properties:
        Name: 'request-body-validator'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: true
    GroupsDynamoDBTable:
      Type: AWS::DynamoDB::Table
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:provider.environment.GROUPS_TABLE}
    ImagesDynamoDBTable:
      Type: AWS::DynamoDB::Table
      Properties:
        AttributeDefinitions:
          - AttributeName: groupId
            AttributeType: S
          - AttributeName: timestamp
            AttributeType: S
          - AttributeName: imageId
            AttributeType: S
        KeySchema:
          - AttributeName: groupId
            KeyType: HASH # partition key
          - AttributeName: timestamp
            KeyType: RANGE # sort key
        GlobalSecondaryIndexes:
          - IndexName: ${self:provider.environment.IMAGE_ID_INDEX}
            KeySchema:
              - AttributeName: imageId
                KeyType: HASH
            Projection:
              ProjectionType: ALL
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:provider.environment.IMAGES_TABLE}
    # Bucket for file uploads
    AttachmentsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:provider.environment.IMAGES_S3_BUCKET}
        NotificationConfiguration: # Sends a notification when an image has been uploaded
          LambdaConfigurations:
            - Event: s3:ObjectCreated:*
              Function: !GetAtt sendUploadNotificationsLambdaFunction.Arn
        CorsConfiguration:
          CorsRules:
            - AllowedOrigins:
                - "*"
              AllowedHeaders:
                - "*"
              AllowedMethods:
                - 'GET'
                - 'PUT'
                - 'POST'
                - 'DELETE'
                - 'HEAD'
              MaxAge: 3000
    sendUploadNotificationsPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: !GetAtt sendUploadNotificationsLambdaFunction.Arn
        Action: lambda:InvokeFunction
        Principal: s3.amazonaws.com
        SourceAccount: !Ref AWS::AccountId
        SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        PolicyDocument:
          Id: MyPolicy
          Version: "2012-10-17"
          Statement:
            - Sid: PublicReadForGetBucketObjects
              Effect: Allow
              Principal: '*'
              Action: 's3:GetObject'
              Resource: 'arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*'
        Bucket:
          Ref: AttachmentsBucket
I've tried changing the name of the function in both the sendUploadNotificationsPermission and the AttachmentsBucket by appending LambdaFunction to the end of the function name, but I still get the same error.
Any help with this error would be appreciated.
You are trying to reference something that doesn't exist in the template: in the CloudFormation Resources section you refer to sendUploadNotificationsLambdaFunction, but no resource with that logical ID is defined. To reference the function you defined as sendUploadNotifications, you need to use the logical ID Serverless generates for it (or construct the ARN yourself inside the Resources section).
To generate the logical ID for CloudFormation, the framework transforms the name specified in serverless.yml according to the following scheme:
Transform the leading character into uppercase
Transform - into Dash
Transform _ into Underscore
and appends LambdaFunction, giving SendUploadNotificationsLambdaFunction in your case.
There are two ways. First, reference the generated logical ID inside the Resources section of the template:
sendUploadNotificationsPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt SendUploadNotificationsLambdaFunction.Arn
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Second, construct the ARN yourself using Fn::Join:
sendUploadNotificationsPermission:
  Type: AWS::Lambda::Permission
  Properties:
    # A Lambda ARN has the form arn:aws:lambda:<region>:<account>:function:<name>,
    # so the "function" segment must be part of the joined string.
    FunctionName: !Join [":", ["arn:aws:lambda", !Ref "AWS::Region", !Ref "AWS::AccountId", "function", "${self:service}-${self:provider.stage}-sendUploadNotifications"]]
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Solved it. The 'references undefined resource' error was caused by the fact that when the serverless.yml file compiles, the function name is capitalized: sendUploadNotifications becomes SendUploadNotificationsLambdaFunction.
Changed:
FunctionName: !Ref sendUploadNotificationsLambdaFunction
to:
FunctionName: !Ref SendUploadNotificationsLambdaFunction
It now deploys without an issue.
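When in doubt about which logical IDs Serverless generated, you can inspect the compiled CloudFormation template instead of guessing: `serverless package` compiles the service without deploying and writes the template into the `.serverless` directory:

```shell
# Compile the service without deploying it
serverless package

# Search the generated template for the Lambda function logical IDs
grep -o '"[A-Za-z0-9]*LambdaFunction"' \
  .serverless/cloudformation-template-update-stack.json | sort -u
```

Any `!Ref` or `!GetAtt` in your resources block must match one of the logical IDs this prints.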
I am currently using the serverless-iam-roles-per-function plugin to give each of my lambda functions their own IAM roles. But when I deploy it, it seems like it still creates a default lambdaRole that contains all the functions. I did not define iamRoleStatements or VPC in the provider section of the serverless.yml. Am I missing something? I would like to only have roles per function. Any feedback would be appreciated.
Snippet of yml:
provider:
  name: aws
  runtime: go1.x
  stage: ${env:SLS_STAGE}
  region: ${env:SLS_REGION}
  environment:
    DB_HOSTS: ${env:SLS_DB_HOSTS}
    DB_NAME: ${env:SLS_DB_NAME}
    DB_USERNAME: ${env:SLS_DB_USERNAME}
    DB_PASSWORD: ${env:SLS_DB_PASSWORD}
    TYPE: ${env:SLS_ENV_TYPE}

functions:
  function1:
    package:
      exclude:
        - ./**
      include:
        - ./bin/function_1
    handler: bin/function_1
    vpc: ${self:custom.vpc}
    iamRoleStatements: ${self:custom.iamRoleStatements}
    events:
      - http:
          path: products
          method: get
          private: true
          cors: true
          authorizer: ${self:custom.authorizer.function_1}

custom:
  vpc:
    securityGroupIds:
      - sg-00000
    subnetIds:
      - subnet-00001
      - subnet-00002
      - subnet-00003
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
        - ssm:GetParameter
        - ssm:GetParametersByPath
        - ssm:PutParameter
      Resource: "*"
Facing the following error: IamRoleLambdaExecution - Syntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: ********-****-****-****-************) for the below serverless.yml file:
plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "dynamodb:PutItem"
        - "dynamodb:GetItem"
      Resource:
        - arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/ordersTable
    - Effect: Allow
      Action:
        - kinesis: "PutRecord"
      Resource:
        - arn:aws:kinesis:#{AWS::Region}:#{AWS::AccountId}:stream/order-events

functions:
  createOrder:
    handler: handler.createOrder
    events:
      - http:
          path: /order
          method: post
    environment:
      orderTableName: ordersTable
      orderStreamName: order-events

resources:
  Resources:
    orderEventsStream:
      Type: AWS::Kinesis::Stream
      Properties:
        Name: order-events
        ShardCount: 1
    orderTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ordersTable
        AttributeDefinitions:
          - AttributeName: "orderId"
            AttributeType: "S"
        KeySchema:
          - AttributeName: "orderId"
            KeyType: "HASH"
        BillingMode: PAY_PER_REQUEST
serverless details:
- Framework Core: 1.71.3
- Plugin: 3.6.12
- SDK: 2.3.0
- Components: 2.30.11
Based on OP's feedback in the comments, changing `- kinesis: "PutRecord"` to the single string `- "kinesis:PutRecord"` should work. As written, YAML parses the line as a mapping with the key `kinesis` rather than as the IAM action string `kinesis:PutRecord`, which is what makes the generated policy document malformed.
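In other words, the whole action needs to be one quoted scalar so YAML doesn't treat the colon as a key/value separator; the corrected statement would be:

```yaml
iamRoleStatements:
  - Effect: Allow
    Action:
      - "kinesis:PutRecord" # one string, not a `kinesis:` key with a "PutRecord" value
    Resource:
      - arn:aws:kinesis:#{AWS::Region}:#{AWS::AccountId}:stream/order-events
```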
I'm attempting to create an S3 bucket with Serverless, which works; however, in order to manipulate files in it I need a bucket policy. I'm having a hard time understanding where and how to add a policy that uses the generated S3 bucket name created when Serverless deploys for the first time.
##serverless.yml##

service: vcc-nametags-api

# Use the serverless-webpack plugin to transpile ES6
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-ding

# serverless-webpack configuration
# Enable auto-packing of external modules
custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set our DynamoDB throughput for prod and all other non-prod stages.
  # Load our webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  environment: ${file(env.yml):${self:custom.stage}, file(env.yml):default}

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  # These environment variables are made available to our functions
  # under process.env.
  environment:
    S3DBBucketName:
      Ref: NametagsDatabaseBucket

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /tags
  # - method: POST request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #   domain api call
  # - authorizer: authenticate using the AWS IAM role
  create:
    handler: create.main
    events:
      - http:
          path: tags
          method: post
          cors: true
  get:
    # Defines an HTTP API endpoint that calls the main function in get.js
    # - path: url path is /tags/{id}
    # - method: GET request
    handler: get.main
    events:
      - http:
          path: tags/{id}
          method: get
          cors: true
  list:
    # Defines an HTTP API endpoint that calls the main function in list.js
    # - path: url path is /tags
    # - method: GET request
    handler: list.main
    events:
      - http:
          path: tags
          method: get
          cors: true
  update:
    # Defines an HTTP API endpoint that calls the main function in update.js
    # - path: url path is /tags/{id}
    # - method: PUT request
    handler: update.main
    events:
      - http:
          path: tags/{id}
          method: put
          cors: true
  delete:
    # Defines an HTTP API endpoint that calls the main function in delete.js
    # - path: url path is /tags/{id}
    # - method: DELETE request
    handler: delete.main
    events:
      - http:
          path: tags/{id}
          method: delete
          cors: true

# Create our resources with separate CloudFormation templates
resources:
  # S3DB
  - ${file(resources/s3-database.yml)}
##s3-database.yml##

Resources:
  NametagsDatabaseBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
  NametagsDatabaseBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: NametagsDatabaseBucket
      PolicyDocument:
        Statement:
          - Sid: PublicReadGetObject
            Effect: Allow
            Principal: "*"
            Action:
              - "s3:DeleteObject"
              - "s3:GetObject"
              - "s3:ListBucket"
              - "s3:PutObject"
            Resource:
              Fn::Join:
                - ""
                - - "arn:aws:s3:::"
                  - Ref: NametagsDatabaseBucket
                  - "/*"

# Print out the name of the bucket that is created
Outputs:
  NametagsDatabaseBucketName:
    Value:
      Ref: NametagsDatabaseBucket
I've tried various combinations I've found on the internet, as well as adding it to an IAM role property in the serverless.yml file, but I can't seem to get anything to work.
The resource reference name seems to matter; I have always had to use the name of the bucket in the resource name. For example, a bucket named www.example.com needs a reference name of S3BucketWwwexamplecom. I also notice that the BucketName property is missing from your example.
This is from a working example for a static website with a bucket policy:
resources:
  Resources:
    S3BucketWwwexamplecom:
      Type: AWS::S3::Bucket
      DeletionPolicy: Delete
      Properties:
        BucketName: ${self:custom.s3WwwBucket}
        CorsConfiguration:
          CorsRules:
            - AllowedMethods:
                - PUT
                - GET
                - POST
                - HEAD
              AllowedOrigins:
                - "https://${self:custom.myDomain}"
              AllowedHeaders:
                - "*"
        AccessControl: PublicRead
        WebsiteConfiguration:
          IndexDocument: index.html
    BucketPolicyWwwexamplecom:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        PolicyDocument:
          Statement:
            - Sid: PublicReadForGetBucketObjects
              Effect: Allow
              Principal: '*'
              Action:
                - 's3:GetObject'
              Resource: arn:aws:s3:::${self:custom.s3WwwBucket}/*
        Bucket:
          Ref: S3BucketWwwexamplecom
Since you are using a Lambda to upload, you should create an IAM role for your Lambda and an IAM policy with only the permissions required for operation. You might accomplish this by using the following excerpt in your CloudFormation:
AWSTemplateFormatVersion: '2010-09-09'
Description: My Template
Resources:
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      RoleName: !Sub ${AWS::StackName}-LambdaRole
  S3Policy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: S3_Writer
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - s3:*
            Resource: !Sub
              - arn:aws:s3:::${BucketName}/*
              - BucketName: !Ref NametagsDatabaseBucket
      Roles:
        - !Ref LambdaRole # attach the policy to the role defined above
Outputs:
  LambdaRole:
    Value: !Sub "${LambdaRole.Arn}"
    Export:
      Name: !Sub ${AWS::StackName}-LambdaRole
Then in your serverless.yml, refer to the role created above, using something like this to reference the execution role:
service: vcc-nametags-api

provider:
  role: ${cf:${env:YOUR_STACK_ENV, 'YOUR_STACK_NAME'}.LambdaRole}
We have a setup like this working in several projects, I hope it works for you.