I'm using the AWS Backup service to create backups of my DynamoDB tables, but I don't like that approach because it's very manual and not replicable.
How can I build an AWS Backup plan from CloudFormation (Designer or a template)?
I've searched for this but haven't been able to figure it out.
Note: I don't want to make the backups using a scheduled event with Lambda. I want to use AWS Backup, but with a CloudFormation template for easy creation / updates.
Description: "Backup Plan template to back up all resources tagged with
backup=daily daily at 5am UTC."
Resources:
KMSKey:
Type: AWS::KMS::Key
Properties:
Description: "Encryption key for daily"
EnableKeyRotation: True
Enabled: True
KeyPolicy:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
"AWS": { "Fn::Sub": "arn:aws:iam::***********:root" }
# "AWS": 'arn:aws:iam::***********:root'
Action:
- kms:*
Resource: "*"
BackupVaultWithDailyBackups:
Type: "AWS::Backup::BackupVault"
Properties:
BackupVaultName: "BackupVaultWithDailyBackups"
EncryptionKeyArn: { "Fn::GetAtt": [ KMSKey, Arn ] } #${self:custom.keyArn}
BackupPlanWithDailyBackups:
Type: "AWS::Backup::BackupPlan"
Properties:
BackupPlan:
BackupPlanName: "BackupPlanWithDailyBackups"
BackupPlanRule:
-
RuleName: DailyBackups
ScheduleExpression: cron(0 5 ? * * *)
StartWindowMinutes: 480
TargetBackupVault: {Ref: BackupVaultWithDailyBackups}
Lifecycle:
DeleteAfterDays: 35
-
RuleName: WeeklyBackups
ScheduleExpression: cron(0 5 ? * 7 *)
TargetBackupVault: {Ref: BackupVaultWithDailyBackups}
StartWindowMinutes: 480
Lifecycle:
DeleteAfterDays: 90
-
RuleName: MonthlyBackups
ScheduleExpression: cron(0 5 1 * ? *)
TargetBackupVault: {Ref: BackupVaultWithDailyBackups}
StartWindowMinutes: 480
Lifecycle:
MoveToColdStorageAfterDays: 90
DeleteAfterDays: 1825
DependsOn: BackupVaultWithDailyBackups
# BackupRole:
# Type: "AWS::IAM::Role"
# Properties:
# AssumeRolePolicyDocument:
# Version: "2012-10-17"
# Statement:
# -
# Effect: "Allow"
# Principal:
# Service:
# - "backup.amazonaws.com"
# Action:
# - "sts:AssumeRole"
# ManagedPolicyArns:
# -
# "arn:aws:iam::**********:role/service-role/AWSBackupDefaultServiceRole"
TagBasedBackupSelection:
Type: "AWS::Backup::BackupSelection"
Properties:
BackupSelection:
SelectionName: "TagBasedBackupSelection"
IamRoleArn: "arn:aws:iam::***********:role/service-role/AWSBackupDefaultServiceRole"
ListOfTags:
-
ConditionType: "STRINGEQUALS"
ConditionKey: "backup"
ConditionValue: "dev-pci"
-
ConditionType: "STRINGEQUALS"
ConditionKey: "backup"
ConditionValue: "uat-pci"
-
ConditionType: "STRINGEQUALS"
ConditionKey: "backup"
ConditionValue: "prod-pci"
BackupPlanId: {Ref: BackupPlanWithDailyBackups}
DependsOn: BackupPlanWithDailyBackups
Note: Replace *********** with your AWS account ID.
You also need to add the backup tag to your DynamoDB table, like:
DDBTableWithDailyBackupTag:
  Type: "AWS::DynamoDB::Table"
  Properties:
    TableName: "TestTable"
    AttributeDefinitions:
      - AttributeName: "Album"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "Album"
        KeyType: "HASH"
    ProvisionedThroughput:
      ReadCapacityUnits: "5"
      WriteCapacityUnits: "5"
    Tags:
      - Key: "backup"
        Value: "daily"
Description: "Backup Plan template to back up all resources tagged with backup=daily daily at 5am UTC."
Resources:
KMSKey:
Type: AWS::KMS::Key
Properties:
Description: "Encryption key for daily"
EnableKeyRotation: True
Enabled: True
KeyPolicy:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
"AWS": { "Fn::Sub": "arn:${AWS::Partition}:iam::${AWS::AccountId}:root" }
Action:
- kms:*
Resource: "*"
BackupVaultWithDailyBackups:
Type: "AWS::Backup::BackupVault"
Properties:
BackupVaultName: "BackupVaultWithDailyBackups"
EncryptionKeyArn: !GetAtt KMSKey.Arn
BackupPlanWithDailyBackups:
Type: "AWS::Backup::BackupPlan"
Properties:
BackupPlan:
BackupPlanName: "BackupPlanWithDailyBackups"
BackupPlanRule:
-
RuleName: "RuleForDailyBackups"
TargetBackupVault: !Ref BackupVaultWithDailyBackups
ScheduleExpression: "cron(0 5 ? * * *)"
DependsOn: BackupVaultWithDailyBackups
DDBTableWithDailyBackupTag:
Type: "AWS::DynamoDB::Table"
Properties:
TableName: "TestTable"
AttributeDefinitions:
-
AttributeName: "Album"
AttributeType: "S"
KeySchema:
-
AttributeName: "Album"
KeyType: "HASH"
ProvisionedThroughput:
ReadCapacityUnits: "5"
WriteCapacityUnits: "5"
Tags:
-
Key: "backup"
Value: "daily"
BackupRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
-
Effect: "Allow"
Principal:
Service:
- "backup.amazonaws.com"
Action:
- "sts:AssumeRole"
ManagedPolicyArns:
-
"arn:aws:iam::aws:policy/service-role/service role"
TagBasedBackupSelection:
Type: "AWS::Backup::BackupSelection"
Properties:
BackupSelection:
SelectionName: "TagBasedBackupSelection"
IamRoleArn: !GetAtt BackupRole.Arn
ListOfTags:
-
ConditionType: "STRINGEQUALS"
ConditionKey: "backup"
ConditionValue: "daily"
BackupPlanId: !Ref BackupPlanWithDailyBackups
DependsOn: BackupPlanWithDailyBackups
References:
https://docs.aws.amazon.com/aws-backup/latest/devguide/integrate-cloudformation-with-aws-backup.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_Backup.html
Related
I have the following config to read data from S3 that Kinesis Firehose writes there:
S3AthenaStore:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: ${self:custom.s3AthenaStore}
AnalysisGlueDatabase:
  Type: AWS::Glue::Database
  Properties:
    CatalogId: !Ref AWS::AccountId
    DatabaseInput:
      Name: !Join
        - ''
        - - '${self:custom.glueName}-'
          - 'db'
      Description: "Analysis aws Glue database"
  DependsOn:
    - S3AthenaStore
AnalyticsGlueRole:
  Type: AWS::IAM::Role
  DependsOn:
    - S3AnalyticsStore
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Principal:
            Service:
              - "glue.amazonaws.com"
          Action:
            - "sts:AssumeRole"
    Path: "/"
    ManagedPolicyArns: ['arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole']
    Policies:
      - PolicyName: "S3BucketAccessPolicy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - "s3:GetObject"
                - "s3:PutObject"
              Resource:
                - !Join
                  - ''
                  - - !GetAtt S3AnalyticsStore.Arn
                    - "*"
AnalyticsGlueCrawler:
  Type: AWS::Glue::Crawler
  Properties:
    Name: "AnalysisCrawler"
    Role: !GetAtt AnalyticsGlueRole.Arn
    DatabaseName: !Ref AnalysisGlueDatabase
    Targets:
      S3Targets:
        - Path: !Ref S3AnalyticsStore
    SchemaChangePolicy:
      UpdateBehavior: "LOG"
      DeleteBehavior: "LOG"
    Schedule:
      ScheduleExpression: "cron(00 0/1 * * ? *)"
    RecrawlPolicy:
      RecrawlBehavior: CRAWL_NEW_FOLDERS_ONLY
  DependsOn:
    - AnalyticsGlueRole
    - AnalysisGlueDatabase
AnalyticsAthenaWorkGroup:
  Type: AWS::Athena::WorkGroup
  Properties:
    Name: ${self:service}-${self:provider.stage}-wg
    WorkGroupConfiguration:
      ResultConfiguration:
        OutputLocation: !Join
          - ''
          - - 's3://'
            - !Ref S3AthenaStore
  DependsOn:
    - S3AthenaStore
The data is in folders with the following pattern: ${bucket}/${year}/${month}/${date}/${hour}/event-collection-stream-staging-deliver-1-2022-07-14-23-51-22-cdb2f06a-e825-47d0-a781-efd4195ab88d.gz and it looks like:
{"anonymous_id":"123","url":"-","event_type":"pageView","timestamp":"2022-07-12T03:29:47.186Z","source_ip":"69.113.177.222","user_agent":"curl/7.54.0"}
{"anonymous_id":"123","url":"-","event_type":"pageView","timestamp":"2022-07-12T03:29:50.726Z","source_ip":"69.113.177.222","user_agent":"curl/7.54.0"}
{"anonymous_id":"123","url":"-","event_type":"pageView","timestamp":"2022-07-12T03:29:53.628Z","source_ip":"69.113.177.222","user_agent":"curl/7.54.0"}
My question is: how come my data is automatically partitioned in Athena? When I run select * from page_view_store_staging, it returns my columns plus four (4) partition columns, partition_0 through partition_3, with partition_0 having the value 2022, etc.
I did not specify this in my config, did I?
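For context (an editorial note beyond the original post): a Glue crawler infers partitions from the S3 folder layout, and because these folders are bare values (2022/07/14/23) rather than Hive-style key=value pairs, the crawler falls back to the generic names partition_0 through partition_3. Below is a sketch of a Firehose custom prefix that would produce named partitions instead; the delivery stream resource and its role are hypothetical, since the original config does not show them:
EventDeliveryStream: # hypothetical resource; not part of the original config
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamType: DirectPut
    ExtendedS3DestinationConfiguration:
      BucketARN: !GetAtt S3AnalyticsStore.Arn
      RoleARN: !GetAtt FirehoseDeliveryRole.Arn # hypothetical role
      # Hive-style key=value folders let the crawler name the partitions year/month/day/hour
      Prefix: "year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/"
      # an error prefix is required once Prefix contains expressions
      ErrorOutputPrefix: "errors/!{firehose:error-output-type}/"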
I configured my AWS stack using AWS CloudFormation as follows:
Resources:
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: "PK"
          AttributeType: "S"
        - AttributeName: "SK"
          AttributeType: "S"
        - AttributeName: "GSI1_PK"
          AttributeType: "S"
        - AttributeName: "GSI2_PK"
          AttributeType: "S"
        - AttributeName: "GSI1_SK"
          AttributeType: "S"
        - AttributeName: "GSI2_SK"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "PK"
          KeyType: "HASH"
        - AttributeName: "SK"
          KeyType: "RANGE"
      GlobalSecondaryIndexes:
        - IndexName: "line_tickets"
          KeySchema:
            - AttributeName: "GSI1_PK"
              KeyType: "HASH"
            - AttributeName: "GSI1_SK"
              KeyType: "RANGE"
          Projection:
            ProjectionType: "ALL"
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 5
        - IndexName: "users_tickets"
          KeySchema:
            - AttributeName: "GSI2_PK"
              KeyType: "HASH"
            - AttributeName: "GSI2_SK"
              KeyType: "RANGE"
          Projection:
            ProjectionType: "ALL"
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 5
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
      TableName: !Ref TableName
      Tags:
        - Key: ENV
          Value: !Ref ENVName
    DependsOn:
      - "LambdaExecutionRole"
  GetAccountLambdaFun:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: GetAccountLambdaFun
      Description: Retrieve account from DB.
      Role: !GetAtt LambdaExecutionRole.Arn
      Handler: index.handler
      MemorySize: 128
      Runtime: !Ref LambdaRuntime
      Environment:
        Variables:
          DB_END_POINT: !Ref LambdaDBEndPoint
          TABLE_NAME: !Ref TableName
          DB_API_VERSION: !Ref LambdaDBApiVersion
          DB_AWS_REGION: !Ref "AWS::Region"
      Code:
        S3Bucket: !Ref LambdaHostS3BucketName
        S3Key: !Sub
          - "${Folder}/account-get-lambda-fun.zip"
          - Folder: !Ref ENVName
      Tags:
        - Key: ENV
          Value: !Ref ENVName
    DependsOn:
      - "DynamoDBTable"
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      RoleName: LambdaExecutionRole
      Path: /
      ManagedPolicyArns:
        - !Ref LambdaLogGroupPolicy
        - !Ref DynamoFullAccessPolicy
  LambdaLogGroupPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: Lambda log group policy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - 'logs:CreateLogGroup'
            Resource: !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:*
          - Effect: Allow
            Action:
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource: !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/*:*
  DynamoFullAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: DynamoDB full access policy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Sid: "DynamoDBIndexAndStreamAccess"
            Action:
              - 'dynamodb:GetShardIterator'
              - 'dynamodb:Scan'
              - 'dynamodb:Query'
              - 'dynamodb:DescribeStream'
              - 'dynamodb:GetRecords'
              - 'dynamodb:ListStreams'
            Resource:
              - !Sub
                - arn:aws:dynamodb:*:*:table/*/index/*
                - tableName: !Ref TableName
              - !Sub
                - arn:aws:dynamodb:*:*:table/*/stream/*
                - tableName: !Ref TableName
          - Effect: Allow
            Sid: "DynamoDBTableAccess"
            Action:
              - 'dynamodb:BatchGetItem'
              - 'dynamodb:BatchWriteItem'
              - 'dynamodb:ConditionCheckItem'
              - 'dynamodb:PutItem'
              - 'dynamodb:DescribeTable'
              - 'dynamodb:DeleteItem'
              - 'dynamodb:GetItem'
              - 'dynamodb:Scan'
              - 'dynamodb:Query'
              - 'dynamodb:UpdateItem'
            Resource:
              - !Sub
                - arn:aws:dynamodb:*:*:table/*
                - tableName: !Ref TableName
          - Effect: Allow
            Sid: "DynamoDBDescribeLimitsAccess"
            Action: 'dynamodb:DescribeLimits'
            Resource:
              - !Sub
                - arn:aws:dynamodb:*:*:table/*
                - tableName: !Ref TableName
              - !Sub
                - arn:aws:dynamodb:*:*:table/*/index/*
                - tableName: !Ref TableName
When I try to test the Lambda function via the AWS console I get an error:
ERROR DynamoDB GET REQUEST failed AccessDeniedException: User:
arn:aws:sts::XXXX:assumed-role/LambdaExecutionRole/GetAccountLambdaFun
is not authorized to perform: dynamodb:Query on resource:
arn:aws:dynamodb:eu-central-1:XXXX:table/TableName
I am not sure why that is, as I configured DynamoFullAccessPolicy, which should grant the Lambda access to this DB.
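A side note (an editorial observation, not part of the original post): each !Sub above defines a tableName variable that its template string never references, so the ARNs remain account-wide wildcards. If the intent was to scope the statements to the table, a sketch would be:
Resource:
  - !Sub
    - arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${tableName}
    - tableName: !Ref TableName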
I am following a serverless tutorial and I am trying to send a notification every time an image is uploaded to the S3 bucket. I've created a sendUploadNotifications function under functions, and instead of adding an event to the function I've set up the NotificationConfiguration under the AttachmentsBucket, as well as created a new sendUploadNotificationsPermission resource under resources.
But I get the following error when I deploy my serverless app:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource sendUploadNotificationsLambdaFunction
The error seems to stem from the way I am referencing the FunctionName under the sendUploadNotificationsPermission resource.
I've tried different ways of referencing the function name, but to no avail; I still get the same error.
My serverless.yml file
service: serverless-udagram2
frameworkVersion: '2'
provider:
name: aws
runtime: nodejs12.x
lambdaHashingVersion: 20201221
stage: ${opt:stage, 'dev'}
region: ${opt:region, 'ap-southeast-1'}
environment:
GROUPS_TABLE: groups-${self:provider.stage}
IMAGES_TABLE: images-${self:provider.stage}
IMAGE_ID_INDEX: ImageIdIndex
IMAGES_S3_BUCKET: branded-serverless-udagram-images-${self:provider.stage}
SIGNED_URL_EXPIRATION: 300
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Scan
- dynamodb:PutItem
- dynamodb:GetItem
- dynamodb:Query
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.GROUPS_TABLE}
- Effect: Allow
Action:
- dynamodb:PutItem
- dynamodb:Query
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:PutItem
Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.IMAGES_TABLE}/index/${self:provider.environment.IMAGE_ID_INDEX}
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
Resource: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*
functions:
getGroups:
handler: src/lambda/http/getGroups.handler
events:
- http:
path: groups
method: get
cors: true
createGroup:
handler: src/lambda/http/createGroup.handler
events:
- http:
path: groups
method: post
cors: true
request:
schema:
application/json: ${file(models/create-group-request.json)}
getImages:
handler: src/lambda/http/getImages.handler
events:
- http:
path: groups/{groupId}/images
method: get
cors: true
getImage:
handler: src/lambda/http/getImage.handler
events:
- http:
path: images/{imageId}
method: get
cors: true
createImage:
handler: src/lambda/http/createImage.handler
events:
- http:
path: groups/{groupId}/images
method: post
cors: true
request:
schema:
application/json: ${file(models/create-image-request.json)}
sendUploadNotifications:
handler: src/lambda/s3/sendNotifications.handler
resources:
Resources:
# API gateway validates the request in accordance with json schemas that are identified in the function section under schema
RequestBodyValidator:
Type: AWS::ApiGateway::RequestValidator
Properties:
Name: 'request-body-validator'
RestApiId:
Ref: ApiGatewayRestApi
ValidateRequestBody: true
ValidateRequestParameters: true
GroupsDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
BillingMode: PAY_PER_REQUEST
TableName: ${self:provider.environment.GROUPS_TABLE}
ImagesDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: groupId
AttributeType: S
- AttributeName: timestamp
AttributeType: S
- AttributeName: imageId
AttributeType: S
KeySchema:
- AttributeName: groupId
KeyType: HASH #partition key
- AttributeName: timestamp
KeyType: RANGE #sort key
GlobalSecondaryIndexes:
- IndexName: ${self:provider.environment.IMAGE_ID_INDEX}
KeySchema:
- AttributeName: imageId
KeyType: HASH
Projection:
ProjectionType: ALL
BillingMode: PAY_PER_REQUEST
TableName: ${self:provider.environment.IMAGES_TABLE}
# Bucket for file uploads
AttachmentsBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:provider.environment.IMAGES_S3_BUCKET}
NotificationConfiguration: # Sends notification when image has been uploaded
LambdaConfigurations: #
- Event: s3:ObjectCreated:*
Function: !GetAtt sendUploadNotificationsLambdaFunction.Arn
CorsConfiguration:
CorsRules:
-
AllowedOrigins:
- "*"
AllowedHeaders:
- "*"
AllowedMethods:
- 'GET'
- 'PUT'
- 'POST'
- 'DELETE'
- 'HEAD'
MaxAge: 3000
sendUploadNotificationsPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !GetAtt sendUploadNotificationsLambdaFunction.Arn
Action: lambda:InvokeFunction
Principal: s3.amazonaws.com
SourceAccount: !Ref AWS::AccountId #!Ref
SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
BucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
PolicyDocument:
Id: MyPolicy
Version: "2012-10-17"
Statement:
- Sid: PublicReadForGetBucketObjects
Effect: Allow
Principal: '*'
Action: 's3:GetObject'
Resource: 'arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}/*'
Bucket:
Ref: AttachmentsBucket
I've tried changing the name of the function in both the sendUploadNotificationsPermission and the AttachmentsBucket by appending LambdaFunction to the end of the function name, but I'm still getting the same error.
Any help with this error would be appreciated.
You are trying to reference something which doesn't exist in the template's Resources section:
sendUploadNotificationsLambdaFunction
If you want to reference the function you have defined as
sendUploadNotifications
you need to use the name the framework generates for it inside the Resources section.
To generate the logical ID for CloudFormation, the plugin transforms the name specified in serverless.yml based on the following scheme:
Transform the leading character into uppercase
Transform - into Dash
Transform _ into Underscore
Append LambdaFunction
That gives SendUploadNotificationsLambdaFunction in your case.
There are two ways to fix it. First, reference this generated logical ID inside the Resources section of the template:
sendUploadNotificationsPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt SendUploadNotificationsLambdaFunction.Arn
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Second, construct the ARN yourself using Fn::Join (note the function segment that a Lambda ARN requires):
sendUploadNotificationsPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Join [":", ["arn:aws:lambda", !Ref "AWS::Region", !Ref "AWS::AccountId", "function", "${self:service}-${self:provider.stage}-sendUploadNotifications"]]
    Action: lambda:InvokeFunction
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: arn:aws:s3:::${self:provider.environment.IMAGES_S3_BUCKET}
Solved it. The 'references undefined resource' error was caused by the fact that when the serverless.yml file compiles, the framework normalizes the function name:
sendUploadNotifications becomes SendUploadNotificationsLambdaFunction
changed:
FunctionName: !Ref sendUploadNotificationsLambdaFunction
to:
FunctionName: !Ref SendUploadNotificationsLambdaFunction
It now deploys without an issue.
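One editorial note beyond the original answers: the bucket's NotificationConfiguration in the serverless.yml above references the lowercase name as well, so it needs the same rename:
NotificationConfiguration:
  LambdaConfigurations:
    - Event: s3:ObjectCreated:*
      Function: !GetAtt SendUploadNotificationsLambdaFunction.Arn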
I currently have the following CloudFormation .yaml file:
Resources:
  DynamoTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      ...
      ...
      ...
How do I give other resources permission to query this table?
Resources:
  Service:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        ...
        ...
        TaskPolicyArn: !Ref ThisServicePolicy
  DynamoTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        ...
        ...
        ...
  ThisServicePolicy:
    Type: "AWS::IAM::ManagedPolicy"
    Properties:
      ManagedPolicyName: SomePolicyName
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action:
              - "dynamodb:GetItem"
              - "dynamodb:BatchGetItem"
              - "dynamodb:Query"
            Resource: "*"
The following is the template.yaml for the Lambda function. I'm trying to add permissions to access the status table. However, the policy needs the table to exist and vice versa, so I get a circular dependency error with DynamoDBIamPolicy. How can I resolve this?
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  friendTeachers:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: friendTeachers/index.handler
      Runtime: nodejs6.10
      Description: ''
      MemorySize: 128
      Timeout: 15
  status:
    Type: 'AWS::DynamoDB::Table'
    Properties:
      TableName: status
      AttributeDefinitions:
        - AttributeName: screenName
          AttributeType: S
      KeySchema:
        - AttributeName: screenName
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
  # A policy is a resource that states one or more permissions. It lists actions, resources and effects.
  DynamoDBIamPolicy:
    Type: 'AWS::IAM::Policy'
    DependsOn: status
    Properties:
      PolicyName: lambda-dynamodb
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - dynamodb:DescribeTable
              - dynamodb:Query
              - dynamodb:Scan
              - dynamodb:GetItem
              - dynamodb:PutItem
              - dynamodb:UpdateItem
              - dynamodb:DeleteItem
              - dynamodb:batchWriteItem
            Resource: arn:aws:dynamodb:*:*:table/status
      Roles:
        - Ref: IamRoleLambdaExecution
You are missing a role where you specify that the Lambda service can assume it (sts:AssumeRole). The role needs an associated policy that specifies the operations that can be performed on the DynamoDB table. Below is an example that shows what you are trying to accomplish:
---
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
  friendTeachersFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Ref: LambdaCodeBucket
        S3Key:
          Ref: LambdaCodePath
      Handler: friendTeachers/index.handler
      Runtime: "nodejs6.10"
      Description: ''
      MemorySize: 128
      Timeout: 15
      Role:
        Fn::GetAtt:
          - friendTeachersExecutionRole
          - Arn
  friendTeachersExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: UseDBPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:DescribeTable
                  - dynamodb:Query
                  - dynamodb:Scan
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:UpdateItem
                  - dynamodb:DeleteItem
                  - dynamodb:batchWriteItem
                Resource: arn:aws:dynamodb:*:*:table/status
  APIDynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: status
      AttributeDefinitions:
        - AttributeName: screenName
          AttributeType: S
      KeySchema:
        - AttributeName: screenName
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
Note that Code.S3Bucket and Code.S3Key are defined as parameters. When you create the stack in the AWS console, you can specify the bucket and key there.
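The example references those parameters without declaring them; a minimal Parameters section to pair with the template (names taken from the Ref calls above) could be:
Parameters:
  LambdaCodeBucket:
    Type: String
    Description: S3 bucket that holds the Lambda deployment package
  LambdaCodePath:
    Type: String
    Description: S3 key of the Lambda deployment package (zip)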