When I attempt to build the following:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Foobar
Resources:
  FailuresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Failures
      AttributeDefinitions:
        -
          AttributeName: failureKey
          AttributeType: S
        -
          AttributeName: status,
          AttributeType: S
      KeySchema:
        -
          AttributeName: failureKey
          KeyType: HASH
      GlobalSecondaryIndexes:
        -
          IndexName: failure-status
          KeySchema:
            - AttributeName: status
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 15
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 15
I get an error, "Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes".
I've defined two attributes: failureKey and status. The first is in my table's key. The second is a key in the table's only GSI.
The first key column in a global secondary index's key schema has to be the HASH type:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Foobar
Resources:
  FailuresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "failureKey"
          AttributeType: "S"
        -
          AttributeName: "status"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "failureKey"
          KeyType: "HASH"
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
      TableName: "Failures"
      GlobalSecondaryIndexes:
        -
          IndexName: "failure-status"
          KeySchema:
            -
              AttributeName: "status"
              KeyType: "HASH"
          Projection:
            ProjectionType: "ALL"
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 5
My apologies, I'm starting with AWS and CloudFormation.
I got this CloudFormation template with Id and topic as the primary key, and I would like to add a local secondary index that consists of the id and position columns to this template. The table's columns are:
Id
topic
position
details
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
Env:
Type: String
CommitHash:
Type: String
Resources:
RecipeRecommendationDynamoDBTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: "id"
AttributeType: "S"
- AttributeName: "topic"
AttributeType: "S"
KeySchema:
- AttributeName: "id"
KeyType: "HASH"
- AttributeName: "topic"
KeyType: "RANGE"
TimeToLiveSpecification:
AttributeName: ttl
Enabled: true
TableName: topics_dumps
BillingMode: PAY_PER_REQUEST
Tags:
- Key: "Env"
Value: !Ref Env
You have to add LocalSecondaryIndexes (and declare the position attribute in AttributeDefinitions):
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  Env:
    Type: String
  CommitHash:
    Type: String
Resources:
  RecipeRecommendationDynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: "id"
          AttributeType: "S"
        - AttributeName: "topic"
          AttributeType: "S"
        - AttributeName: "position"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "id"
          KeyType: "HASH"
        - AttributeName: "topic"
          KeyType: "RANGE"
      TimeToLiveSpecification:
        AttributeName: ttl
        Enabled: true
      LocalSecondaryIndexes:
        - IndexName: position
          KeySchema:
            - AttributeName: "id"
              KeyType: "HASH"
            - AttributeName: "position"
              KeyType: "RANGE"
          Projection:
            ProjectionType: ALL
      TableName: topics_dumps
      BillingMode: PAY_PER_REQUEST
      Tags:
        - Key: "Env"
          Value: !Ref Env
I am sharing the DynamoDB CloudFormation template below. I want to add a condition so that adding another table does not impact the existing tables. The template below creates two global tables, named sample1 and sample12, configured in the Parameters section:
AWSTemplateFormatVersion: "2010-09-09"
Description: 'AWS CloudFormation Template for DynamoDB tables For sample Service'
Parameters:
  sample1:
    Type: String
    Description: Select existing dynamodb table name from Parameter Store
    Default: sample1
  sample12:
    Type: String
    Description: Select existing dynamodb table name from Parameter Store
    Default: sample12
Resources:
  sample1:
    Type: AWS::DynamoDB::GlobalTable
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        -
          AttributeName: "msgId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "msgId"
          KeyType: "HASH"
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      SSESpecification:
        SSEEnabled: true
        SSEType: "KMS"
      Replicas:
        - Region: us-east-1
      TableName: !Ref sample1
  sample12:
    Type: AWS::DynamoDB::GlobalTable
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        -
          AttributeName: "msgId"
          AttributeType: "S"
        -
          AttributeName: "flightNbr"
          AttributeType: "S"
        -
          AttributeName: "recordUpdateTS"
          AttributeType: "S"
        -
          AttributeName: "msgTypeCd"
          AttributeType: "S"
        -
          AttributeName: "recordCreationEpochTime"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "msgId"
          KeyType: "HASH"
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      SSESpecification:
        SSEEnabled: true
        SSEType: "KMS"
      Replicas:
        - Region: us-east-1
      TableName: !Ref sample12
      GlobalSecondaryIndexes:
        -
          IndexName: "FLIGHTNBR_UPDATETS_INDEX"
          KeySchema:
            -
              AttributeName: "flightNbr"
              KeyType: "HASH"
            -
              AttributeName: "recordUpdateTS"
              KeyType: "RANGE"
          Projection:
            ProjectionType: "ALL"
        -
          IndexName: "MSGTYPE_CREATETS_INDEX"
          KeySchema:
            -
              AttributeName: "msgTypeCd"
              KeyType: "HASH"
            -
              AttributeName: "recordCreationEpochTime"
              KeyType: "RANGE"
          Projection:
            ProjectionType: "ALL"
How can I add a condition, or use some other method, to check whether a table exists or not?
The only way to do this is through a custom resource in the form of a Lambda function. The function would use the AWS SDK to perform conditional checks and create AWS resources based on the outcome of those checks.
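As a rough sketch (not the asker's actual setup): if the function is defined inline in the template via ZipFile, the cfnresponse helper module is available, and the handler could look something like this, assuming the custom resource passes an illustrative TableName property:
import boto3
import cfnresponse  # available to inline (ZipFile) Lambda code in CloudFormation

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Custom resource handler: report whether the given DynamoDB table already exists.
    table_name = event["ResourceProperties"].get("TableName", "")
    data = {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            try:
                dynamodb.describe_table(TableName=table_name)
                data["Exists"] = "true"
            except dynamodb.exceptions.ResourceNotFoundException:
                data["Exists"] = "false"
        # Nothing to clean up on Delete; just signal success so the stack can proceed.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
This version only reports existence; as described above, the same function could go further and create the missing resources directly with the SDK based on that check.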
I have some code in my serverless.yml like this currently.
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
          - AttributeName: comments
            AttributeType: S
          - AttributeName: attachments
            AttributeType: S
          - AttributeName: ph
            AttributeType: N
          - AttributeName: ch
            AttributeType: N
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
My goal is to create a table with primary key userId, sort key visitId, and fields for comments, attachments, ph & ch. When I try to sls deploy I get the following error.
Serverless Error ---------------------------------------
An error occurred: visitsTable - Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes.
What am I doing wrong here?
Edit: Another attempt I tried
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
AWS DynamoDB is a NoSQL database, so there is no need to define every attribute during table creation. The AWS documentation is also clear that AttributeDefinitions must only describe the attributes used in the key schema and indexes:
An array of attributes that describe the key schema for the table and indexes.
Please edit your code as below:
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
For more, see the CreateTable documentation.
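To illustrate the NoSQL point above: attributes such as comments, attachments, ph and ch are simply written with each item and never declared in the table definition. A quick boto3 sketch, with a made-up table name standing in for ${self:custom.visitsTable}:
import boto3

# "visits-dev" is a placeholder for whatever ${self:custom.visitsTable} resolves to.
table = boto3.resource("dynamodb").Table("visits-dev")

table.put_item(
    Item={
        "userId": "user-123",         # partition key, declared in AttributeDefinitions
        "visitId": "2024-01-01#001",  # sort key, declared in AttributeDefinitions
        "comments": "First visit",    # non-key attributes need no declaration
        "attachments": ["photo1.jpg"],
        "ph": 7,
        "ch": 2,
    }
)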
I would like to make a CloudFormation template to create a large number of DynamoDB tables. I understand how to map the AttributeDefinitions to variables, but is it possible to create a single resource definition and then re-use that with mapped variables? Or must I declare each resource (table) statically?
This is an example of what I have for 4 tables; I was hoping to condense this by re-using the resource definition rather than statically listing the block 4 times.
Parameters:
  ReadCapacityUnits:
    Type: String
    Default: "2"
  WriteCapacityUnits:
    Type: String
    Default: "2"
Resources:
  DynamoTableTotalCountsHour:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsHour
  DynamoTableTotalCountsDay:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsDay
  DynamoTableTotalCountsMonth:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsMonth
  DynamoTableTotalCountsYear:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsYear
There is no loop construct in CloudFormation itself.
You could use Nested Stacks to reuse the DynamoDB definition and minimise the amount of duplicated code.
For example, call one stack from another:
Type: "AWS::CloudFormation::Stack"
Properties:
Parameters:
ReadCapacityUnits: 2
WriteCapacityUnits: 2
TemplateURL: Url-of-S3-Bucket-with-DynamoDB-Template-Stack
Note that using nested stacks with many tables does mean that you are at risk of having to delete/replace all your DynamoDB tables at the same time should you need to make some types of update to the stack.
If you don't want a dependency between the builds of DynamoDB tables, then use a template stack with an external orchestration engine to loop through the parameters and repeatedly call the AWS CloudFormation API.
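A rough sketch of such an orchestration loop with boto3, assuming a child template that accepts TableName, ReadCapacityUnits and WriteCapacityUnits parameters and is uploaded to a placeholder S3 URL:
import boto3

cloudformation = boto3.client("cloudformation")

TEMPLATE_URL = "https://s3.amazonaws.com/my-bucket/dynamodb-table.yaml"  # placeholder URL
TABLE_NAMES = ["TotalCountsHour", "TotalCountsDay", "TotalCountsMonth", "TotalCountsYear"]

for name in TABLE_NAMES:
    # One independent stack per table, so updates and deletes do not couple the tables.
    cloudformation.create_stack(
        StackName=f"dynamo-{name.lower()}",
        TemplateURL=TEMPLATE_URL,
        Parameters=[
            {"ParameterKey": "TableName", "ParameterValue": name},
            {"ParameterKey": "ReadCapacityUnits", "ParameterValue": "2"},
            {"ParameterKey": "WriteCapacityUnits", "ParameterValue": "2"},
        ],
    )
Each table then lives in its own stack, so one can be updated or deleted without touching the others.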
I am trying to create a table using serverless framework and even though I have specified Projection for the GSI, serverless is complaining that property Projection cannot be empty.
Am I getting the syntax wrong?
If I remove the GSI section it works pretty fine.
Table1:
  Type: "AWS::DynamoDB::Table"
  Properties:
    AttributeDefinitions:
      - AttributeName: "uid"
        AttributeType: "S"
      - AttributeName: "bid"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "uid"
        KeyType: "HASH"
      - AttributeName: "bid"
        KeyType: "RANGE"
    GlobalSecondaryIndexes:
      - IndexName: "bid-uid-index"
      - KeySchema:
          - AttributeName: "bid"
            KeyType: "HASH"
          - AttributeName: "uid"
            KeyType: "RANGE"
      - Projection:
          - ProjectionType: "ALL"
      - ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TableName: "Table1"
Never mind, my syntax was wrong
GlobalSecondaryIndexes:
  - IndexName: "bid-uid-index"
    KeySchema:
      - AttributeName: "bid"
        KeyType: "HASH"
      - AttributeName: "uid"
        KeyType: "RANGE"
    Projection:
      ProjectionType: "ALL"
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
Changing it to the above fixed the errors.