I am trying to create a table using serverless framework and even though I have specified Projection for the GSI, serverless is complaining that property Projection cannot be empty.
Am I getting the syntax wrong?
If I remove the GSI section it works pretty fine.
Table1:
  Type: "AWS::DynamoDB::Table"
  Properties:
    AttributeDefinitions:
      - AttributeName: "uid"
        AttributeType: "S"
      - AttributeName: "bid"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "uid"
        KeyType: "HASH"
      - AttributeName: "bid"
        KeyType: "RANGE"
    GlobalSecondaryIndexes:
      - IndexName: "bid-uid-index"
      - KeySchema:
          - AttributeName: "bid"
            KeyType: "HASH"
          - AttributeName: "uid"
            KeyType: "RANGE"
      - Projection:
          - ProjectionType: "ALL"
      - ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TableName: "Table1"
Never mind, my syntax was wrong
GlobalSecondaryIndexes:
  - IndexName: "bid-uid-index"
    KeySchema:
      - AttributeName: "bid"
        KeyType: "HASH"
      - AttributeName: "uid"
        KeyType: "RANGE"
    Projection:
      ProjectionType: "ALL"
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
Changing it to the above fixed the errors.
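The difference is easy to see in the parsed data structures. In the broken version, every extra `-` starts a new list item, so `GlobalSecondaryIndexes` becomes a list of four one-key mappings; the first item has no `Projection`, hence the error. A sketch of what each variant parses to, using plain Python literals in place of a YAML parser's output:

```python
# What the broken YAML parses to: each leading "-" begins a NEW list item,
# so GlobalSecondaryIndexes is a list of four one-key mappings.
broken_gsis = [
    {"IndexName": "bid-uid-index"},
    {"KeySchema": [{"AttributeName": "bid", "KeyType": "HASH"},
                   {"AttributeName": "uid", "KeyType": "RANGE"}]},
    {"Projection": [{"ProjectionType": "ALL"}]},
    {"ProvisionedThroughput": {"ReadCapacityUnits": 1, "WriteCapacityUnits": 1}},
]

# What the corrected YAML parses to: one list item whose keys all sit under
# the single "-", i.e. one complete index definition.
fixed_gsis = [
    {
        "IndexName": "bid-uid-index",
        "KeySchema": [{"AttributeName": "bid", "KeyType": "HASH"},
                      {"AttributeName": "uid", "KeyType": "RANGE"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
    }
]

# CloudFormation validates each list element as a full GSI; the first broken
# element has no Projection key, which produces "Projection cannot be empty".
print(len(broken_gsis), "Projection" in broken_gsis[0])  # 4 False
print(len(fixed_gsis), "Projection" in fixed_gsis[0])    # 1 True
```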
Related
I have a CloudFormation template for a DynamoDB table. I added a new index called customerId-index as below:
ComponentsTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: ${self:custom.base}-components
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: True
    StreamSpecification:
      StreamViewType: NEW_IMAGE
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: assetId
        AttributeType: S
      - AttributeName: componentId
        AttributeType: S
      - AttributeName: componentType
        AttributeType: S
      - AttributeName: customerId
        AttributeType: S
    KeySchema:
      - AttributeName: componentId
        KeyType: HASH
      - AttributeName: assetId
        KeyType: RANGE
    LocalSecondaryIndexes:
      - IndexName: componentType-index
        KeySchema:
          - AttributeName: componentId
            KeyType: HASH
          - AttributeName: componentType
            KeyType: RANGE
        Projection:
          ProjectionType: ALL
    GlobalSecondaryIndexes:
      - IndexName: assetId-index
        KeySchema:
          - AttributeName: assetId
            KeyType: HASH
          - AttributeName: componentId
            KeyType: RANGE
        Projection:
          ProjectionType: ALL
      - IndexName: compoentType-gsi
        KeySchema:
          - AttributeName: componentType
            KeyType: HASH
          - AttributeName: componentId
            KeyType: RANGE
        Projection:
          ProjectionType: ALL
      - IndexName: customerId-index
        KeySchema:
          - AttributeName: customerId
            KeyType: HASH
          - AttributeName: siteId
            KeyType: RANGE
        Projection:
          ProjectionType: ALL
And although I added AttributeName for customerId in AttributeDefinitions, I am still getting the following error:
ValidationException: Global Secondary Index range key not specified in Attribute Definitions.Type unknown.
But the specified index key and its type are already defined, as below:
AttributeDefinitions:
  - AttributeName: assetId
    AttributeType: S
  - AttributeName: componentId
    AttributeType: S
  - AttributeName: componentType
    AttributeType: S
  - AttributeName: customerId
    AttributeType: S
I wonder if someone can help with the problem. Thanks.
- IndexName: customerId-index
  KeySchema:
    - AttributeName: customerId
      KeyType: HASH
    - AttributeName: siteId
      KeyType: RANGE
Your final index uses siteId, which is not defined in AttributeDefinitions. Try the below:
AttributeDefinitions:
  - AttributeName: assetId
    AttributeType: S
  - AttributeName: componentId
    AttributeType: S
  - AttributeName: componentType
    AttributeType: S
  - AttributeName: customerId
    AttributeType: S
  - AttributeName: siteId
    AttributeType: S
In the updated question there is this Global Secondary Index:
- IndexName: customerId-index
  KeySchema:
    - AttributeName: customerId
      KeyType: HASH
    - AttributeName: siteId
      KeyType: RANGE
Here, you're referring to siteId which is missing from your AttributeDefinitions list. You probably want to add something like this:
AttributeDefinitions:
  - AttributeName: siteId
    AttributeType: S
DynamoDB needs to know the data types to create secondary indexes, which is why every attribute used in an index key must be listed in AttributeDefinitions.
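As a rough illustration (a hypothetical helper, not part of CloudFormation or any AWS SDK), the consistency check boils down to comparing the attribute names used in any key schema against the names declared in AttributeDefinitions:

```python
def find_undefined_key_attrs(table):
    """Return key attributes used by the table or its indexes that are
    missing from AttributeDefinitions (hypothetical helper mirroring
    CloudFormation's consistency check)."""
    defined = {d["AttributeName"] for d in table.get("AttributeDefinitions", [])}
    used = set()
    for schema in [table.get("KeySchema", [])] + [
        idx.get("KeySchema", [])
        for idx in table.get("GlobalSecondaryIndexes", [])
        + table.get("LocalSecondaryIndexes", [])
    ]:
        used.update(k["AttributeName"] for k in schema)
    return used - defined

# Reproduce the situation above: siteId is used as a GSI range key but
# never declared, so the check flags it.
table = {
    "AttributeDefinitions": [{"AttributeName": "customerId", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "customerId", "KeyType": "HASH"}],
    "GlobalSecondaryIndexes": [
        {"IndexName": "customerId-index",
         "KeySchema": [{"AttributeName": "customerId", "KeyType": "HASH"},
                       {"AttributeName": "siteId", "KeyType": "RANGE"}]}
    ],
    "LocalSecondaryIndexes": [],
}
print(find_undefined_key_attrs(table))  # {'siteId'}
```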
I have some DynamoDB tables populated with data that don't have a sort key configured. I've read that DynamoDB only lets you set a sort key at table creation, and that the only solution is to create a new table with the sort key configured. The problem is I need to keep the data I have stored while adding a sort key to these tables. As a quick mention, I'm deploying my backend using the Serverless framework.
I think one solution would be to use AWS's Data Pipeline service, but I want to know if there are other options available. Thanks in advance.
EDIT:
My template for this resource looks like this:
resources:
  Resources:
    userGroupsTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        TableName: ${opt:stage, self:provider.stage, 'local'}-userGroups
        AttributeDefinitions:
          - AttributeName: 'userId'
            AttributeType: 'S'
          - AttributeName: 'organizationId'
            AttributeType: 'S'
        KeySchema:
          - AttributeName: 'userId'
            KeyType: 'HASH'
          - AttributeName: 'organizationId'
            KeyType: 'RANGE'
        GlobalSecondaryIndexes:
          - IndexName: 'organizationIdIndex'
            KeySchema:
              - AttributeName: 'organizationId'
                KeyType: 'HASH'
            Projection:
              ProjectionType: 'ALL'
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        StreamSpecification:
          StreamViewType: "NEW_AND_OLD_IMAGES"
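Besides Data Pipeline, the migration can also be done with a plain scan-and-rewrite: read every item from the old table and write it into the new table, supplying a value for the new sort key. Below is a minimal sketch; the helper name and the in-memory stand-in tables are hypothetical, but the scan/put_item loop matches how boto3 Table resources paginate with LastEvaluatedKey:

```python
def copy_with_sort_key(source, dest, sort_key, derive):
    """Scan every item from `source` and write it to `dest`, adding the new
    sort key attribute. `source`/`dest` are objects with DynamoDB-style
    scan()/put_item() methods (e.g. boto3 Table resources); `derive`
    computes the sort key value from an item. Hypothetical helper."""
    kwargs = {}
    moved = 0
    while True:
        page = source.scan(**kwargs)
        for item in page["Items"]:
            item[sort_key] = derive(item)
            dest.put_item(Item=item)
            moved += 1
        if "LastEvaluatedKey" not in page:
            return moved
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

# In-memory stand-ins so the sketch can be exercised without AWS.
class FakeTable:
    def __init__(self, items=None):
        self.items = list(items or [])
    def scan(self, **kwargs):
        return {"Items": [dict(i) for i in self.items]}  # single page
    def put_item(self, Item):
        self.items.append(Item)

old = FakeTable([{"userId": "u1"}, {"userId": "u2"}])
new = FakeTable()
n = copy_with_sort_key(old, new, "organizationId", lambda i: "org-" + i["userId"])
print(n, new.items[0]["organizationId"])  # 2 org-u1
```

With real tables you would deploy the new table definition first, run the copy, then repoint your functions; for large tables a managed export/import is safer than a single scan.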
When I attempt to build the following:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Foobar
Resources:
  FailuresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Failures
      AttributeDefinitions:
        -
          AttributeName: failureKey
          AttributeType: S
        -
          AttributeName: status,
          AttributeType: S
      KeySchema:
        -
          AttributeName: failureKey
          KeyType: HASH
      GlobalSecondaryIndexes:
        -
          IndexName: failure-status
          KeySchema:
            - AttributeName: status
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 15
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 15
I get an error, "Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes".
I've defined two attributes: failureKey and status. The first is in my table's key. The second is a key in the table's only GSI.
The first key in a global secondary index's KeySchema has to be of HASH type.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Foobar
Resources:
  FailuresTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "failureKey"
          AttributeType: "S"
        -
          AttributeName: "status"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "failureKey"
          KeyType: "HASH"
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
      TableName: "Failures"
      GlobalSecondaryIndexes:
        -
          IndexName: "failure-status"
          KeySchema:
            -
              AttributeName: "status"
              KeyType: "HASH"
          Projection:
            ProjectionType: "ALL"
          ProvisionedThroughput:
            ReadCapacityUnits: 5
            WriteCapacityUnits: 5
I have some code in my serverless.yml like this currently.
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
          - AttributeName: comments
            AttributeType: S
          - AttributeName: attachments
            AttributeType: S
          - AttributeName: ph
            AttributeType: N
          - AttributeName: ch
            AttributeType: N
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
My goal is to create a table with primary key userId, sort key visitId and have fields for comments, attachments, ph & ch. When I try to sls deploy I get the following error.
Serverless Error ---------------------------------------
An error occurred: visitsTable - Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes.
What am I doing wrong here?
Edit: Another attempt I tried
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
AWS DynamoDB is a NoSQL database, so there is no need to define every field at table creation. The AWS documentation is clear that AttributeDefinitions should contain only the attributes used in the key schema and indexes:
An array of attributes that describe the key schema for the table and indexes.
Please edit your code as below:
resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-uploads
    visitsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.visitsTable}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: visitId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: visitId
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 5
          WriteCapacityUnits: 5
For more details, see CreateTable.
I would like to make a CloudFormation template to create a large number of DynamoDB tables. I understand how to map the AttributeDefinitions to variables, but is it possible to create a single resource definition and then re-use it with mapped variables? Or must I declare each resource (table) statically?
This is an example of what I have for 4 tables. I was hoping to condense it by re-using the resource definition rather than statically listing the block 4 times.
Parameters:
  ReadCapacityUnits:
    Type: String
    Default: "2"
  WriteCapacityUnits:
    Type: String
    Default: "2"
Resources:
  DynamoTableTotalCountsHour:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsHour
  DynamoTableTotalCountsDay:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsDay
  DynamoTableTotalCountsMonth:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsMonth
  DynamoTableTotalCountsYear:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        -
          AttributeName: "UserId"
          AttributeType: "S"
        -
          AttributeName: "RangeId"
          AttributeType: "S"
      KeySchema:
        -
          AttributeName: "UserId"
          KeyType: "HASH"
        -
          AttributeName: "RangeId"
          KeyType: "RANGE"
      ProvisionedThroughput:
        ReadCapacityUnits: !Ref ReadCapacityUnits
        WriteCapacityUnits: !Ref WriteCapacityUnits
      TableName: TotalCountsYear
There is no loop function with CloudFormation itself.
You could use Nested Stacks to reuse the DynamoDB definition and minimise the amount of duplicated code.
For example call one stack from another:
Type: "AWS::CloudFormation::Stack"
Properties:
  Parameters:
    ReadCapacityUnits: 2
    WriteCapacityUnits: 2
  TemplateURL: Url-of-S3-Bucket-with-DynamoDB-Template-Stack
Note that using nested stacks for many tables does mean you risk having to delete or replace all of your DynamoDB tables at the same time, should certain types of update to the stack be required.
If you don't want a dependency between the builds of DynamoDB tables, then use a template stack with an external orchestration engine to loop through the parameters and repeatedly call the AWS CloudFormation API.
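Such an orchestration loop can be sketched in a few lines. This assumes the table template is extended with a hypothetical TableName parameter, and the actual boto3 create_stack calls are left commented out so the snippet stays self-contained:

```python
def build_stack_calls(table_names, template_url, read=2, write=2):
    """Build one CloudFormation CreateStack request per table (hypothetical
    orchestration helper; each dict is the kwargs for boto3's
    cloudformation create_stack call)."""
    return [
        {
            "StackName": f"dynamo-{name.lower()}",
            "TemplateURL": template_url,
            "Parameters": [
                {"ParameterKey": "TableName", "ParameterValue": name},
                {"ParameterKey": "ReadCapacityUnits", "ParameterValue": str(read)},
                {"ParameterKey": "WriteCapacityUnits", "ParameterValue": str(write)},
            ],
        }
        for name in table_names
    ]

calls = build_stack_calls(
    ["TotalCountsHour", "TotalCountsDay", "TotalCountsMonth", "TotalCountsYear"],
    "https://s3.amazonaws.com/my-bucket/dynamo-table.yaml",  # placeholder URL
)

# To actually create the stacks (requires AWS credentials):
# import boto3
# cfn = boto3.client("cloudformation")
# for kwargs in calls:
#     cfn.create_stack(**kwargs)

print(len(calls), calls[0]["StackName"])  # 4 dynamo-totalcountshour
```

Because each table becomes its own stack, they can be updated or deleted independently, avoiding the coupled-lifecycle risk of nested stacks.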