Unable to add GSI to DynamoDB table using CloudFormation - amazon-web-services

I have an existing DynamoDB table that is defined as part of a CloudFormation stack. According to the CFN AWS::DynamoDB::Table documentation, the GlobalSecondaryIndexes attribute does not require replacement. It even goes into detail with the following caveats.
You can delete or add one global secondary index without interruption.
As well as the following...
If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table.
However, in practice when I attempt to perform an update I get the following error message:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename mytablename and update the stack again.
Since I'm adding a GSI that uses a new attribute, I'm forced to modify AttributeDefinitions, which the documentation says does require replacement. However, even when I try to add a GSI that uses only attributes already defined in AttributeDefinitions, I still get the same error message.
Here is the snippet from my original CFN definition for my table:
{
  "myTable": {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
      "TableName": "mytablename",
      "AttributeDefinitions": [
        { "AttributeName": "entryId", "AttributeType": "S" },
        { "AttributeName": "entryName", "AttributeType": "S" },
        { "AttributeName": "appId", "AttributeType": "S" }
      ],
      "KeySchema": [
        { "KeyType": "HASH", "AttributeName": "entryId" },
        { "KeyType": "RANGE", "AttributeName": "entryName" }
      ],
      "ProvisionedThroughput": {
        "ReadCapacityUnits": { "Ref": "readThroughput" },
        "WriteCapacityUnits": { "Ref": "writeThroughput" }
      },
      "GlobalSecondaryIndexes": [
        {
          "IndexName": "appId-index",
          "KeySchema": [
            { "KeyType": "HASH", "AttributeName": "appId" }
          ],
          "Projection": { "ProjectionType": "KEYS_ONLY" },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": { "Ref": "readThroughput" },
            "WriteCapacityUnits": { "Ref": "writeThroughput" }
          }
        }
      ]
    }
  }
}
Here is what I want to update it to:
{
  "myTable": {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
      "TableName": "mytablename",
      "AttributeDefinitions": [
        { "AttributeName": "entryId", "AttributeType": "S" },
        { "AttributeName": "entryName", "AttributeType": "S" },
        { "AttributeName": "appId", "AttributeType": "S" },
        { "AttributeName": "userId", "AttributeType": "S" }
      ],
      "KeySchema": [
        { "KeyType": "HASH", "AttributeName": "entryId" },
        { "KeyType": "RANGE", "AttributeName": "entryName" }
      ],
      "ProvisionedThroughput": {
        "ReadCapacityUnits": { "Ref": "readThroughput" },
        "WriteCapacityUnits": { "Ref": "writeThroughput" }
      },
      "GlobalSecondaryIndexes": [
        {
          "IndexName": "appId-index",
          "KeySchema": [
            { "KeyType": "HASH", "AttributeName": "appId" }
          ],
          "Projection": { "ProjectionType": "KEYS_ONLY" },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": { "Ref": "readThroughput" },
            "WriteCapacityUnits": { "Ref": "writeThroughput" }
          }
        },
        {
          "IndexName": "userId-index",
          "KeySchema": [
            { "KeyType": "HASH", "AttributeName": "userId" }
          ],
          "Projection": { "ProjectionType": "KEYS_ONLY" },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": { "Ref": "readThroughput" },
            "WriteCapacityUnits": { "Ref": "writeThroughput" }
          }
        }
      ]
    }
  }
}
However, as I mentioned before, even if I do not define userId in AttributeDefinitions and instead use an existing attribute in the new GSI definition, it does not work and fails with the same error message.

I had the same error today and got an answer from Amazon tech support. The problem is that you supplied a TableName field. CloudFormation wants to be in charge of naming your tables for you. Apparently, when you supply your own name for them, this is the error you get on an update that replaces the table (not sure why it needs to replace, but that's what the doc says).
For me, this makes CloudFormation utterly useless for maintaining my DynamoDB tables. I'd have to build in configuration so that my code could dynamically tell what the random table name was that CloudFormation generated for me.

AWS support's response to me FWIW:
Workaround A
1. Export the data from the table to S3.
2. Update the stack with a new table name (tablename2), with the GSI added. Note this loses all current entries, so definitely back up to S3 first!
3. Update the stack again, back to using tablename1 for the DynamoDB table.
4. Import the data from S3. This can be eased by using Data Pipeline, see http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html

The advantage is that app code can keep using fixed names. But updating the stack twice and exporting/importing the data will take some work to automate in custom scripts; a sketch of the export step follows below.
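For example, the export in step 1 could be hand-rolled; a rough sketch with the AWS SDK for JavaScript, where the table and bucket names are placeholders (for large tables, the Data Pipeline approach linked above is the better fit):

// Rough sketch: scan the whole table and dump it to S3 as JSON.
// "mytablename" and "my-backup-bucket" are placeholders.
import * as AWS from "aws-sdk";

const dynamo = new AWS.DynamoDB.DocumentClient();
const s3 = new AWS.S3();

async function backupTable(): Promise<void> {
  const items: AWS.DynamoDB.DocumentClient.ItemList = [];
  let startKey: AWS.DynamoDB.DocumentClient.Key | undefined;
  do {
    // Scan paginates; keep going until LastEvaluatedKey is absent
    const page = await dynamo
      .scan({ TableName: "mytablename", ExclusiveStartKey: startKey })
      .promise();
    items.push(...(page.Items ?? []));
    startKey = page.LastEvaluatedKey;
  } while (startKey);

  await s3
    .putObject({
      Bucket: "my-backup-bucket",
      Key: "mytablename-backup.json",
      Body: JSON.stringify(items),
    })
    .promise();
}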
Workaround B
1. Back up the data.
2. Let CloudFormation name the table.
3. Use the AWS SDK to retrieve the generated table name by describing the stack resource by its logical ID and reading the physical resource ID from the response.

While I think this avoids the extra stack updates (exporting/importing data will likely still be required), the disadvantage is a network call in code to fetch the table name. See
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#describeStackResource-property
Again, this is a known issue that support is pushing the service team on, as we know it is a quite common use case and point of pain. Please try a workaround in a test environment before trying it in production.
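The lookup in step 3 is a single SDK call; a minimal sketch with the AWS SDK for JavaScript, assuming a stack named "my-stack" whose table has the logical ID "myTable" (both placeholders):

// Resolve the CloudFormation-generated table name at runtime.
import * as AWS from "aws-sdk";

const cloudFormation = new AWS.CloudFormation();

async function resolveTableName(): Promise<string> {
  // The physical resource ID of an AWS::DynamoDB::Table is the table name
  const result = await cloudFormation
    .describeStackResource({
      StackName: "my-stack",
      LogicalResourceId: "myTable",
    })
    .promise();
  const tableName = result.StackResourceDetail?.PhysicalResourceId;
  if (!tableName) {
    throw new Error("Table has no physical resource ID yet");
  }
  return tableName;
}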

How did the issue happen here?
In my case, I deleted the GSI manually in the DynamoDB console, then added the GSI via CloudFormation; update-stack got this error.
Solution: remove the GSI from the CloudFormation template and run update-stack, then add the GSI back and run update-stack again. Works fine.
My guess is that CloudFormation tracks the stack against the template it last deployed, so it could not tell what you had changed manually in the console.
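For what it's worth, drift detection can surface such out-of-band console changes before you update; a rough sketch with the AWS SDK for JavaScript (the stack name is a placeholder, and drift detection is a general CloudFormation feature rather than anything specific to this GSI case):

// Kick off drift detection and poll until it reports a result.
import * as AWS from "aws-sdk";

const cloudFormation = new AWS.CloudFormation();

async function checkDrift(stackName: string): Promise<string | undefined> {
  const { StackDriftDetectionId } = await cloudFormation
    .detectStackDrift({ StackName: stackName })
    .promise();

  while (true) {
    const status = await cloudFormation
      .describeStackDriftDetectionStatus({ StackDriftDetectionId })
      .promise();
    if (status.DetectionStatus !== "DETECTION_IN_PROGRESS") {
      // e.g. "DRIFTED" when a GSI was deleted by hand in the console
      return status.StackDriftStatus;
    }
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}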

My scenario was that I wanted to update a GSI by changing its range key.
- First you have to delete the GSI that you're updating; also remember to remove any AttributeDefinition that is no longer needed due to the removal of the GSI (i.e. attributes used only by that index). Upload the template via CloudFormation to apply the changes.
- Then add the needed attributes and the 'updated' GSI back to the template, and update the stack again.

Back up all the data from DynamoDB and after that, if you are using the Serverless Framework, run either of the commands below:
remove with the locally installed Serverless:
node ./node_modules/serverless/bin/serverless remove
remove with the globally installed Serverless:
serverless remove
and deploy it again by running:
node ./node_modules/serverless/bin/serverless deploy -v
or
serverless deploy

Related

How can we use the same Tags in two AWS::DynamoDB::Table resources within a CloudFormation template

I'm trying to create Amazon DynamoDB tables using a CloudFormation template. My question is: can I use the same Tags in multiple tables using Ref?
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"Status": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"AttributeDefinitions": [
{
"AttributeName": "SId",
"AttributeType": "S"
}
],
"KeySchema": [
{
"AttributeName": "SId",
"KeyType": "HASH"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": "1",
"WriteCapacityUnits": "1"
},
"TableName": "Statuscf",
"Tags": [
{
"Key": "Application",
"Value": "BFMS"
},
{
"Key": "Name",
"Value": "EventSourcingDataStore"
}
]
}
},
"BMSHSData": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "Billing.FmsDatacf",
"Tags": [{"Ref":"/Status/Tags"}]
}
}
}
Please suggest how I can use the same tags in another table. I am trying it like this: "Tags": [{"Ref":"/Status/Tags"}].
The only way to do this using plain CloudFormation is copy-and-paste, so you have to replicate your tags for all tables "manually". The only automated solutions would be to develop a CloudFormation macro or a custom resource; yet another choice could be nested stacks. A rough macro sketch follows below.
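As an illustration of the macro route: a macro is a Lambda-backed template transform, so a handler could stamp one shared tag list onto every DynamoDB table in the fragment. A rough sketch (the handler shape follows the documented macro contract; the tag values are copied from the question, everything else is illustrative):

// Macro handler: copy a shared tag list onto every DynamoDB table
// found in the template fragment being transformed.
export async function handler(event: { requestId: string; fragment: any }) {
  const sharedTags = [
    { Key: "Application", Value: "BFMS" },
    { Key: "Name", Value: "EventSourcingDataStore" },
  ];
  const fragment = event.fragment;
  for (const resource of Object.values<any>(fragment.Resources ?? {})) {
    if (resource.Type === "AWS::DynamoDB::Table") {
      resource.Properties = resource.Properties ?? {};
      // Append the shared tags to whatever tags the table already has
      resource.Properties.Tags = [
        ...(resource.Properties.Tags ?? []),
        ...sharedTags,
      ];
    }
  }
  // Macros must return the (possibly modified) fragment with a status
  return { requestId: event.requestId, status: "success", fragment };
}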
To resolve this problem, you just need to pass the tag values in the Parameters section of the CloudFormation template, and then reference those parameters in each table's Tags property.

Why does adding a secondary index to a dynamodb table via cdk require a recreation of the table?

Adding a local secondary index to a DynamoDB table (previously deployed via CDK) like
table.addLocalSecondaryIndex({
  indexName: "indexName",
  sortKey: {
    name: "keyName",
    type: dynamodb.AttributeType.STRING,
  },
  projectionType: dynamodb.ProjectionType.INCLUDE,
  nonKeyAttributes: ["attr1", "attr2"],
});
requires a recreation of the table, as seen in the CloudFormation change set created by CDK:
"resourceChange": {
"logicalResourceId": "---ID---",
"action": "Modify",
"physicalResourceId": "---ID---",
"resourceType": "AWS::DynamoDB::Table",
"replacement": "True",
"moduleInfo": null,
"details": [
{
"target": {
"name": "LocalSecondaryIndexes",
"requiresRecreation": "Always",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Static",
"changeSource": "DirectModification"
}
],
"changeSetId": null,
"scope": [
"Properties"
]
},
"type": "Resource"
}
Why is that?
Can that be prevented somehow or is there a Workaround besides adding the index manually via the aws-console?
Cheers,
Helge
Why is that?
Changes to LocalSecondaryIndexes require replacement of the DynamoDB table, since an LSI can only be created at table creation time. In contrast, modifications to GlobalSecondaryIndexes cause no interruption.
Can that be prevented somehow or is there a Workaround besides adding the index manually via the aws-console?
Sadly, there is no way to prevent this, as explained above. You can use a GSI if you don't want to keep replacing your tables.
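For comparison, the global secondary index version of the snippet from the question would look like this (same illustrative names; a GSI takes a partitionKey, with sortKey optional, and can be added to an existing table without replacement):

// Adding a GSI instead: no table replacement required.
table.addGlobalSecondaryIndex({
  indexName: "indexName",
  partitionKey: {
    name: "keyName",
    type: dynamodb.AttributeType.STRING,
  },
  projectionType: dynamodb.ProjectionType.INCLUDE,
  nonKeyAttributes: ["attr1", "attr2"],
});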
You are mixing up global and local secondary indexes. A local secondary index cannot be created after table creation; only a global one can. That's the reason your code fails.

How do you insert values into dynamodb through cloudformation?

I'm creating a table in cloudformation:
"MyStuffTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyStuff"
"AttributeDefinitions": [{
"AttributeName": "identifier",
"AttributeType": "S"
]},
"KeySchema": [{
"AttributeName": "identifier",
"KeyType": "HASH",
}],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "1"
}
}
}
Then later on in the cloudformation, I want to insert records into that table, something like this:
identifier: Stuff1
data: {My list of stuff here}
And insert that into values in the code below. I had seen somewhere an example that used Custom::Install, but I can't find it now, or any documentation on it.
So this is what I have:
"MyStuff": {
  "Type": "Custom::Install",
  "DependsOn": ["MyStuffTable"],
  "Properties": {
    "ServiceToken": { "Fn::GetAtt": ["MyStuffTable", "Arn"] },
    "Action": "fields",
    "Values": [<insert records into this array>]
  }
}
When I run that, I'm getting this error: Invalid service token.
So I'm not doing something right when trying to reference the table to insert the records into. I can't seem to find any documentation on Custom::Install, so I don't know for sure that it's the right way to go about inserting records through CloudFormation. I also can't find documentation on inserting records through CloudFormation in general. I know it can be done. I'm probably missing something very simple. Any ideas?
Custom::Install is a Custom Resource in CloudFormation.
This is a special type of resource which you have to develop yourself, most commonly by means of a Lambda function (it can also be backed by SNS). That is also why you get Invalid service token: the ServiceToken must be the ARN of the Lambda function (or SNS topic) implementing the custom resource, not the ARN of your table.
So to answer your question: to add data to your table, you would have to write your own custom resource in Lambda. The Lambda would put the records into the table.
Action: "fields" and Values are custom properties which CloudFormation passes to the Lambda in the Custom::Install example. The properties can be anything you want, as you are designing the custom resource tailored to your requirements.
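A minimal sketch of such a Lambda, reusing the MyStuff table and the Values property from the question (the response-upload part follows the standard custom resource callback contract; error handling is kept thin):

// Custom resource handler: seed the table, then signal CloudFormation.
import * as AWS from "aws-sdk";
import * as https from "https";

const dynamo = new AWS.DynamoDB.DocumentClient();

export async function handler(event: any): Promise<void> {
  let status = "SUCCESS";
  try {
    if (event.RequestType === "Create" || event.RequestType === "Update") {
      // Put each record passed in through the custom resource properties
      for (const item of event.ResourceProperties.Values ?? []) {
        await dynamo.put({ TableName: "MyStuff", Item: item }).promise();
      }
    }
    // On Delete, report success and leave the data in place
  } catch (err) {
    status = "FAILED";
  }
  // CloudFormation waits for this callback on the presigned ResponseURL
  const body = JSON.stringify({
    Status: status,
    Reason: "See CloudWatch logs",
    PhysicalResourceId: "my-stuff-seed-data",
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  });
  await new Promise<void>((resolve, reject) => {
    const req = https.request(
      event.ResponseURL,
      { method: "PUT", headers: { "content-length": Buffer.byteLength(body) } },
      () => resolve()
    );
    req.on("error", reject);
    req.end(body);
  });
}

The ServiceToken in the template would then be the ARN of this function (for example via Fn::GetAtt on the Lambda resource), not the table's ARN.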

Creating dynamodb table using aws cli "--cli-input-json"

I have been trying to create a DynamoDB table using the following JSON file (testbootstraptable.json):
{
  "AttributeDefinitions": [
    { "AttributeName": "test1", "AttributeType": "S" },
    { "AttributeName": "test2", "AttributeType": "S" }
  ],
  "TableName": "BOOTSTRAP_TEST_TBL",
  "KeySchema": [
    { "AttributeName": "test1", "KeyType": "HASH" },
    { "AttributeName": "test2", "KeyType": "RANGE" }
  ],
  "ProvisionedThroughput": {
    "NumberOfDecreasesToday": 0,
    "ReadCapacityUnits": 35,
    "WriteCapacityUnits": 35
  }
}
I have tried multiple times with different variations based on Google searches, but keep getting the following error:
Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0)
JSON received: testbootstraptable.json
AWS Command:
$ aws dynamodb create-table --cli-input-json testbootstraptable.json --region us-west-2
Add "file://" to testbootstraptable.json
aws dynamodb create-table --cli-input-json file://testbootstraptable.json --region us-west-2
Also, delete the following line as it is not correct:
"NumberOfDecreasesToday": 0,
This is related to the question: I initially started with JSON and wasted a lot of time, then switched to the GUI way of creating tables. I installed and used NoSQL Workbench for DynamoDB.
Download link: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html

How to check if specific resource already exists in CloudFormation script

I am using CloudFormation to create a stack which includes an autoscaled EC2 instance and an S3 bucket. For the S3 bucket I have DeletionPolicy set to Retain, which works fine until I want to rerun my CloudFormation script. Since the script created the S3 bucket on previous runs, it fails on subsequent runs saying my S3 bucket already exists, and of course none of the other resources get created either. My question is: how do I check if my S3 bucket already exists inside the CloudFormation script, and if it does, skip creating that resource? I've looked at Conditions in AWS, but they seem entirely parameter-based; I have yet to find a function which checks for existing resources.
There is no obvious way to do this, unless you create the template dynamically with an explicit check. Stacks created from the same template are independent entities, and if you create a stack that contains a bucket, delete the stack while retaining the bucket, and then create a new stack (even one with the same name), there is no connection between this new stack and the bucket created as part of the previous stack.
If you want to use the same S3 bucket for multiple stacks (even if only one of them exists at a time), that bucket does not really belong in the stack - it would make more sense to create the bucket in a separate stack, using a separate template (putting the bucket URL in the "Outputs" section), and then referencing it from your original stack using a parameter.
Update November 2019:
There is a possible alternative now. On Nov 13th AWS launched CloudFormation Resource Import. With that feature you can now create a stack from existing resources. Currently not many resource types are supported by this feature, but S3 buckets are.
In your case you'd have to do it in two steps:
1. Create a template that only contains the preexisting S3 bucket, using "Create stack" > "With existing resources (import resources)" (this is the --change-set-type IMPORT flag in the CLI) (see docs).
2. Update the template to include all resources that don't already exist.
As they note in their documentation, this feature is very versatile, so it opens up a lot of possibilities. See the docs for more info.
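If you script step 1 instead of using the console wizard, it maps to a change set of type IMPORT; a rough sketch with the AWS SDK for JavaScript (stack name, bucket name, and logical ID are placeholders; resources being imported must carry an explicit DeletionPolicy):

import * as AWS from "aws-sdk";

const cloudFormation = new AWS.CloudFormation();

// Template containing only the preexisting bucket to import.
const importTemplate = JSON.stringify({
  Resources: {
    ExistingBucket: {
      Type: "AWS::S3::Bucket",
      DeletionPolicy: "Retain", // required on resources being imported
      Properties: { BucketName: "my-existing-bucket" },
    },
  },
});

async function importBucket(): Promise<void> {
  await cloudFormation
    .createChangeSet({
      StackName: "my-stack",
      ChangeSetName: "import-existing-bucket",
      ChangeSetType: "IMPORT", // same as --change-set-type IMPORT in the CLI
      TemplateBody: importTemplate,
      ResourcesToImport: [
        {
          ResourceType: "AWS::S3::Bucket",
          LogicalResourceId: "ExistingBucket",
          ResourceIdentifier: { BucketName: "my-existing-bucket" },
        },
      ],
    })
    .promise();
  // Review, then run executeChangeSet({ StackName, ChangeSetName }) to apply
}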
Using CloudFormation you can use Conditions.
I created an input parameter "ShouldCreateBucketInputParameter", and then using the CLI you just need to set "true" or "false".
CloudFormation JSON file:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "",
  "Parameters": {
    "ShouldCreateBucketInputParameter": {
      "Type": "String",
      "AllowedValues": ["true", "false"],
      "Description": "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
    }
  },
  "Conditions": {
    "CreateS3Bucket": {
      "Fn::Equals": [{ "Ref": "ShouldCreateBucketInputParameter" }, "true"]
    }
  },
  "Resources": {
    "SerialNumberBucketResource": {
      "Type": "AWS::S3::Bucket",
      "Condition": "CreateS3Bucket",
      "Properties": { "AccessControl": "Private" }
    }
  },
  "Outputs": {}
}
And then (I am using the CLI to deploy the stack):
aws cloudformation deploy --template ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="true"
Just add an input parameter to the CloudFormation template to indicate that an existing bucket should be used... unless you don't already know it at the time you use the template? Then you can either add the resource or not based on the parameter value.
If you do updates (potentially of stacks within stacks, aka nested stacks), the unchanged parts don't get updated.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html?icmpid=docs_cfn_console_designer
You can then set policies as mentioned to prevent deletion. [remember 'cancel update' permissions for rollbacks]
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
There is also Cross-Stack Output to be aware of by adding Export Names to the Stack Outputs.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html
Walkthrough...
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
Then you need to use Fn::ImportValue ...
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
It implies one could use a network stack name parameter.
Unfortunately you get an error like this when you try them in Conditions.
Template validation error: Template error: Cannot use Fn::ImportValue in Conditions.
Or in the Parameters?
Template validation error: Template format error: Every Default member must be a string.
Also this can happen while trying...
Template format error: Output ExportOut is malformed. The Name field of Export must not depend on any resources, imported values, or Fn::GetAZs.
So you can't stop it from creating the existing resource again from the same file; you can only do so by putting it into another stack and using the export/import reference.
But if you separate the two, the reference via the Fn::ImportValue function creates a dependency that will, for instance, stop and roll back a deletion of the exporting stack.
The example given here is as follows.
First, make a group template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {
    "AWS::CloudFormation::Designer": {
      "6927bf3d-85ec-449d-8ee1-f3e1804d78f7": {
        "size": { "width": 60, "height": 60 },
        "position": { "x": -390, "y": 130 },
        "z": 0,
        "embeds": []
      },
      "6fe3a2b8-16a1-4ce0-b412-4d4f87e9c54c": {
        "source": { "id": "ac295134-9e38-4425-8d20-2c50ef0d51b3" },
        "target": { "id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7" },
        "z": 1
      }
    }
  },
  "Resources": {
    "TestGroup": {
      "Type": "AWS::IAM::Group",
      "Properties": {},
      "Metadata": {
        "AWS::CloudFormation::Designer": { "id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7" }
      },
      "Condition": ""
    }
  },
  "Parameters": {},
  "Outputs": {
    "GroupNameOut": {
      "Description": "The Group Name",
      "Value": { "Ref": "TestGroup" },
      "Export": { "Name": "Exported-GroupName" }
    }
  }
}
Then make a User Template that needs the group.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {
    "AWS::CloudFormation::Designer": {
      "ac295134-9e38-4425-8d20-2c50ef0d51b3": {
        "size": { "width": 60, "height": 60 },
        "position": { "x": -450, "y": 130 },
        "z": 0,
        "embeds": [],
        "isrelatedto": ["6927bf3d-85ec-449d-8ee1-f3e1804d78f7"]
      },
      "6fe3a2b8-16a1-4ce0-b412-4d4f87e9c54c": {
        "source": { "id": "ac295134-9e38-4425-8d20-2c50ef0d51b3" },
        "target": { "id": "6927bf3d-85ec-449d-8ee1-f3e1804d78f7" },
        "z": 1
      }
    }
  },
  "Resources": {
    "TestUser": {
      "Type": "AWS::IAM::User",
      "Properties": {
        "UserName": { "Ref": "UserNameParam" },
        "Groups": [
          { "Fn::ImportValue": "Exported-GroupName" }
        ]
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": { "id": "ac295134-9e38-4425-8d20-2c50ef0d51b3" }
      }
    }
  },
  "Parameters": {
    "UserNameParam": {
      "Default": "testerUser",
      "Description": "Username For Test",
      "Type": "String",
      "MinLength": "1",
      "MaxLength": "16",
      "AllowedPattern": "[a-zA-Z][a-zA-Z0-9]*",
      "ConstraintDescription": "must begin with a letter and contain only alphanumeric characters."
    }
  },
  "Outputs": {
    "UserNameOut": {
      "Description": "The User Name",
      "Value": { "Ref": "TestUser" }
    }
  }
}
You will get
No export named Exported-GroupName found. Rollback requested by user.
if you run the user template without the group's export in place.
You could then use the Nested stack approach.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {
    "AWS::CloudFormation::Designer": {
      "66470873-b2bd-4a5a-af19-5d54b11f48ef": {
        "size": { "width": 60, "height": 60 },
        "position": { "x": -815, "y": 169 },
        "z": 0,
        "embeds": []
      },
      "ed1de011-f1bb-4788-b63e-dcf5494d10d1": {
        "size": { "width": 60, "height": 60 },
        "position": { "x": -710, "y": 170 },
        "z": 0,
        "dependson": ["66470873-b2bd-4a5a-af19-5d54b11f48ef"]
      },
      "c978f2d9-3fb2-4420-b255-74941f10a28a": {
        "source": { "id": "ed1de011-f1bb-4788-b63e-dcf5494d10d1" },
        "target": { "id": "66470873-b2bd-4a5a-af19-5d54b11f48ef" },
        "z": 1
      }
    }
  },
  "Resources": {
    "GroupStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3-us-west-2.amazonaws.com/cf-templates-x-TestGroup.json"
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": { "id": "66470873-b2bd-4a5a-af19-5d54b11f48ef" }
      }
    },
    "UserStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3-us-west-2.amazonaws.com/cf-templates-x-TestUserFindsGroup.json"
      },
      "Metadata": {
        "AWS::CloudFormation::Designer": { "id": "ed1de011-f1bb-4788-b63e-dcf5494d10d1" }
      },
      "DependsOn": ["GroupStack"]
    }
  }
}
Unfortunately you can still delete the user stack even though it was made by the multi-stack template in this example, but with deletion policies and other settings it just might help.
Then you are only updating the various stacks it creates, and you won't rerun the multi-stack if you are, for instance, reusing a bucket.
Otherwise you'll be looking at APIs and scripts in various flavors.
If you're trying to incorporate some existing resources into CF, it is unfortunately not possible. If you just want a set of resources to be part of your template or not depending on the value of some parameters, you can use Conditions. But they don't change the nature of CF itself, and only work to determine which resources are desired, not what actions will be taken, and cannot see whether a resource exists or not beforehand.
Something not explicitly stated: if your first deployment fails, resources will be deleted unless you have a retention policy. In that case it is safe to delete the resource in question manually; the next deployment will recreate it without generating the "resource already exists" error.