I'm creating a table in CloudFormation:
"MyStuffTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyStuff"
"AttributeDefinitions": [{
"AttributeName": "identifier",
"AttributeType": "S"
}],
"KeySchema": [{
"AttributeName": "identifier",
"KeyType": "HASH",
}],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "1"
}
}
}
Then, later on in the CloudFormation template, I want to insert records into that table, something like this:
identifier: Stuff1
data: {My list of stuff here}
And insert that into the Values array in the code below. I had seen an example somewhere that used Custom::Install, but I can't find it now, nor any documentation on it.
So this is what I have:
"MyStuff": {
"Type": "Custom::Install",
"DependsOn": [
"MyStuffTable"
],
"Properties": {
"ServiceToken": {
"Fn::GetAtt": ["MyStuffTable","Arn"]
},
"Action": "fields",
"Values": [{<insert records into this array}]
}
}
When I run that, I get an "Invalid service token" error.
So I'm not doing something right when trying to reference the table to insert the records into. I can't find any documentation on Custom::Install, so I don't know for sure that it's the right way to insert records through CloudFormation, and I can't find documentation on inserting records through CloudFormation either. I know it can be done. I'm probably missing something very simple. Any ideas?
Custom::Install is a Custom Resource in CloudFormation.
This is a special type of resource which you have to develop yourself. It is most often implemented with a Lambda function (it can also be backed by SNS).
So, to answer your question: to add data to your table, you would have to write your own custom resource backed by a Lambda. The Lambda would put the records into the table.
Action and fields are custom parameters which CloudFormation passes to the Lambda in your Custom::Install example. The parameters can be anything you want, since you are designing the custom resource tailored to your requirements.
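As a minimal sketch of what such a Lambda could look like (the property names TableName and Values, and the logical ID InsertRecordsFunction mentioned below, are assumptions for illustration, not any documented contract):

// Sketch of a custom-resource handler that puts the passed records into
// the table. Note: the cfn-response module is only provided automatically
// for Lambdas whose code is inlined in the template via ZipFile.
import { DynamoDB } from "aws-sdk";
import * as response from "cfn-response";

const ddb = new DynamoDB.DocumentClient();

export const handler = (event: any, context: any): void => {
  const props = event.ResourceProperties;
  const work: Promise<unknown> =
    event.RequestType === "Delete"
      ? Promise.resolve() // cleaning up the inserted rows on delete is optional
      : Promise.all(
          (props.Values as any[]).map((item) =>
            ddb.put({ TableName: props.TableName, Item: item }).promise()
          )
        );
  work
    .then(() => response.send(event, context, response.SUCCESS, {}))
    .catch((err) =>
      response.send(event, context, response.FAILED, { Error: String(err) })
    );
};

The ServiceToken must then be the ARN of that Lambda rather than of the table, which is exactly why you are seeing Invalid service token: something like "ServiceToken": { "Fn::GetAtt": ["InsertRecordsFunction", "Arn"] }, where InsertRecordsFunction is the (hypothetical) logical ID of the function above.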
Related
We have a use case where we have enabled an AWS DMS replication task which streams changes to our Aurora Postgres cluster into a Kinesis Data Stream. The replication task is working as expected, but the JSON it sends to the Kinesis Data Stream contains fields like metadata that we don't care about and would ideally like to omit. Is there a way to do this without triggering a Lambda on KDS to remove the unwanted fields from the JSON?
I was looking at using the table mappings config of the DMS task when KDS is the target; documentation here: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html. The docs don't mention anything of this sort. Maybe I am missing something.
The current table mapping for my use case is as follows:
{
"rules": [
{
"rule-type": "selection",
"rule-id": "1",
"rule-name": "1",
"rule-action": "include",
"object-locator": {
"schema-name": "public",
"table-name": "%"
}
},
{
"rule-type": "object-mapping",
"rule-id": "2",
"rule-name": "DefaultMapToKinesis",
"rule-action": "map-record-to-record",
"object-locator": {
"schema-name": "public",
"table-name": "testing"
}
}
]
}
The table testing only has two columns, namely id and value, of types varchar and decimal respectively.
The result I am getting in KDS is as follows:
{
"data": {
"id": "5",
"value": 1111.22
},
"metadata": {
"timestamp": "2022-08-23T09:32:34.222745Z",
"record-type": "data",
"operation": "insert",
"partition-key-type": "schema-table",
"schema-name": "public",
"table-name": "testing",
"transaction-id": 145524
}
}
As seen above, we are only interested in the data key of the JSON.
Is there any way in the DMS config or KDS to filter on the data portion of the JSON sent by DMS, without involving any new infra like Lambda?
I'm starting to think there is a fundamental flaw in AWS CloudFormation template validation/resource lookup related to "Type": "AWS::ElasticLoadBalancingV2::ListenerRule" resources.
Specifically, every time I try to create a new ListenerRule for known working Listeners, CloudFormation errors out with:
Unable to retrieve ListenerArn attribute for AWS::ElasticLoadBalancingV2::Listener, with error message One or more listeners not found (Service: ElasticLoadBalancingV2, Status Code: 400, Request ID: c6914f71-074c-4367-983a-bcf1d8fd1350, Extended Request ID: null)
Upon testing, I can make it work by hardcoding the ListenerArn attribute in my template, but that's not a solution, since the template is used for multiple stacks with different resources.
Below are the relevant parts of the template:
"WLBListenerHttp": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"DefaultActions": [{
"Type": "forward",
"TargetGroupArn": { "Ref": "WLBTargetGroupHttp" }
}],
"LoadBalancerArn": { "Ref": "WebLoadBalancer" },
"Port": 80,
"Protocol": "HTTP"
}
},
"ListenerRuleHttp": {
"DependsOn": "WLBListenerHttp",
"Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
"Properties": {
"Actions": [{
"Type": "fixed-response",
"FixedResponseConfig": { "StatusCode": "200" }
}],
"Conditions": [{
"Field": "host-header",
"HostHeaderConfig": { "Values": ["domain*"] }
}, {
"Field": "path-pattern",
"PathPatternConfig": { "Values": ["/path/to/respond/to"] }
}],
"ListenerArn": { "Fn::GetAtt": ["WLBListenerHttp", "ListenerArn"] },
"Priority": 1
}
},
Per the documentation on listeners, Fn::GetAtt and Ref should both return the ListenerArn:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-listener.html
"Return values
Ref
When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the Amazon Resource Name (ARN) of the listener.
For more information about using the Ref function, see Ref.
Fn::GetAtt
The Fn::GetAtt intrinsic function returns a value for a specified attribute of this type. The following are the available attributes and sample return values.
For more information about using the Fn::GetAtt intrinsic function, see Fn::GetAtt.
ListenerArn
The Amazon Resource Name (ARN) of the listener."
I've tried both "ListenerArn": { "Fn::GetAtt": ["WLBListenerHttp", "ListenerArn"] } and "ListenerArn": { "Ref": "WLBListenerHttp" }, with no success, resulting in the error noted. If I hardcode the ARN ("ListenerArn": "arn::", with the full ARN), it works fine.
As it turns out, my syntax was perfectly fine. However, what I didn't realize is that while the WLBListenerHttp resource existed, it was not actually the same ARN as the one created by CloudFormation. Apparently, someone accidentally deleted it at some point without telling us and then manually recreated it. This left the account in a broken state where CloudFormation had an ARN recorded for the listener from when it was created, but it was truly no longer valid since the new resource had a new ARN.
The solution to this was to delete the offending resource manually, then change the name of it slightly in our CloudFormation templates so it would create a new one.
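If you suspect the same kind of out-of-band change, one quick way to confirm it is resource drift detection. A rough sketch (the stack name and logical ID are placeholders for your own values):

import { CloudFormation } from "aws-sdk";

const cfn = new CloudFormation();

// Drift detection flags resources changed or deleted outside CloudFormation;
// a listener that was manually deleted and recreated should report DELETED.
async function checkListenerDrift(): Promise<string> {
  const { StackResourceDrift } = await cfn
    .detectStackResourceDrift({
      StackName: "my-stack",
      LogicalResourceId: "WLBListenerHttp",
    })
    .promise();
  return StackResourceDrift.StackResourceDriftStatus; // e.g. "DELETED" or "IN_SYNC"
}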
Suppose I am working on an app for a grocery shop. We all know there are hundreds of grocery items in a grocery shop. Now, my requirement is to create AWS CloudWatch alarms using an AWS CloudFormation template (CFT).
Earlier, suppose we had only rice and wheat in our grocery shop, and thus we created separate alarm resources in the CFT. An example:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "AWS cloudwatch Grocery",
"Parameters": {
"Email": {
"Type": "String",
"Description": "Email address to notify when alarm is triggered",
"Default": "email#email.com"
}
},
"Resources": {
"AlarmNotificationTopic": {
"Type": "AWS::SNS::Topic",
"Properties": {
"Subscription": [
{
"Endpoint": {
"Ref": "Email"
},
"Protocol": "email"
}
]
}
},
"RiceQuantityLowAlarm": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName": "RiceQuantityLowAlarm",
"AlarmDescription": "Alarm which gets triggered when Rice quantity is low",
"AlarmActions": [
{
"Ref": "AlarmNotificationTopicTest"
}
],
"MetricName": "Quantity",
"Namespace": "Grocery",
"Dimensions": [
{
"Name": "Item",
"Value": "Rice"
}
],
"ComparisonOperator": "LessThanOrEqualToThreshold",
"EvaluationPeriods": "10",
"Period": "360",
"Statistic": "Sum",
"Threshold": "1",
"TreatMissingData": "notBreaching"
}
},
"WheatQuantityLowAlarm": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName": "WheatQuantityLowAlarm",
"AlarmDescription": "Alarm which gets triggered when Wheat quantity is low",
"AlarmActions": [
{
"Ref": "AlarmNotificationTopicTest"
}
],
"MetricName": "Quantity",
"Namespace": "Grocery",
"Dimensions": [
{
"Name": "Item",
"Value": "Wheat"
}
],
"ComparisonOperator": "LessThanOrEqualToThreshold",
"EvaluationPeriods": "10",
"Period": "360",
"Statistic": "Sum",
"Threshold": "1",
"TreatMissingData": "notBreaching"
}
}
}
}
Now let's suppose I want to add more items to my grocery shop, and I do not want to be limited only to rice and wheat. In this case, let's assume I want to add 5 new items. If I follow the above approach, I will be creating 5 new separate CloudWatch alarm resources in the CFT, and will have to do so whenever any new item comes in. But I DO NOT WANT TO DO THAT AS I AM VERY LAZY.
Is there any way we can standardize the CFT resources? You can see that the only difference between the two CloudWatch alarm resources above is the item name (rice/wheat); everything else is common between them.
This is not really possible with pure CloudFormation. The templates are declarative, and you cannot use code constructs such as loops to generate resources. The following list outlines some ways (in no particular order) you could make the templates more dynamic or reuse code:
Use a nested stack to reduce the amount of code to manage
Write a custom resource that accepts a list of items and maintains the alarms using code that calls the SDK
Generate your templates with a scripting/programming language, using a third-party library such as SparkleFormation or troposphere
Use a different IaC tool, such as Terraform (allows some programming-like constructs and is more flexible than CF) or the AWS CDK (write real code in a variety of languages that compiles down to CloudFormation templates; see the sketch below)
Each of these has its own pros and cons, and all involve significantly more work than ctrl/cmd+c, ctrl/cmd+v, so bear this in mind when making a decision!
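To make the CDK option concrete, here is a rough sketch (the construct IDs, item list, and email address are illustrative) that stamps out one alarm per item in a plain loop, mirroring the alarm settings from the question:

import * as cdk from "aws-cdk-lib";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";
import * as actions from "aws-cdk-lib/aws-cloudwatch-actions";
import * as sns from "aws-cdk-lib/aws-sns";
import * as subs from "aws-cdk-lib/aws-sns-subscriptions";

export class GroceryAlarmsStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const topic = new sns.Topic(this, "AlarmNotificationTopic");
    topic.addSubscription(new subs.EmailSubscription("email@example.com"));

    // Adding a grocery item is now one entry in this list, not a new resource.
    for (const item of ["Rice", "Wheat", "Sugar", "Salt", "Oil"]) {
      const alarm = new cloudwatch.Alarm(this, `${item}QuantityLowAlarm`, {
        alarmDescription: `Alarm which gets triggered when ${item} quantity is low`,
        metric: new cloudwatch.Metric({
          namespace: "Grocery",
          metricName: "Quantity",
          dimensionsMap: { Item: item },
          statistic: "Sum",
          period: cdk.Duration.seconds(360),
        }),
        comparisonOperator:
          cloudwatch.ComparisonOperator.LESS_THAN_OR_EQUAL_TO_THRESHOLD,
        evaluationPeriods: 10,
        threshold: 1,
        treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
      });
      alarm.addAlarmAction(new actions.SnsAction(topic));
    }
  }
}

Running cdk synth on this produces a CloudFormation template with one AWS::CloudWatch::Alarm resource per item.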
Adding a secondary index to a DynamoDB table (previously deployed via CDK) like
table.addLocalSecondaryIndex({
indexName: "indexName",
sortKey: {
name: "keyName",
type: dynamodb.AttributeType.STRING,
},
projectionType: dynamodb.ProjectionType.INCLUDE,
nonKeyAttributes: ["attr1", "attr2"],
});
requires a recreation of the table, as seen in the CloudFormation change set created by the CDK:
"resourceChange": {
"logicalResourceId": "---ID---",
"action": "Modify",
"physicalResourceId": "---ID---",
"resourceType": "AWS::DynamoDB::Table",
"replacement": "True",
"moduleInfo": null,
"details": [
{
"target": {
"name": "LocalSecondaryIndexes",
"requiresRecreation": "Always",
"attribute": "Properties"
},
"causingEntity": null,
"evaluation": "Static",
"changeSource": "DirectModification"
}
],
"changeSetId": null,
"scope": [
"Properties"
]
},
"type": "Resource"
}
Why is that?
Can that be prevented somehow, or is there a workaround besides adding the index manually via the AWS console?
Cheers,
Helge
Why is that?
Changes to LocalSecondaryIndexes require replacement of the DynamoDB table, since an LSI can only be created at table creation time. In contrast, modifications to GlobalSecondaryIndexes cause no interruption.
Can that be prevented somehow, or is there a workaround besides adding the index manually via the AWS console?
Sadly, there is no way to prevent this, as explained above. You can use a GSI if you don't want to keep replacing your tables.
You are mixing up global and local secondary indexes. A local index cannot be created after table creation; only a global one can. That's the reason your code fails.
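For comparison, if your access pattern allows a global index instead, the non-replacing equivalent of the snippet in the question would be roughly (same table and imports as in the question's code):

// Sketch: a GSI can be added to an already-deployed table without replacement.
table.addGlobalSecondaryIndex({
  indexName: "indexName",
  partitionKey: {
    name: "keyName",
    type: dynamodb.AttributeType.STRING,
  },
  projectionType: dynamodb.ProjectionType.INCLUDE,
  nonKeyAttributes: ["attr1", "attr2"],
});

Note that a GSI has its own partition key (and optional sort key) rather than reusing the table's partition key, so it is not a drop-in replacement for an LSI in every access pattern.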
I have an existing DynamoDB table that is defined as part of a CloudFormation stack. According to the CFN AWS::DynamoDB::Table documentation, the GlobalSecondaryIndexes attribute does not require replacement. It even goes into detail with the following caveats:
You can delete or add one global secondary index without interruption.
As well as the following...
If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table.
However, in practice when I attempt to perform an update I get the following error message:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename mytablename and update the stack again.
Since I'm adding a GSI that uses a new attribute, I'm forced to modify AttributeDefinitions, which the documentation says does require replacement. However, even when I try to add a GSI that uses only existing attributes already defined in AttributeDefinitions, I still get the same error message.
Here is the snippet from my original CFN definition for my table:
{
"myTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "mytablename",
"AttributeDefinitions": [
{
"AttributeName": "entryId",
"AttributeType": "S"
},
{
"AttributeName": "entryName",
"AttributeType": "S"
},
{
"AttributeName": "appId",
"AttributeType": "S"
}
],
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "entryId"
},
{
"KeyType": "RANGE",
"AttributeName": "entryName"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": {
"Ref": "readThroughput"
},
"WriteCapacityUnits": {
"Ref": "writeThroughput"
}
},
"GlobalSecondaryIndexes": [
{
"IndexName": "appId-index",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "appId"
}
],
"Projection": {
"ProjectionType": "KEYS_ONLY"
},
"ProvisionedThroughput": {
"ReadCapacityUnits": {
"Ref": "readThroughput"
},
"WriteCapacityUnits": {
"Ref": "writeThroughput"
}
}
}
]
}
}
}
Here is what I want to update it to:
{
"myTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "mytablename",
"AttributeDefinitions": [
{
"AttributeName": "entryId",
"AttributeType": "S"
},
{
"AttributeName": "entryName",
"AttributeType": "S"
},
{
"AttributeName": "appId",
"AttributeType": "S"
},
{
"AttributeName": "userId",
"AttributeType": "S"
}
],
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "entryId"
},
{
"KeyType": "RANGE",
"AttributeName": "entryName"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": {
"Ref": "readThroughput"
},
"WriteCapacityUnits": {
"Ref": "writeThroughput"
}
},
"GlobalSecondaryIndexes": [
{
"IndexName": "appId-index",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "appId"
}
],
"Projection": {
"ProjectionType": "KEYS_ONLY"
},
"ProvisionedThroughput": {
"ReadCapacityUnits": {
"Ref": "readThroughput"
},
"WriteCapacityUnits": {
"Ref": "writeThroughput"
}
}
},
{
"IndexName": "userId-index",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "userId"
}
],
"Projection": {
"ProjectionType": "KEYS_ONLY"
},
"ProvisionedThroughput": {
"ReadCapacityUnits": {
"Ref": "readThroughput"
},
"WriteCapacityUnits": {
"Ref": "writeThroughput"
}
}
}
]
}
}
}
However, as I mentioned before, even if I do not define userId in the AttributeDefinitions and use an existing attribute in a new GSI definition, it does not work and fails with the same error message.
I had the same error today and got an answer from Amazon tech support. The problem is that you supplied a TableName field. CloudFormation wants to be in charge of naming your tables for you. Apparently, when you supply your own name for them, this is the error you get on an update that replaces the table (not sure why it needs to replace, but that's what the doc says).
For me, this makes CloudFormation utterly useless for maintaining my DynamoDB tables. I'd have to build in configuration so that my code could dynamically tell what the random table name was that CloudFormation generated for me.
AWS support's response to me FWIW:
Workaround A
Export the data from the table to S3
Update the stack with the new table name (tablename2) and the GSI added
Note this loses all current entries, so definitely back up to S3 first!
Update the stack again, back to using tablename1 for the DynamoDB table
Import the data from S3. This can be eased by using Data Pipeline; see
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
The advantage is that app code can keep using fixed names. But updating the stack twice and exporting/importing data will take some work to automate in custom scripts.
Workaround B
Back up the data
Let CloudFormation name the table
Use the AWS SDK to retrieve the table name by describing the stack resource by its logical ID and fetching the table name from the output
While I think this avoids extra stack updates (I still think exporting/importing the data will be required), the disadvantage is a network call in code to fetch the table name. See
* http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#describeStackResource-property
Again, this is a known issue that support is pushing the service team on, as we know it is quite a common use case and point of pain. Please try any workaround in a test environment before using it in production.
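For Workaround B, the lookup step could look roughly like this (the stack name and logical ID are placeholders), using the describeStackResource call linked above:

import { CloudFormation } from "aws-sdk";

const cfn = new CloudFormation();

// Resolve the CloudFormation-generated table name at runtime; for a
// DynamoDB table, the physical resource ID is the table name.
async function resolveTableName(): Promise<string | undefined> {
  const { StackResourceDetail } = await cfn
    .describeStackResource({
      StackName: "my-stack",
      LogicalResourceId: "myTable",
    })
    .promise();
  return StackResourceDetail?.PhysicalResourceId;
}

Caching the result (e.g. in a module-level variable) keeps it to one network call per process.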
How did the issue happen here?
In my case, I deleted the GSI manually in the DynamoDB console, then added the GSI back via CloudFormation, and update-stack got this error.
Solution: remove the GSI from the CloudFormation template and execute update-stack, then add the GSI back and execute update-stack again; this works fine.
My guess is that CloudFormation has its own cache and could not tell you had made the change manually in the console.
My scenario was that I wanted to update a GSI by changing its range key.
- First, you have to delete the GSI that you're updating; also remember to remove any AttributeDefinition that might not be needed anymore due to the removal of the GSI (i.e. attributes used only by that index). Upload the template via CloudFormation to apply the changes.
- Then add the needed attributes and the 'updated' GSI to the template.
Back up all the data from DynamoDB, and after that, if you are using Serverless, run either of the commands below:
remove (using the locally installed Serverless):
node ./node_modules/serverless/bin/serverless remove
remove (using the globally installed Serverless):
serverless remove
and deploy it again by running:
node ./node_modules/serverless/bin/serverless deploy -v
or
serverless deploy