Creating a DynamoDB table using the AWS CLI "--cli-input-json"

I have been trying to create a DynamoDB table using the following JSON file (testbootstraptable.json):
{
  "AttributeDefinitions": [
    {
      "AttributeName": "test1",
      "AttributeType": "S"
    },
    {
      "AttributeName": "test2",
      "AttributeType": "S"
    }
  ],
  "TableName": "BOOTSTRAP_TEST_TBL",
  "KeySchema": [
    {
      "AttributeName": "test1",
      "KeyType": "HASH"
    },
    {
      "AttributeName": "test2",
      "KeyType": "RANGE"
    }
  ],
  "ProvisionedThroughput": {
    "NumberOfDecreasesToday": 0,
    "ReadCapacityUnits": 35,
    "WriteCapacityUnits": 35
  }
}
I have tried multiple times with different variations based on Google searches, but I keep getting the following error:
Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0)
JSON received: testbootstraptable.json
AWS Command:
$ aws dynamodb create-table --cli-input-json testbootstraptable.json --region us-west-2

Add "file://" to testbootstraptable.json
aws dynamodb create-table --cli-input-json file://testbootstraptable.json --region us-west-2
Also, delete the following line as it is not correct:
"NumberOfDecreasesToday": 0,

I initially started with JSON and wasted a lot of time, then switched to a GUI way of creating tables: I installed and used NoSQL Workbench for DynamoDB.
Download link: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html

Related

List all AWS Elasticache snapshots taken after a specified date

I am trying to write an AWS CLI query that will return the names of the ElastiCache snapshots created after a specific date.
I tried with a JMESPath query like:
aws elasticache describe-snapshots \
    --region ap-southeast-1 \
    --snapshot-source "manual" \
    --query 'Snapshots[*].NodeSnapshots[?SnapshotCreateTime >`2022-10-01`] | [?not_null(node)]'
But, this is giving me an empty result.
Snippet of aws elasticache describe-snapshots:
{
  "Snapshots": [
    {
      "SnapshotName": "snapshot-name",
      "ReplicationGroupId": "rep-id",
      "ReplicationGroupDescription": "redis cluster",
      "CacheClusterId": null,
      "SnapshotStatus": "available",
      "SnapshotSource": "automated",
      "CacheNodeType": "cache.r6g.large",
      "Engine": "redis",
      "EngineVersion": "6.0.5",
      "NumCacheNodes": null,
      "PreferredAvailabilityZone": null,
      "CacheClusterCreateTime": null,
      "PreferredMaintenanceWindow": "sun:20:00-sun:20:00",
      "TopicArn": null,
      "Port": "6379",
      "CacheParameterGroupName": "default.redis6.x.cluster.on",
      "CacheSubnetGroupName": "redis-group",
      "VpcId": "vpc-01bcajghfghj",
      "AutoMinorVersionUpgrade": "true",
      "SnapshotRetentionLimit": "18",
      "SnapshotWindow": "20:00-21:00",
      "NumNodeGroups": "1",
      "AutomaticFailover": "enabled",
      "NodeSnapshots": [
        {
          "CacheClusterId": "redis-cluster-01",
          "NodeGroupId": "001",
          "CacheNodeId": "001",
          "NodeGroupConfiguration": null,
          "CacheSize": "20 GB",
          "CacheNodeCreateTime": "1632909889675",
          "SnapshotCreateTime": "1667246439000"
        }
      ],
      "KmsKeyId": "kms-id.."
    }
  ]
}
If we take as an example the JSON given in the documentation:
{
  "Snapshots": [
    {
      "SnapshotName": "automatic.my-cluster2-002-2019-12-05-06-38",
      "NodeSnapshots": [
        {
          "CacheNodeId": "0001",
          "SnapshotCreateTime": "2019-12-05T06:38:23Z"
        }
      ]
    },
    {
      "SnapshotName": "myreplica-backup",
      "NodeSnapshots": [
        {
          "CacheNodeId": "0001",
          "SnapshotCreateTime": "2019-11-26T00:25:01Z"
        }
      ]
    },
    {
      "SnapshotName": "my-cluster",
      "NodeSnapshots": [
        {
          "CacheNodeId": "0001",
          "SnapshotCreateTime": "2019-11-26T03:08:33Z"
        }
      ]
    }
  ]
}
Then you can see that you need to filter on SnapshotCreateTime, which is nested under the NodeSnapshots array.
So what you need here is a double filter:
One to filter by date:
[?SnapshotCreateTime > `2022-10-01`]
Then one to exclude all snapshots whose NodeSnapshots array has been emptied by the previous filter:
[?NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]]
And so, if you only care about the name of the snapshot, you can use the query:
Snapshots[?NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]].SnapshotName
So, your command ends up being:
aws elasticache describe-snapshots \
    --region ap-southeast-1 \
    --snapshot-source "manual" \
    --query 'Snapshots[?
        NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]
    ].SnapshotName'
Now, given the output you are showing in your question, your issue also comes from the fact that your SnapshotCreateTime is an epoch timestamp in milliseconds, so you also have to convert 2022-10-01 into that format.
If you are on Linux, you can do this within your command, with date:
aws elasticache describe-snapshots \
    --region ap-southeast-1 \
    --snapshot-source "manual" \
    --query "Snapshots[?
        NodeSnapshots[?
            SnapshotCreateTime > \`$(date --date='2022-10-01' +'%s')000\`
        ]
    ].SnapshotName"

Can I use AWS CLI to create an RDS data-source in Quicksight?

I want to create a data source connecting to one of our RDS instances. I can easily create an RDS data source through the UI, but when I use the AWS CLI, I only see these values listed as possible types for the create-data-source command:
ADOBE_ANALYTICS AMAZON_ELASTICSEARCH ATHENA AURORA AURORA_POSTGRESQL AWS_IOT_ANALYTICS GITHUB JIRA MARIADB MYSQL POSTGRESQL PRESTO REDSHIFT S3 SALESFORCE SERVICENOW SNOWFLAKE SPARK SQLSERVER TERADATA TWITTER
But I do see RdsParameters in the options.
What am I missing? How can I create a data source connecting to an RDS instance?
Use the database engine of your RDS instance, which is listed in those values.
aws quicksight create-data-source --aws-account-id 1234567890 --data-source-id "abcdefghijkl" --name "NameOfDS" --type POSTGRESQL --data-source-parameters ...
The above is for a PostgreSQL DB created in RDS.
The alternative is to create a JSON file with all the required parameters and then use a command like the one below:
aws quicksight create-data-source --cli-input-json file://./create-data-source-cli-input.json
The output will be:
{
  "Status": 202,
  "Arn": "arn:aws:quicksight:ap-southeast-2:xxxxxxxxxxxx:datasource/sample-postgres-db",
  "DataSourceId": "sample-postgres-db",
  "CreationStatus": "CREATION_IN_PROGRESS",
  "RequestId": "d4392bc6-77fa-4346-8e9c-09a716761c4b"
}
The format of the JSON file will be:
{
  "AwsAccountId": "xxxxxxxxxxxx",
  "DataSourceId": "sample-postgres-db",
  "Name": "sample-postgres-db",
  "Type": "POSTGRESQL",
  "DataSourceParameters": {
    "PostgreSqlParameters": {
      "Host": "hostname.from-rds.ap-southeast-2.rds.amazonaws.com",
      "Port": 5432,
      "Database": "name-of-db"
    }
  },
  "Credentials": {
    "CredentialPair": {
      "Username": "xxxxxxxxxxxx_postgres_admin",
      "Password": "xxxxxxxxxxxx"
    }
  },
  "Permissions": [
    {
      "Principal": "arn:aws:quicksight:ap-southeast-2:xxxxxxxxxxxx:user/default/alice",
      "Actions": [
        "quicksight:UpdateDataSourcePermissions",
        "quicksight:DescribeDataSource",
        "quicksight:DescribeDataSourcePermissions",
        "quicksight:PassDataSource",
        "quicksight:UpdateDataSource",
        "quicksight:DeleteDataSource"
      ]
    }
  ],
  "VpcConnectionProperties": {
    "VpcConnectionArn": "arn:aws:quicksight:ap-southeast-2:xxxxxxxxxxxx:vpcConnection/QuickSight-DB-VPC"
  },
  "Tags": [
    {
      "Key": "Name",
      "Value": "PGSQL-TestDB"
    }
  ]
}
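Since the create call returns CREATION_IN_PROGRESS, you may want to poll until the data source is actually ready; a minimal sketch reusing the account ID and data source ID from the example above:
aws quicksight describe-data-source \
    --aws-account-id xxxxxxxxxxxx \
    --data-source-id sample-postgres-db \
    --query 'DataSource.Status'
The status should move from CREATION_IN_PROGRESS to CREATION_SUCCESSFUL; on CREATION_FAILED, DataSource.ErrorInfo in the full response carries the details.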

DynamoDB describe-table encryption status

In the AWS Console, the DynamoDB table shows Encryption as "DEFAULT". Looking at the documentation, the table may be encrypted using an AWS owned CMK (customer master key).
But is there a way to know for sure that the table is encrypted? And if yes, what type of encryption is in place?
The "describe-table" command doesn't output any information about encryption.
C:\Users\test>aws dynamodb describe-table --profile snpp --table-name mydynamodbtable
{
  "Table": {
    "TableArn": "arn:aws:dynamodb:us-east-1:902919223373:table/mydynamodbtable",
    "AttributeDefinitions": [
      {
        "AttributeName": "hashKey",
        "AttributeType": "S"
      },
      {
        "AttributeName": "rangeKey",
        "AttributeType": "S"
      }
    ],
    "ProvisionedThroughput": {
      "NumberOfDecreasesToday": 0,
      "WriteCapacityUnits": 100,
      "ReadCapacityUnits": 400
    },
    "TableSizeBytes": 45160931,
    "TableName": "mydynamodbtable",
    "TableStatus": "ACTIVE",
    "TableId": "0e75b671-75bf-41ac-9cd1-f75ee3f787ca",
    "KeySchema": [
      {
        "KeyType": "HASH",
        "AttributeName": "hashKey"
      },
      {
        "KeyType": "RANGE",
        "AttributeName": "rangeKey"
      }
    ],
    "ItemCount": 206363,
    "CreationDateTime": 1529442343.583
  }
}
https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-dynamodb-encrypts-all-customer-data-at-rest/
Per this November 15, 2018 announcement, all DynamoDB data at rest is encrypted, except in the AWS GovCloud (US-West), AWS GovCloud (US-East), China (Beijing), and China (Ningxia) regions.
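If you want to confirm this for a specific table, describe-table does expose encryption details through the SSEDescription field, but in my experience only when the table uses a KMS key (AWS managed or customer managed); with the default AWS owned key the field is simply absent. A quick check against the table above:
aws dynamodb describe-table \
    --profile snpp \
    --table-name mydynamodbtable \
    --query 'Table.SSEDescription'
A null result indicates the default AWS owned key; otherwise you will see the SSEType (e.g. KMS) and the KMSMasterKeyArn.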

AWS Cloudformation failing to acknowledge AutoScalingGroup

While using CloudFormation to create an EC2 instance along with an Auto Scaling group, I get the following error:
The following resource(s) failed to create: [WebsInstanceServerGroup].
image of CloudFormation Group output
The failure occurs while creating the Auto Scaling group, but when I check the Auto Scaling console it says the creation was 'successful'. (The 'in-progress' deletion happens after CloudFormation's 15 minute timeout.)
image of AutoScaling output
What could be the reason CloudFormation is not acknowledging that the Auto Scaling group was created successfully?
The error mentions WebsInstanceServerGroup, so I checked my template for that resource, but saw nothing suspicious.
"WebsInstanceServerGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"AvailabilityZones": {
"Fn::GetAZs": "AWS::Region"
},
"VPCZoneIdentifier": {
"Ref": "WebsELBSubnetId"
},
"LoadBalancerNames": [
{
"Ref": "WebsELB"
}
],
"LaunchConfigurationName": {
"Ref": "WebsEC2Instance"
},
"Cooldown": 300,
"HealthCheckGracePeriod": 600,
"HealthCheckType": "EC2",
"Tags": [
{
"Key": "Name",
"Value": {
"Ref": "WebsInstanceName"
},
"PropagateAtLaunch": "true"
},
{
"Key": "Service",
"Value": {
"Ref": "ServiceTag"
},
"PropagateAtLaunch": "true"
}
],
"MinSize": {
"Ref": "ASGMin"
},
"DesiredCapacity": {
"Ref": "ASGDesired"
},
"MaxSize": {
"Ref": "ASGMax"
}
},
"CreationPolicy": {
"ResourceSignal": {
"Count": {
"Ref": "ASGMin"
},
"Timeout": "PT15M"
}
}
}
Please let me know if more information is required, thanks in advance.
Looks like the EC2 instances in your Auto Scaling group are not sending the required success signals.
CloudFormation will wait for ASGMin success signals before considering your WebsInstanceServerGroup successfully created. So if ASGMin is set to 3, each of your 3 EC2 instances should send a signal.
To send the signal you can either use the cfn-signal helper or the AWS CLI:
aws cloudformation signal-resource \
    --stack-name {your stack name here} \
    --status SUCCESS \
    --logical-resource-id WebsInstanceServerGroup \
    --unique-id {the instance ID for the EC2 instance that is sending the signal}
Use this command at the end of your User Data script, when you consider your EC2 instance to be fully provisioned and ready to go.
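If you go the cfn-signal route instead, the tail of an Amazon Linux User Data script might look like this (a sketch; the stack name and region are placeholders you would wire in, e.g. via Fn::Sub):
#!/bin/bash
# ... provisioning steps ...
# Report the exit status of provisioning back to CloudFormation:
/opt/aws/bin/cfn-signal -e $? \
    --stack my-stack-name \
    --resource WebsInstanceServerGroup \
    --region us-east-1
cfn-signal sends SUCCESS when the exit code is 0 and FAILURE otherwise, which satisfies (or fails) the ResourceSignal count in the CreationPolicy.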

Unable to add GSI to DynamoDB table using CloudFormation

I have an existing DynamoDB table that is defined as part of a CloudFormation stack. According to the CFN AWS::DynamoDB::Table documentation, the GlobalSecondaryIndexes attribute does not require replacement. The documentation even goes into detail with the following caveats.
You can delete or add one global secondary index without interruption.
As well as the following...
If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table.
However, in practice when I attempt to perform an update I get the following error message:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename mytablename and update the stack again.
Since I'm adding a GSI that uses a new attribute, I'm forced to modify AttributeDefinitions, which the documentation says does require replacement. However, even when I try to add a GSI that uses only attributes already defined in AttributeDefinitions, I still get the same error message.
Here is the snippet from my original CFN definition for my table:
{
  "myTable": {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
      "TableName": "mytablename",
      "AttributeDefinitions": [
        {
          "AttributeName": "entryId",
          "AttributeType": "S"
        },
        {
          "AttributeName": "entryName",
          "AttributeType": "S"
        },
        {
          "AttributeName": "appId",
          "AttributeType": "S"
        }
      ],
      "KeySchema": [
        {
          "KeyType": "HASH",
          "AttributeName": "entryId"
        },
        {
          "KeyType": "RANGE",
          "AttributeName": "entryName"
        }
      ],
      "ProvisionedThroughput": {
        "ReadCapacityUnits": {
          "Ref": "readThroughput"
        },
        "WriteCapacityUnits": {
          "Ref": "writeThroughput"
        }
      },
      "GlobalSecondaryIndexes": [
        {
          "IndexName": "appId-index",
          "KeySchema": [
            {
              "KeyType": "HASH",
              "AttributeName": "appId"
            }
          ],
          "Projection": {
            "ProjectionType": "KEYS_ONLY"
          },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": {
              "Ref": "readThroughput"
            },
            "WriteCapacityUnits": {
              "Ref": "writeThroughput"
            }
          }
        }
      ]
    }
  }
}
Here is what I want to update it to:
{
  "myTable": {
    "Type": "AWS::DynamoDB::Table",
    "Properties": {
      "TableName": "mytablename",
      "AttributeDefinitions": [
        {
          "AttributeName": "entryId",
          "AttributeType": "S"
        },
        {
          "AttributeName": "entryName",
          "AttributeType": "S"
        },
        {
          "AttributeName": "appId",
          "AttributeType": "S"
        },
        {
          "AttributeName": "userId",
          "AttributeType": "S"
        }
      ],
      "KeySchema": [
        {
          "KeyType": "HASH",
          "AttributeName": "entryId"
        },
        {
          "KeyType": "RANGE",
          "AttributeName": "entryName"
        }
      ],
      "ProvisionedThroughput": {
        "ReadCapacityUnits": {
          "Ref": "readThroughput"
        },
        "WriteCapacityUnits": {
          "Ref": "writeThroughput"
        }
      },
      "GlobalSecondaryIndexes": [
        {
          "IndexName": "appId-index",
          "KeySchema": [
            {
              "KeyType": "HASH",
              "AttributeName": "appId"
            }
          ],
          "Projection": {
            "ProjectionType": "KEYS_ONLY"
          },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": {
              "Ref": "readThroughput"
            },
            "WriteCapacityUnits": {
              "Ref": "writeThroughput"
            }
          }
        },
        {
          "IndexName": "userId-index",
          "KeySchema": [
            {
              "KeyType": "HASH",
              "AttributeName": "userId"
            }
          ],
          "Projection": {
            "ProjectionType": "KEYS_ONLY"
          },
          "ProvisionedThroughput": {
            "ReadCapacityUnits": {
              "Ref": "readThroughput"
            },
            "WriteCapacityUnits": {
              "Ref": "writeThroughput"
            }
          }
        }
      ]
    }
  }
}
However, as I mentioned before, even if I do not define userId in AttributeDefinitions and use an existing attribute in the new GSI definition, it does not work and fails with the same error message.
I had the same error today and got an answer from Amazon tech support. The problem is that you supplied a TableName field. CloudFormation wants to be in charge of naming your tables for you. Apparently, when you supply your own name, this is the error you get on an update that requires replacing the table (not sure why it needs to replace, but that's what the doc says).
For me, this makes CloudFormation utterly useless for maintaining my DynamoDB tables. I'd have to build in configuration so that my code could dynamically tell what the random table name was that CloudFormation generated for me.
AWS support's response to me FWIW:
Workaround A
1. Export the data from the table to S3.
2. Update the stack with a new table name (tablename2) and the GSI added. Note this loses all current entries, so definitely back up to S3 first!
3. Update the stack again, back to using tablename1 for the DynamoDB table.
4. Import the data from S3. This can be eased by using Data Pipeline, see http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
The advantage is that app code can keep using fixed names, but updating the stack twice and exporting/importing data will take some work to automate in custom scripts.
Workaround B
1. Back up the data.
2. Let CloudFormation name the table.
3. Use the AWS SDK to retrieve the generated table name: describe the stack resource by its logical ID and read the physical table name from the output (see the CLI sketch below).
While I think this avoids extra stack updates (exporting/importing data will likely still be required), the disadvantage is a network call in code to fetch the table name. See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#describeStackResource-property
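For reference, the same lookup is a one-liner with the AWS CLI (a sketch; my-stack-name is a placeholder, myTable is the logical ID from this question):
aws cloudformation describe-stack-resource \
    --stack-name my-stack-name \
    --logical-resource-id myTable \
    --query 'StackResourceDetail.PhysicalResourceId' \
    --output text
For an AWS::DynamoDB::Table resource, the PhysicalResourceId is the generated table name.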
Again, this is a known issue that support is pushing the service team on, as we know it is quite a common use case and point of pain. Please try a workaround in a test environment before trying it in production.
How did the issue happen here?
In my case, I deleted the GSI manually in the DynamoDB console, then added the GSI back via CloudFormation; update-stack failed with this error.
Solution: remove the GSI from the CloudFormation template and run update-stack, then add the GSI back and run update-stack again. That works fine.
My guess is that CloudFormation keeps its own state and cannot tell what you've changed manually in the console.
My scenario was that I wanted to update a GSI by changing its range key.
- First you have to delete the GSI you're updating; also remember to remove any AttributeDefinition that is no longer needed after the removal of the GSI (i.e. the index key attributes). Upload the template via CloudFormation to apply the changes.
- Then add the needed attributes and the 'updated' GSI to the template.
Back up all the data from DynamoDB, and then, if you are using Serverless, run either of the commands below.
Remove with the locally installed Serverless:
node ./node_modules/serverless/bin/serverless remove
Or with the globally installed one:
serverless remove
Then deploy it again by running:
node ./node_modules/serverless/bin/serverless deploy -v
or
serverless deploy