dynamodb describe-table encryption status - amazon-web-services

In the AWS Console, the DynamoDB table shows Encryption as "DEFAULT"... looking at the documentation, this means the table may be encrypted using an AWS owned CMK (customer master key)...
But is there a way to know for sure that the table is encrypted? And if yes, what type of encryption is in place?
The "describe-table" command doesn't output any information about encryption.
C:\Users\test>aws dynamodb describe-table --profile snpp --table-name mydynamodbtable
{
    "Table": {
        "TableArn": "arn:aws:dynamodb:us-east-1:902919223373:table/mydynamodbtable",
        "AttributeDefinitions": [
            {
                "AttributeName": "hashKey",
                "AttributeType": "S"
            },
            {
                "AttributeName": "rangeKey",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 100,
            "ReadCapacityUnits": 400
        },
        "TableSizeBytes": 45160931,
        "TableName": "mydynamodbtable",
        "TableStatus": "ACTIVE",
        "TableId": "0e75b671-75bf-41ac-9cd1-f75ee3f787ca",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "hashKey"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "rangeKey"
            }
        ],
        "ItemCount": 206363,
        "CreationDateTime": 1529442343.583
    }
}

https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-dynamodb-encrypts-all-customer-data-at-rest/
Per this November 15, 2018 announcement, all DynamoDB data at rest is encrypted, except in the AWS GovCloud (US-West), AWS GovCloud (US-East), China (Beijing), and China (Ningxia) regions.
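You can also check from the CLI by querying the SSEDescription field of describe-table; a rough sketch (my understanding is that this field is only populated for KMS-based encryption, so on a current CLI its absence indicates the AWS owned key):
aws dynamodb describe-table --profile snpp --table-name mydynamodbtable \
    --query "Table.SSEDescription"
If this prints an object with "SSEType": "KMS", the table uses an AWS managed or customer managed KMS key (check KMSMasterKeyArn for which one); if it prints null, the table is encrypted with the AWS owned key.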

Related

How can we use the same Tags in two AWS::DynamoDB::Table resources within a CloudFormation template

I'm trying to create Amazon DynamoDB tables using a CloudFormation template. My question is: can I reuse the same Tags across multiple tables using Ref?
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"Status": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"AttributeDefinitions": [
{
"AttributeName": "SId",
"AttributeType": "S"
}
],
"KeySchema": [
{
"AttributeName": "SId",
"KeyType": "HASH"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": "1",
"WriteCapacityUnits": "1"
},
"TableName": "Statuscf",
"Tags": [
{
"Key": "Application",
"Value": "BFMS"
},
{
"Key": "Name",
"Value": "EventSourcingDataStore"
}
]
}
},
"BMSHSData": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "Billing.FmsDatacf",
"Tags": [{"Ref":"/Status/Tags"}]
}
}
}
Please suggest how I can use the same tags in another table. I am currently trying it like this: "Tags": [{"Ref":"/Status/Tags"}].
The only way to do this using plain CloudFormation is copy-and-paste, so you have to replicate your tags for all tables "manually". The only automated solution would be to develop a CloudFormation macro or custom resource. Yet another option is to use nested stacks.
To resolve this problem, pass the tag values in the Parameters section of the CloudFormation template, then reference those parameters in each DynamoDB table's Tags, like this:
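A minimal sketch of that approach (the parameter names ApplicationTag and NameTag are my own; other required table properties such as KeySchema are omitted for brevity):
{
    "Parameters": {
        "ApplicationTag": { "Type": "String", "Default": "BFMS" },
        "NameTag": { "Type": "String", "Default": "EventSourcingDataStore" }
    },
    "Resources": {
        "Status": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "Tags": [
                    { "Key": "Application", "Value": { "Ref": "ApplicationTag" } },
                    { "Key": "Name", "Value": { "Ref": "NameTag" } }
                ]
            }
        }
    }
}
Repeat the same Tags block (with the same Refs) in each table; the values then live in one place, even though the block itself is still duplicated.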

Trying to get the route table ID in the AWS CLI using the tag name as a filter

How can I get the AWS route table ID using a tag name as the filter?
The tag name I want to look for is - eksctl-live-cluster/PublicRouteTable
In the example below, the end result is that I want the command to return the ID "rtb-0b6d5359a281c6fd9".
Using the command below I can get all the info for all the route tables in my VPC. I have tried adding tags and names in the query part, unsuccessfully, and played around with --filter. I just want to get the ID of the one table whose Name tag is "eksctl-live-cluster/PublicRouteTable".
aws ec2 describe-route-tables --filters "Name=vpc-id,Values=vpc-0a75516801dc9a130" --query "RouteTables[]"
Here is the output of all the route tables when I use the first command -
[
    {
        "Associations": [
            {
                "AssociationState": {
                    "State": "associated"
                },
                "RouteTableAssociationId": "rtbassoc-07ef991c747ba58a5",
                "Main": true,
                "RouteTableId": "rtb-0ad0dde171cc946c9"
            }
        ],
        "RouteTableId": "rtb-0ad0dde171cc946c9",
        "VpcId": "vpc-0a75516801dc9a130",
        "PropagatingVgws": [],
        "Tags": [],
        "Routes": [
            {
                "GatewayId": "local",
                "DestinationCidrBlock": "10.170.0.0/16",
                "State": "active",
                "Origin": "CreateRouteTable"
            }
        ],
        "OwnerId": "000000000"
    },
    {
        "Associations": [
            {
                "SubnetId": "subnet-0e079eb96b85fc72c",
                "AssociationState": {
                    "State": "associated"
                },
                "RouteTableAssociationId": "rtbassoc-062f19d9175f4f596",
                "Main": false,
                "RouteTableId": "rtb-0b6d5359a281c6fd9"
            },
            {
                "SubnetId": "subnet-0b1fae931da8c9d8f",
                "AssociationState": {
                    "State": "associated"
                },
                "RouteTableAssociationId": "rtbassoc-0a22d395d0b6196ac",
                "Main": false,
                "RouteTableId": "rtb-0b6d5359a281c6fd9"
            }
        ],
        "RouteTableId": "rtb-0b6d5359a281c6fd9",
        "VpcId": "vpc-0a75516801dc9a130",
        "PropagatingVgws": [],
        "Tags": [
            {
                "Value": "live",
                "Key": "eksctl.cluster.k8s.io/v1alpha1/cluster-name"
            },
            {
                "Value": "live",
                "Key": "alpha.eksctl.io/cluster-name"
            },
            {
                "Value": "0.29.2",
                "Key": "alpha.eksctl.io/eksctl-version"
            },
            {
                "Value": "PublicRouteTable",
                "Key": "aws:cloudformation:logical-id"
            },
            {
                "Value": "eksctl-live-cluster",
                "Key": "aws:cloudformation:stack-name"
            },
            {
                "Value": "eksctl-live-cluster/PublicRouteTable",
                "Key": "Name"
            },
            {
                "Value": "arn:aws:cloudformation:us-east-1:000000000:stack/eksctl-live-cluster/ef543610-3981-11eb-abcc-0af655d000e7",
                "Key": "aws:cloudformation:stack-id"
            }
        ],
        "Routes": [
            {
                "GatewayId": "local",
                "DestinationCidrBlock": "10.170.0.0/16",
                "State": "active",
                "Origin": "CreateRouteTable"
            },
            {
                "GatewayId": "igw-072414b2b1d313970",
                "DestinationCidrBlock": "0.0.0.0/0",
                "State": "active",
                "Origin": "CreateRoute"
            }
        ],
        "OwnerId": "996762160"
    }
]
This should return what you're looking for:
aws ec2 describe-route-tables --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' --query 'RouteTables[].Associations[].RouteTableId'
In general you can filter with tags using the tag:<tag name> construct. I'm not sure what a / value will do.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
If you want to further filter it by VPC id, you can add on to the filter like this:
aws ec2 describe-route-tables --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' Name=vpc-id,Values=<VPC ID> --query 'RouteTables[].Associations[].RouteTableId'
This line:
aws ec2 --profile prod --region eu-west-1 describe-route-tables --filters Name=tag:Name,Values=private-route-table-eu-west-1b --query 'RouteTables[].Associations[].RouteTableId'
Returned the following because my route table is associated with two subnets:
[
    "rtb-04d4b860",
    "rtb-04d4b860"
]
If you need unique output you could pipe this all through jq, sort, and uniq:
| jq -r '.[]' | sort | uniq
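Alternatively, since each route table also carries a top-level RouteTableId in the response (as in the output above), you can avoid the duplicates entirely by querying it directly instead of going through Associations[]; a sketch using the same filter:
aws ec2 describe-route-tables \
    --filters 'Name=tag:Name,Values=eksctl-live-cluster/PublicRouteTable' \
    --query 'RouteTables[].RouteTableId' --output text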
References
describe-route-tables

How to get the cost for each EC2 instance, not the total cost for all EC2, from the AWS API

I'm studying the AWS API to retrieve the requisite information about my EC2 instances.
So I'm looking at the AWS Cost Explorer service.
It has a function 'GetCostAndUsage' that, for example, accepts the request below (this is an example from the official AWS documentation):
{
    "TimePeriod": {
        "Start": "2017-09-01",
        "End": "2017-10-01"
    },
    "Granularity": "MONTHLY",
    "Filter": {
        "Dimensions": {
            "Key": "SERVICE",
            "Values": [
                "Amazon Simple Storage Service"
            ]
        }
    },
    "GroupBy": [
        {
            "Type": "DIMENSION",
            "Key": "SERVICE"
        },
        {
            "Type": "TAG",
            "Key": "Environment"
        }
    ],
    "Metrics": ["BlendedCost", "UnblendedCost", "UsageQuantity"]
}
and returns the information below (also an example from the official AWS documentation):
{
    "GroupDefinitions": [
        {
            "Key": "SERVICE",
            "Type": "DIMENSION"
        },
        {
            "Key": "Environment",
            "Type": "TAG"
        }
    ],
    "ResultsByTime": [
        {
            "Estimated": false,
            "Groups": [
                {
                    "Keys": [
                        "Amazon Simple Storage Service",
                        "Environment$Prod"
                    ],
                    "Metrics": {
                        "BlendedCost": {
                            "Amount": "39.1603300457",
                            "Unit": "USD"
                        },
                        "UnblendedCost": {
                            "Amount": "39.1603300457",
                            "Unit": "USD"
                        },
                        "UsageQuantity": {
                            "Amount": "173842.5440074444",
                            "Unit": "N/A"
                        }
                    }
                },
                {
                    "Keys": [
                        "Amazon Simple Storage Service",
                        "Environment$Test"
                    ],
                    "Metrics": {
                        "BlendedCost": {
                            "Amount": "0.1337464807",
                            "Unit": "USD"
                        },
                        "UnblendedCost": {
                            "Amount": "0.1337464807",
                            "Unit": "USD"
                        },
                        "UsageQuantity": {
                            "Amount": "15992.0786663399",
                            "Unit": "N/A"
                        }
                    }
                }
            ],
            "TimePeriod": {
                "End": "2017-10-01",
                "Start": "2017-09-01"
            },
            "Total": {}
        }
    ]
}
The data retrieved under the 'Metrics' key is, I guess, the total cost, not the cost per instance.
So how can I get the usage and cost for each EC2 instance?
This was way harder than I had imagined so I'm sharing in case someone else needs it.
aws ce get-cost-and-usage \
--filter file://filters.json \
--time-period Start=2021-08-01,End=2021-08-14 \
--granularity DAILY \
--metrics "BlendedCost" \
--group-by Type=TAG,Key=Name
Contents of filters.json:
{
    "Dimensions": {
        "Key": "SERVICE",
        "Values": [
            "Amazon Elastic Compute Cloud - Compute"
        ]
    }
}
--- Available Metrics ---
AmortizedCost
BlendedCost
NetAmortizedCost
NetUnblendedCost
NormalizedUsageAmount
UnblendedCost
UsageQuantity
Descriptions for most metrics except for usage: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-advanced.html
I know this question is old, but you will need to use the GetCostAndUsageWithResources call, as opposed to GetCostAndUsage.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ce/get-cost-and-usage-with-resources.html
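A rough sketch of what that call might look like from the CLI, reusing the filters.json from the answer above (the dates are placeholders; my understanding is that this API requires a filter and only serves roughly the last 14 days of resource-level data, per the linked reference):
aws ce get-cost-and-usage-with-resources \
    --time-period Start=2021-08-01,End=2021-08-14 \
    --granularity DAILY \
    --metrics UnblendedCost \
    --filter file://filters.json \
    --group-by Type=DIMENSION,Key=RESOURCE_ID
Grouping by the RESOURCE_ID dimension should break the cost out per instance ID rather than per service.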
It's going to be difficult to associate an exact cost with each instance - simple example, you have 2 instances of the same size - one reserved and one on-demand - you run both for 1/2 the month and then turn off one of them for the second 1/2 of the month.
You will pay for a reserved instance for the entire month and an on-demand instance for 1/2 the month - but which instance was reserved and which was on-demand? You can't tell; the concept of a reserved instance is just a billing concept, and is not associated with a particular instance.
You might be able to approximate what you are looking for - but there are limitations.
You can use tags to track the cost of resources. In the case of EC2 you can assign tags like Project: myproject or Application: myapp, then filter expenses by tag in Cost Explorer and use that tag to track the expenses. If the instance was covered by a reservation plan at some point, the tag will only show you the cost for the period in which your expenses were not covered.
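For reference, the same tag-based filtering can be done from the CLI; a sketch assuming a cost allocation tag named Project with value myproject (both placeholders):
aws ce get-cost-and-usage \
    --time-period Start=2021-08-01,End=2021-08-14 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --filter '{"Tags": {"Key": "Project", "Values": ["myproject"]}}'
Note the tag must be activated as a cost allocation tag in the Billing console before it shows up in Cost Explorer.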

Creating dynamodb table using aws cli "--cli-input-json"

I have been trying to create a DynamoDB table using the following JSON (testbootstraptable.json) file:
{
    "AttributeDefinitions": [
        {
            "AttributeName": "test1",
            "AttributeType": "S"
        },
        {
            "AttributeName": "test2",
            "AttributeType": "S"
        }
    ],
    "TableName": "BOOTSTRAP_TEST_TBL",
    "KeySchema": [
        {
            "AttributeName": "test1",
            "KeyType": "HASH"
        },
        {
            "AttributeName": "test2",
            "KeyType": "RANGE"
        }
    ],
    "ProvisionedThroughput": {
        "NumberOfDecreasesToday": 0,
        "ReadCapacityUnits": 35,
        "WriteCapacityUnits": 35
    }
}
I have tried multiple times with different variations based on Google searches but keep getting the following error:
Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0)
JSON received: testbootstraptable.json
AWS Command:
$ aws dynamodb create-table --cli-input-json testbootstraptable.json --region us-west-2
Add "file://" to testbootstraptable.json
aws dynamodb create-table --cli-input-json file://testbootstraptable.json --region us-west-2
Also, delete the following line as it is not correct:
"NumberOfDecreasesToday": 0,
This is related to the question.
===
I initially started with JSON and wasted a lot of time.
Then I switched to a GUI for creating tables:
I installed and used NoSQL Workbench for DynamoDB.
Download link: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html

Unable to add GSI to DynamoDB table using CloudFormation

I have an existing DynamoDB table that is defined as part of a CloudFormation stack. According to the CFN AWS::DynamoDB::Table documentation, the GlobalSecondaryIndexes attribute does not require replacement. It even goes into detail with the following caveats.
You can delete or add one global secondary index without interruption.
As well as the following...
If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table.
However, in practice when I attempt to perform an update I get the following error message:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename mytablename and update the stack again.
Since I'm adding a GSI that uses a new attribute, I'm forced to modify AttributeDefinitions, which the documentation says does require replacement. However, even when I try to add a GSI using only existing attributes already defined in AttributeDefinitions, I still get the same error message.
Here is the snippet from my original CFN definition for my table:
{
    "myTable": {
        "Type": "AWS::DynamoDB::Table",
        "Properties": {
            "TableName": "mytablename",
            "AttributeDefinitions": [
                {
                    "AttributeName": "entryId",
                    "AttributeType": "S"
                },
                {
                    "AttributeName": "entryName",
                    "AttributeType": "S"
                },
                {
                    "AttributeName": "appId",
                    "AttributeType": "S"
                }
            ],
            "KeySchema": [
                {
                    "KeyType": "HASH",
                    "AttributeName": "entryId"
                },
                {
                    "KeyType": "RANGE",
                    "AttributeName": "entryName"
                }
            ],
            "ProvisionedThroughput": {
                "ReadCapacityUnits": {
                    "Ref": "readThroughput"
                },
                "WriteCapacityUnits": {
                    "Ref": "writeThroughput"
                }
            },
            "GlobalSecondaryIndexes": [
                {
                    "IndexName": "appId-index",
                    "KeySchema": [
                        {
                            "KeyType": "HASH",
                            "AttributeName": "appId"
                        }
                    ],
                    "Projection": {
                        "ProjectionType": "KEYS_ONLY"
                    },
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": {
                            "Ref": "readThroughput"
                        },
                        "WriteCapacityUnits": {
                            "Ref": "writeThroughput"
                        }
                    }
                }
            ]
        }
    }
}
Here is what I want to update it to:
{
    "myTable": {
        "Type": "AWS::DynamoDB::Table",
        "Properties": {
            "TableName": "mytablename",
            "AttributeDefinitions": [
                {
                    "AttributeName": "entryId",
                    "AttributeType": "S"
                },
                {
                    "AttributeName": "entryName",
                    "AttributeType": "S"
                },
                {
                    "AttributeName": "appId",
                    "AttributeType": "S"
                },
                {
                    "AttributeName": "userId",
                    "AttributeType": "S"
                }
            ],
            "KeySchema": [
                {
                    "KeyType": "HASH",
                    "AttributeName": "entryId"
                },
                {
                    "KeyType": "RANGE",
                    "AttributeName": "entryName"
                }
            ],
            "ProvisionedThroughput": {
                "ReadCapacityUnits": {
                    "Ref": "readThroughput"
                },
                "WriteCapacityUnits": {
                    "Ref": "writeThroughput"
                }
            },
            "GlobalSecondaryIndexes": [
                {
                    "IndexName": "appId-index",
                    "KeySchema": [
                        {
                            "KeyType": "HASH",
                            "AttributeName": "appId"
                        }
                    ],
                    "Projection": {
                        "ProjectionType": "KEYS_ONLY"
                    },
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": {
                            "Ref": "readThroughput"
                        },
                        "WriteCapacityUnits": {
                            "Ref": "writeThroughput"
                        }
                    }
                },
                {
                    "IndexName": "userId-index",
                    "KeySchema": [
                        {
                            "KeyType": "HASH",
                            "AttributeName": "userId"
                        }
                    ],
                    "Projection": {
                        "ProjectionType": "KEYS_ONLY"
                    },
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": {
                            "Ref": "readThroughput"
                        },
                        "WriteCapacityUnits": {
                            "Ref": "writeThroughput"
                        }
                    }
                }
            ]
        }
    }
}
However, as I mentioned before, even if I do not define userId in AttributeDefinitions and use an existing attribute in the new GSI definition, it still fails with the same error message.
I had the same error today and got an answer from Amazon tech support. The problem is that you supplied a TableName field. CloudFormation wants to be in charge of naming your tables for you. Apparently, when you supply your own name, this is the error you get on an update that replaces the table (not sure why it needs to replace, but that's what the doc says).
For me, this makes CloudFormation utterly useless for maintaining my DynamoDB tables. I'd have to build in configuration so that my code could dynamically tell what the random table name was that CloudFormation generated for me.
AWS support's response to me FWIW:
Workaround A
1. Export the data from the table to S3.
2. Update the stack with a new table name (tablename2), with the GSI added. Note this loses all current entries, so definitely back up to S3 first!
3. Update the stack again, back to using tablename1 for the DynamoDB table.
4. Import the data from S3. This can be eased by using Data Pipeline; see
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
The advantage is that app code can keep using fixed names, but updating the stack twice and exporting/importing data will take some work to automate in custom scripts.
Workaround B
1. Back up the data.
2. Let CloudFormation name the table.
3. Use the AWS SDK to retrieve the table name by describing the stack resource by logical ID and fetching the table name from the output (see the sketch after this response).
While I think this avoids extra stack updates (I still think exporting/importing data will be required), the disadvantage is a network call in code to fetch the table name. See
* http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#describeStackResource-property
Again, this is a known issue that support is pushing the service team on, as we know it is a quite common use case and pain point. Please try a workaround in a test environment before trying it in production.
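For step 3 of Workaround B, a minimal sketch of fetching the generated table name from the CLI (the stack name mystack is a placeholder; myTable is the logical ID from the question):
aws cloudformation describe-stack-resource \
    --stack-name mystack \
    --logical-resource-id myTable \
    --query 'StackResourceDetail.PhysicalResourceId' --output text
The PhysicalResourceId returned for an AWS::DynamoDB::Table resource is the generated table name.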
How did the issue happen here?
In my case, I deleted the GSI manually in the DynamoDB console, then added the GSI back via CloudFormation; update-stack gave this error.
Solution: remove the GSI from the CloudFormation template and run update-stack, then add the GSI back and run update-stack again; this works fine.
My guess is that CloudFormation keeps its own state and cannot detect changes you've made manually in the console.
My scenario was that I wanted to update a GSI by changing its range key.
- First, you have to delete the GSI that you're updating; also remember to remove any AttributeDefinition that might no longer be needed after the removal of the GSI (i.e. the index key attributes). Upload the template via CloudFormation to apply the changes.
- Then add the needed attributes and the 'updated' GSI to the template.
Back up all the data from DynamoDB, and after that, if you are using Serverless, run either of the commands below:
local install:
node ./node_modules/serverless/bin/serverless remove
global install:
serverless remove
and deploy it again by running:
node ./node_modules/serverless/bin/serverless deploy -v
or
serverless deploy