I need to run a query on DynamoDB. This is the code I have so far:
AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(creds);
client.withRegion(Regions.US_WEST_2);
DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(creds));
Table table = dynamoDB.getTable("dev");
QuerySpec spec = new QuerySpec().withKeyConditionExpression("tableKey = :none.json");
ItemCollection<QueryOutcome> items = table.query(spec);
System.out.println(table);
The returned value of table is {dev: null}, which means that the description is null.
It's worth noting that with the AWS CLI command aws dynamodb list-tables I get a list of all my tables, yet the same operation in my code, dynamoDB.listTables(), returns an empty list.
Is there something that I'm doing wrong?
Do I need to define some more credentials before using the DynamoDB API?
I was getting the same problem and landed here looking for a solution. As mentioned in the javadoc of getDescription:
Returns the table description; or null if the table description has not yet been described via {@link #describe()}. No network call.
Initially the description is set to null. After the first call to describe(), which makes a network call, the description gets set, and getDescription() can be used after that.
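For example, a minimal sketch against the v1 document API (reusing the "dev" table from the question):

// describe() makes a network call and caches the result on the Table object.
Table table = dynamoDB.getTable("dev");
TableDescription description = table.describe(); // network call
System.out.println(description.getTableStatus()); // e.g. "ACTIVE"
System.out.println(table.getDescription()); // now non-null, no further network call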
I'm new to DynamoDB and the intricacies of querying it. I understand (hopefully correctly) that I need either a partition key or a Global Secondary Index (GSI) in order to query against that value in the table.
I know I can use AppSync to query on a GSI by setting up a resolver, and this works. However, I have a setup using the Java AWS CDK (I'm writing in Kotlin) where I'm using AppSync and routing my queries into Lambda resolvers (so that once this works, I can do more complicated things later).
The crux of the issue is that when I set up a Lambda to resolve my query, I end up with this error returned from the Lambda: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Query condition missed key schema element: testName.
I think these should be the key snippets...
My DynamoDbBean:
@DynamoDbBean
data class Test(
    @get:DynamoDbPartitionKey var id: String = "",
    @get:DynamoDbSecondaryPartitionKey(indexNames = ["testNameIndex"])
    var testName: String = "",
)
Using the CDK, I created the GSI:
testTable.addGlobalSecondaryIndex(
    GlobalSecondaryIndexProps.builder()
        .indexName("testNameIndex")
        .partitionKey(
            Attribute.builder()
                .name("testName")
                .type(AttributeType.STRING)
                .build()
        )
        .projectionType(ProjectionType.ALL)
        .build()
)
Then, within my Lambda, I am trying to query my DynamoDB table, using a fixed value here (testName = "A").
My sample data in the Test table would be like so:
{
    "id": "SomeUUID",
    "testName": "A"
}
private var client: AmazonDynamoDB = AmazonDynamoDBClientBuilder.standard().build()
private var dynamoDB: DynamoDB = DynamoDB(client)
Lambda Resolver Snippets...
val table: Table = dynamoDB.getTable(TABLE_NAME)
val index: Index = table.getIndex("testNameIndex")
...
val querySpec = QuerySpec().withKeyConditionExpression("testNameIndex = :testName")
    .withValueMap(ValueMap().withString(":testName", "A"))
val iterator: Iterator<Item> = index.query(querySpec).iterator()
while (iterator.hasNext()) {
    logger.info(iterator.next().toJSONPretty())
}
This is what results in the error: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Query condition missed key schema element: testName
Am I on the wrong lines here? I know there is some mixing of libraries between the 'enhanced' DynamoDB SDK and the dynamodbv2 SDK, so if there is a better way of doing this query, I'd love to know!
Thanks!
Your QuerySpec's withKeyConditionExpression is initialized incorrectly. The key condition must reference the index's partition key attribute, testName, not the index name testNameIndex:
QuerySpec().withKeyConditionExpression("testName = :testName")
.withValueMap(ValueMap().withString(":testName", "A"))
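On the side question about mixing libraries: the annotations on the Test bean come from the SDK v2 enhanced client, while QuerySpec, Table, and Index come from the v1 document API. For reference, here is a sketch (in Java) of the same GSI query done entirely with the v2 enhanced client; the table name "testTable" is an assumption, since the actual name comes from your CDK deployment:

import software.amazon.awssdk.core.pagination.sync.SdkIterable;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbIndex;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.Key;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
import software.amazon.awssdk.enhanced.dynamodb.model.Page;
import software.amazon.awssdk.enhanced.dynamodb.model.QueryConditional;

// Hedged sketch: query the GSI through the v2 enhanced client only.
DynamoDbEnhancedClient enhanced = DynamoDbEnhancedClient.create();
DynamoDbTable<Test> testTable = enhanced.table("testTable", TableSchema.fromBean(Test.class));
DynamoDbIndex<Test> testIndex = testTable.index("testNameIndex");

// The key condition is built from the index's partition key value, not its name.
SdkIterable<Page<Test>> pages = testIndex.query(
        QueryConditional.keyEqualTo(Key.builder().partitionValue("A").build()));
pages.forEach(page -> page.items().forEach(System.out::println));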
I have a serverless Aurora DB on AWS RDS (with the Data API enabled) that I would like to query using the database resource and secret ARNs. The code to do this is shown below.
import boto3

rds_data = boto3.client('rds-data', region_name='us-west-1')
response = rds_data.execute_statement(
    resourceArn='DATABASE_ARN',
    secretArn='DATABASE_SECRET',
    database='db',
    sql=query)
rds_data.close()
The response contains the returned records, but the column names are not shown and the format is interesting (see below).
'records': [[{'stringValue': 'name'}, {'stringValue': '5'}, {'stringValue': '3'}, {'stringValue': '1'}, {'stringValue': '2'}, {'stringValue': '4'}]]
I want to query the Aurora database and then create a pandas DataFrame. Is there a better way to do this? I looked at just using psycopg2 and creating a database connection, but I believe you need a username and password. I would rather not go that route, and instead authenticate using IAM.
The answer was actually pretty simple. There is a parameter called formatRecordsAs in the execute_statement function. If you set it to 'JSON', you get back much more usable records.
response = rds_data.execute_statement(
    resourceArn='DATABASE_ARN',
    secretArn='DATABASE_SECRET',
    database='db',
    formatRecordsAs='JSON',
    sql=query)
This still gives you back a string, so you then need to convert that string representation into a list of dictionaries:
import ast
list_of_dicts = ast.literal_eval(response['formattedRecords'])
That can then be turned into a pandas DataFrame:
df = pd.DataFrame(list_of_dicts)
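Since formattedRecords is a JSON string, json.loads is arguably the safer parser here: ast.literal_eval fails on JSON literals such as true, false, and null, which are not valid Python. A sketch of the same flow:

import json
import pandas as pd

# With formatRecordsAs='JSON', formattedRecords holds a JSON array of row objects.
list_of_dicts = json.loads(response['formattedRecords'])
df = pd.DataFrame(list_of_dicts)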
As per the DynamoDB Java SDK v2, the update operation can be performed as:
DynamoDbTable<Customer> mappedTable = enhancedClient.table("Customer", TableSchema.fromBean(Customer.class));
Key key = Key.builder()
        .partitionValue(keyVal)
        .build();
Customer customerRec = mappedTable.getItem(r -> r.key(key));
customerRec.setEmail(email);
mappedTable.updateItem(customerRec);
1. Shouldn't this cause two calls to DynamoDB?
2. What if, after the record is fetched and before the updateItem call, another thread updates the record? Then we would have to put it into a transaction as well.
Although there is another way, using UpdateItemEnhancedRequest:
final var request = UpdateItemEnhancedRequest.builder(Customer.class)
        .item(updatedCustomerObj)
        .ignoreNulls(Boolean.TRUE)
        .build();
mappedTable.updateItem(request);
but this requires ignoreNulls(true) and will not handle updates where a null value is actually meant to be set.
What is the optimal way to do an update operation using the enhanced client?
For 1:
I don't like this part of the SDK, but the "non-enhanced" client can update without first querying for the entity:
UpdateItemRequest updateItemRequest = UpdateItemRequest
        .builder()
        .tableName("Customer")
        .key(
            Map.of(
                "PK", AttributeValue.builder().s("pk").build(),
                "SK", AttributeValue.builder().s("sk").build()
            ))
        .attributeUpdates(
            Map.of(
                "email", AttributeValueUpdate.builder()
                    .action(AttributeAction.PUT)
                    .value(AttributeValue.builder().s("new-email@gmail.com").build())
                    .build()
            ))
        .build();
client.updateItem(updateItemRequest);
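Note that attributeUpdates is the legacy form of UpdateItem. The same single call can be written with an update expression; a sketch, assuming the same PK/SK schema as above:

// Hedged sketch: one UpdateItem call using an update expression
// instead of the legacy attributeUpdates parameter.
UpdateItemRequest request = UpdateItemRequest.builder()
        .tableName("Customer")
        .key(Map.of(
                "PK", AttributeValue.builder().s("pk").build(),
                "SK", AttributeValue.builder().s("sk").build()))
        .updateExpression("SET email = :email")
        .expressionAttributeValues(Map.of(
                ":email", AttributeValue.builder().s("new-email@gmail.com").build()))
        .build();
client.updateItem(request);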
For 2:
Yes, you are absolutely right: that can happen and will lead to data inconsistency. To avoid it, you should use a conditional write/update with a version number. Luckily there is a Java annotation for this (in the v2 enhanced client it is @DynamoDbVersionAttribute; the v1 mapper's equivalent is @DynamoDBVersionAttribute):
@DynamoDbVersionAttribute
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version; }
More info here
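A minimal sketch of what the version check buys you, assuming the v2 enhanced client (whose default VersionedRecordExtension enforces the condition) and the mappedTable/key from the question:

// Hedged sketch: a stale write fails instead of silently overwriting.
Customer stale = mappedTable.getItem(r -> r.key(key)); // reads, say, version = 1
// ... another writer updates the same item, bumping version to 2 ...
stale.setEmail("new@example.com");
try {
    mappedTable.updateItem(stale); // condition "version = 1" no longer holds
} catch (ConditionalCheckFailedException e) { // software.amazon.awssdk.services.dynamodb.model
    // reload the item and retry, or surface the conflict to the caller
}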
I'm a bit new to this, so I apologize if it's basic.
We have an AWS Parameter Store managed by Terraform.
We use a module for it: the values of the keys are not stored, and each time we run terraform apply we need to enter them manually.
Suppose I'd like to add a new key. I'd need to edit the .tf files to add another key, and when I run terraform apply I need to re-enter the values of all existing keys and then enter the value for the new key.
I'm looking for a way to add another key so that when I run terraform apply, Terraform sees that only the new key was added and leaves all the existing keys unmodified. I was thinking about passing null for the variables of the existing keys; when Terraform sees null it would disregard that parameter and not touch it.
resource "aws_ssm_parameter" "default" {
count = var.enabled ? 1 : 0
name = "${module.label.id}-parameter"
type = var.type
value = var.value
description = var.description
tier = var.tier
key_id = var.key_id
overwrite = var.overwrite
allowed_pattern = var.allowed_pattern
}
This is the module I'm using.
I am working on a Lambda function that gets called from API Gateway and updates information in DynamoDB. I have half of this working really dynamically, and I'm a little stuck on updating. Here is what I'm working with:
a DynamoDB table with a partition key of guild_id
The dummy JSON I'm using:
{
    "guild_id": "126",
    "guild_name": "Posted Guild",
    "guild_premium": "true",
    "guild_prefix": "z!"
}
Finally, the Lambda code:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.resource("dynamodb")
    table = client.Table("guildtable")
    itemData = json.loads(event['body'])
    guild = table.get_item(Key={'guild_id': itemData['guild_id']})
    # If guild exists, update
    if 'Item' in guild:
        table.update_item(Key=itemData)
        responseObject = {}
        responseObject['statusCode'] = 200
        responseObject['headers'] = {}
        responseObject['headers']['Content-Type'] = 'application/json'
        responseObject['body'] = json.dumps('Updated Guild!')
        return responseObject
    # New guild, insert guild
    table.put_item(Item=itemData)
    responseObject = {}
    responseObject['statusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps('Inserted Guild!')
    return responseObject
The insert part is working wonderfully. How would I accomplish a similar approach with update_item? I want this to be as dynamic as possible so I can throw any JSON (within reason) at it and have it stored in the database. I also want my update method to account for fields added down the road and handle those.
I get the following error:
Lambda execution failed with status 200 due to customer function error: An error occurred (ValidationException) when calling the UpdateItem operation: The provided key element does not match the schema.
A "The provided key element does not match the schema" error means something is wrong with Key (= primary key). Your schema's primary key is guild_id: string. Non-key attributes belong in the AttributeUpdate parameter. See the docs.
Your itemdata appears to include non-key attributes. Also ensure guild_id is a string "123" and not a number type 123.
goodKey={"guild_id": "123"}
table.update_item(Key=goodKey, UpdateExpression="SET ...")
The docs have a full update_item example.
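If you want the update to stay dynamic, one option is to build the UpdateExpression from whatever non-key attributes arrive in the body. A sketch under that assumption, reusing table and event from the question's handler (placeholder names are illustrative):

# Hedged sketch: derive the update expression from the incoming JSON,
# so fields added down the road need no code changes.
# Assumes at least one non-key attribute is present in the body.
item_data = json.loads(event['body'])
key = {'guild_id': item_data.pop('guild_id')}  # the key must not appear in the update

update_expression = "SET " + ", ".join(f"#a{i} = :v{i}" for i in range(len(item_data)))
attribute_names = {f"#a{i}": name for i, name in enumerate(item_data)}
attribute_values = {f":v{i}": value for i, value in enumerate(item_data.values())}

table.update_item(
    Key=key,
    UpdateExpression=update_expression,
    ExpressionAttributeNames=attribute_names,
    ExpressionAttributeValues=attribute_values,
)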