Update attributes using UpdateItemEnhancedRequest (DynamoDbEnhancedClient)

As per the DynamoDB Java SDK v2, the update operation can be performed as:
DynamoDbTable<Customer> mappedTable = enhancedClient.table("Customer", TableSchema.fromBean(Customer.class));
Key key = Key.builder()
        .partitionValue(keyVal)
        .build();
Customer customerRec = mappedTable.getItem(r -> r.key(key));
customerRec.setEmail(email);
mappedTable.updateItem(customerRec);
Shouldn't this cause two calls to DynamoDB?
And what if, after fetching the record and before the updateItem call, another thread updates the record? To guard against that, we would have to wrap this in a transaction as well.
There is another way, using UpdateItemEnhancedRequest:
final var request = UpdateItemEnhancedRequest.builder(Customer.class)
        .item(updatedCustomerObj)
        .ignoreNulls(Boolean.TRUE)
        .build();
mappedTable.updateItem(request);
but this requires ignoreNulls(TRUE) and will not handle updates where a null value is to be set.
What is the optimal way to perform an update operation using the enhanced client?

For 1:
I don't like this part of the SDK, but the "non-enhanced" client can update an item without first querying for it:
UpdateItemRequest updateItemRequest = UpdateItemRequest
        .builder()
        .tableName("Customer")
        .key(Map.of(
                "PK", AttributeValue.builder().s("pk").build(),
                "SK", AttributeValue.builder().s("sk").build()))
        .attributeUpdates(Map.of(
                "email", AttributeValueUpdate.builder()
                        .action(AttributeAction.PUT)
                        .value(AttributeValue.builder().s("new-email@gmail.com").build())
                        .build()))
        .build();
client.updateItem(updateItemRequest);
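Note that attributeUpdates is a legacy parameter; the same single-call update can also be written with an update expression. A minimal sketch, reusing the same assumed table and key names:
UpdateItemRequest updateItemRequest = UpdateItemRequest.builder()
        .tableName("Customer")
        .key(Map.of(
                "PK", AttributeValue.builder().s("pk").build(),
                "SK", AttributeValue.builder().s("sk").build()))
        // SET writes the attribute whether or not it already exists.
        .updateExpression("SET email = :email")
        .expressionAttributeValues(Map.of(
                ":email", AttributeValue.builder().s("new-email@gmail.com").build()))
        .build();
client.updateItem(updateItemRequest);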
For 2:
Yes, you are absolutely right: that can happen and will lead to data inconsistency. To avoid it, use a conditional write/update with a version number. Luckily there is a Java annotation for this (@DynamoDBVersionAttribute in the v1 mapper; the v2 enhanced-client equivalent is @DynamoDbVersionAttribute):
@DynamoDbVersionAttribute
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version; }
More info here
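For the v2 enhanced client specifically, here is a minimal sketch of what the versioned bean could look like (class and field names assumed). The enhanced client loads the VersionedRecordExtension by default, so updateItem on a stale record fails with a ConditionalCheckFailedException instead of silently overwriting:
@DynamoDbBean
public class Customer {
    private String id;
    private String email;
    private Long version;

    @DynamoDbPartitionKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // Incremented automatically on every update; used as a condition check.
    @DynamoDbVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}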

Related

Querying DynamoDb with Global Secondary Index Error

I'm new to DynamoDB and the intricacies of querying it. I understand (hopefully correctly) that I need either a partition key or a Global Secondary Index (GSI) in order to query against that value in the table.
I know I can use AppSync to query a GSI by setting up a resolver, and this works. However, I have a setup using the Java AWS CDK (I'm writing in Kotlin) where I'm using AppSync and routing my queries into Lambda resolvers (so that once this works, I can do more complicated things later).
The crux of the issue is that when I set up a Lambda to resolve my query, I end up with this error returned from the Lambda: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Query condition missed key schema element: testName.
I think these should be the key snippets.
My DynamoDbBean:
@DynamoDbBean
data class Test(
    @get:DynamoDbPartitionKey var id: String = "",
    @get:DynamoDbSecondaryPartitionKey(indexNames = ["testNameIndex"])
    var testName: String = "",
)
Using the CDK, I created the GSI:
testTable.addGlobalSecondaryIndex(
    GlobalSecondaryIndexProps.builder()
        .indexName("testNameIndex")
        .partitionKey(
            Attribute.builder()
                .name("testName")
                .type(AttributeType.STRING)
                .build()
        )
        .projectionType(ProjectionType.ALL)
        .build())
Then, within my Lambda, I am trying to query my DynamoDB table, using a fixed value here: testName = "A".
My sample data in the Test table would be like so:
{
    "id": "SomeUUID",
    "testName": "A"
}
private var client: AmazonDynamoDB = AmazonDynamoDBClientBuilder.standard().build()
private var dynamoDB: DynamoDB = DynamoDB(client)
Lambda Resolver Snippets...
val table: Table = dynamoDB.getTable(TABLE_NAME)
val index: Index = table.getIndex("testNameIndex")
...
QuerySpec().withKeyConditionExpression("testNameIndex = :testName")
.withValueMap(ValueMap().withString(":testName", "A"))
val iterator: Iterator<Item> = index.query(querySpec).iterator()
while (iterator.hasNext()) {
logger.info(iterator.next().toJSONPretty())
}
This is what results in the error: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Query condition missed key schema element: testName.
Am I on the wrong lines here? I know there is some mixing of libs between the 'enhanced' DynamoDB SDK and the dynamodbv2 SDK, so if there is a better way of doing this query, I'd love to know!
Thanks!
Your QuerySpec's withKeyConditionExpression is initialized wrong: the key condition must reference the key attribute testName, not the index name testNameIndex. It should be:
QuerySpec().withKeyConditionExpression("testName = :testName")
.withValueMap(ValueMap().withString(":testName", "A"))
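As an aside, since the bean already uses the v2 enhanced annotations, the query can also be expressed entirely with the v2 enhanced client instead of mixing in the v1 document API. A minimal sketch in Java, assuming an existing DynamoDbEnhancedClient named enhancedClient and a table named "TestTable":
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.model.*;

// Map the table and look up the GSI by name.
DynamoDbTable<Test> table = enhancedClient.table("TestTable", TableSchema.fromBean(Test.class));
DynamoDbIndex<Test> index = table.index("testNameIndex");

// Query the GSI by its partition key value; results come back in pages.
index.query(QueryConditional.keyEqualTo(Key.builder().partitionValue("A").build()))
        .forEach(page -> page.items().forEach(System.out::println));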

Dynamically Insert/Update Item in DynamoDB With Python Lambda using event['body']

I am working on a Lambda function that gets called from API Gateway and updates information in DynamoDB. I have half of this working really dynamically, and I'm a little stuck on updating. Here is what I'm working with:
A DynamoDB table with a partition key of guild_id.
The dummy JSON I'm using:
{
    "guild_id": "126",
    "guild_name": "Posted Guild",
    "guild_premium": "true",
    "guild_prefix": "z!"
}
Finally, the Lambda code:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.resource("dynamodb")
    table = client.Table("guildtable")
    itemData = json.loads(event['body'])
    guild = table.get_item(Key={'guild_id': itemData['guild_id']})
    # If guild exists, update
    if 'Item' in guild:
        table.update_item(Key=itemData)
        responseObject = {}
        responseObject['statusCode'] = 200
        responseObject['headers'] = {}
        responseObject['headers']['Content-Type'] = 'application/json'
        responseObject['body'] = json.dumps('Updated Guild!')
        return responseObject
    # New guild, insert guild
    table.put_item(Item=itemData)
    responseObject = {}
    responseObject['statusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps('Inserted Guild!')
    return responseObject
The insert part is working wonderfully. How would I accomplish a similar approach with update_item? I want this to be as dynamic as possible so I can throw any JSON (within reason) at it and have it stored in the database. I also want my update method to take into account fields added down the road and handle those.
I get the following error:
Lambda execution failed with status 200 due to customer function error: An error occurred (ValidationException) when calling the UpdateItem operation: The provided key element does not match the schema.
A "The provided key element does not match the schema" error means something is wrong with Key (= primary key). Your schema's primary key is guild_id: string. Non-key attributes belong in the AttributeUpdate parameter. See the docs.
Your itemdata appears to include non-key attributes. Also ensure guild_id is a string "123" and not a number type 123.
goodKey={"guild_id": "123"}
table.update_item(Key=goodKey, UpdateExpression="SET ...")
The docs have a full update_item example.

How to create a log-based alert using the Monitoring API?

The documentation says there is a link to an example of creating a log-based alert using the Monitoring API, but there is only an example of how to create one using the UI.
I am having trouble building a new AlertPolicy using the Java Monitoring API because there does not seem to be a condition builder for log-based alerts (using a log query filter). I don't think I should be using the Absent, Threshold, or MonitoringQueryLanguage options. How can I build the correct condition?
The three condition types MetricAbsence, MetricThreshold, and MonitoringQueryLanguageCondition are conditions for metric-based alerting policies. Since we want to create a log-based alerting policy, per the documentation on conditions for log-based alerting policies, the condition type must be LogMatch.
The AlertPolicy.Condition.Builder class has methods to set the metric-based condition types, but none for the log-based condition type, i.e., LogMatch. Note that log-based alerting is a pre-GA (Preview) feature per the public docs, so the various client libraries may not support it yet.
However, when creating a new alert policy via projects.alertPolicies.create, we can add a condition of type conditionMatchedLog.
So it is recommended to use the UI or the above API, not the client library, until it supports this condition type.
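For reference, a rough sketch of what the projects.alertPolicies.create request body could look like per the REST reference (display names and the filter are placeholders; log-based policies also require alertStrategy.notificationRateLimit):
{
  "displayName": "Example log-based alert",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Log match condition",
      "conditionMatchedLog": { "filter": "severity = \"NOTICE\"" }
    }
  ],
  "alertStrategy": {
    "notificationRateLimit": { "period": "300s" }
  }
}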
As Ashish pointed out, this is not currently available in the client library. Hopefully it will be added soon, but for the time being, for anyone who wants to add a log-based alert condition (conditionMatchedLog) using the Java client library, this is what the code looks like. It builds the condition as an UnknownFieldSet, using the proto field numbers from the AlertPolicy definition.
public static UnknownFieldSet createConditionMatchedLog(String filter) {
    // filter = "\nseverity = \"NOTICE\""; example filter

    // The filter string, length-delimited (field 1 of ConditionMatchedLog is "filter").
    UnknownFieldSet.Field logMatchString =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(ByteString.copyFromUtf8(filter))
            .build();
    UnknownFieldSet logMatchFilter =
        UnknownFieldSet.newBuilder()
            .addField(1, logMatchString)
            .build();

    // Wrap the serialized ConditionMatchedLog message itself.
    UnknownFieldSet.Field logMatchFilterCombine =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(logMatchFilter.toByteString())
            .build();

    // Attach it as field 20 of AlertPolicy.Condition (condition_matched_log).
    UnknownFieldSet logMatchCondition =
        UnknownFieldSet.newBuilder()
            .addField(20, logMatchFilterCombine)
            .build();
    return logMatchCondition;
}
When constructing the actual alert, you also need to define the alert strategy.
public static UnknownFieldSet createAlertStrategyFieldSet(long periodTime) {
    // Build the alertStrategy field, since log-based alert policies require one.
    // The nesting below mirrors AlertStrategy.notification_rate_limit.period:
    // Duration.seconds (field 1) -> NotificationRateLimit.period (field 1)
    // -> AlertStrategy.notification_rate_limit (field 1) -> AlertPolicy.alert_strategy (field 21).
    UnknownFieldSet.Field period =
        UnknownFieldSet.Field.newBuilder()
            .addVarint(periodTime) // must be a long
            .build();
    UnknownFieldSet periodSet =
        UnknownFieldSet.newBuilder()
            .addField(1, period)
            .build();
    UnknownFieldSet.Field periodSetField =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSet.toByteString())
            .build();
    UnknownFieldSet periodSetFieldUp =
        UnknownFieldSet.newBuilder()
            .addField(1, periodSetField)
            .build();
    UnknownFieldSet.Field periodSetField2 =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSetFieldUp.toByteString())
            .build();
    UnknownFieldSet periodSetFieldUp2 =
        UnknownFieldSet.newBuilder()
            .addField(1, periodSetField2)
            .build();
    UnknownFieldSet.Field periodSetField3 =
        UnknownFieldSet.Field.newBuilder()
            .addLengthDelimited(periodSetFieldUp2.toByteString())
            .build();
    UnknownFieldSet alertStrategy =
        UnknownFieldSet.newBuilder()
            .addField(21, periodSetField3)
            .build();
    return alertStrategy;
}
Putting it together
UnknownFieldSet conditionMatchedLog = createConditionMatchedLog(filter);

// Construct the Condition object
AlertPolicy.Condition alertPolicyCondition =
    AlertPolicy.Condition.newBuilder()
        .setDisplayName(med.getConditions().get(index).getDisplayName())
        .setUnknownFields(conditionMatchedLog)
        .build();

// Build the alertStrategy field, since log-based alert policies require one
UnknownFieldSet alertStrategy = createAlertStrategyFieldSet(
    convertTimePeriodFromString(med.getAlertStrategy().getNotificationRateLimit().getPeriod()));

// Build the log-based alert policy
AlertPolicy alertPolicy =
    AlertPolicy.newBuilder()
        .setDisplayName(med.getDisplayName())
        .addConditions(alertPolicyCondition)
        .setCombiner(AlertPolicy.ConditionCombinerType.OR)
        .setEnabled(BoolValue.newBuilder().setValue(true).build())
        .setUnknownFields(alertStrategy)
        .build();

Using AmazonSQSExtendedClient with AmazonSQSAsync

So, the way I understand it, payloads over 256 KB sent through AmazonSQSExtendedClient get stored in a temp S3 bucket (see the guide here). I'm having a bit of trouble implementing this with AmazonSQSAsync. Because of the way the current system is set up, I'm currently using AmazonSQSAsync, like so:
@Bean
@Primary
AmazonSQSAsync amazonSQSAsync() {
    return AmazonSQSAsyncClientBuilder
        .standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                endpoint_url,
                signin_region))
        .build()
}
Now I want to replace this with the extended client to handle payloads that are over 256 KB. I have something like this so far:
@Bean
@Primary
AmazonSQS amazonSQSWithExtendedClient() {
    final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient()

    final BucketLifecycleConfiguration.Rule expirationRule =
        new BucketLifecycleConfiguration.Rule()
    expirationRule.withExpirationInDays(14).withStatus("Enabled")
    final BucketLifecycleConfiguration lifecycleConfig =
        new BucketLifecycleConfiguration().withRules(expirationRule)
    s3.setBucketLifecycleConfiguration(temp_bucket, lifecycleConfig)

    // Keep this part
    final ExtendedClientConfiguration extendedClientConfig =
        new ExtendedClientConfiguration()
            .withLargePayloadSupportEnabled(s3, temp_bucket)
    final AmazonSQS sqsExtended = new AmazonSQSExtendedClient(AmazonSQSClientBuilder
        .standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration(
                endpoint_url,
                signin_region))
        .build(), extendedClientConfig)
    return sqsExtended
}
Is there any way I can make the extended client work with AmazonSQSAsync, or at least some workaround?
If you configure your JMS bean, you can solve this kind of problem.
Do something like this: https://craftingjava.com/blog/large-sqs-messages-jms-spring-boot/
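A minimal sketch of the idea from that post: wrap the extended (synchronous) client in an SQSConnectionFactory from amazon-sqs-java-messaging-lib, and let JMS listeners provide the asynchronous consumption (the extendedClientConfig from the question is assumed):
import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Extended client as before (S3 offloading for payloads over 256 KB).
AmazonSQS sqsExtended = new AmazonSQSExtendedClient(
        AmazonSQSClientBuilder.defaultClient(), extendedClientConfig);

// JMS connection factory backed by the extended client; JMS message
// listeners then give you async-style consumption without AmazonSQSAsync.
SQSConnectionFactory connectionFactory =
        new SQSConnectionFactory(new ProviderConfiguration(), sqsExtended);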

DynamoDB: getting table description null

I need to run a query on DynamoDB.
This is the code I have so far:
AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(creds);
client.withRegion(Regions.US_WEST_2);
DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient(creds));
Table table = dynamoDB.getTable("dev");
QuerySpec spec = new QuerySpec().withKeyConditionExpression("tableKey = :none.json");
ItemCollection<QueryOutcome> items = table.query(spec);
System.out.println(table);
The returned value of table is {dev: null}, which means that the description is null.
It's important to say that when I use the AWS CLI with the command aws dynamodb list-tables, I get a result with all the tables, but making the same call in my code, dynamoDB.listTables(), retrieves an empty list.
Is there something that I'm doing wrong?
Do I need to define some more credentials before using the DDB API?
I was getting the same problem and landed here looking for a solution. As mentioned in the Javadoc of getDescription:
Returns the table description; or null if the table description has not yet been described via {@link #describe()}. No network call.
Initially the description is set to null. After the first call to describe(), which makes a network call, the description gets set, and getDescription can be used after that.
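So for the code in the question, calling describe() once before reading the description fixes the {dev: null} output:
Table table = dynamoDB.getTable("dev");
TableDescription description = table.describe(); // makes the DescribeTable network call
System.out.println(description.getTableStatus()); // description is now populated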