Using AmazonSQSExtendedClient with AmazonSQSAsync

The way I understand it, when using AmazonSQSExtendedClient, payloads over 256 KB get stored in a temp bucket (see the guide here). I'm having a bit of trouble implementing this with AmazonSQSAsync. Because of the way the current system is set up, I'm currently using AmazonSQSAsync, like so:
@Bean
@Primary
AmazonSQSAsync amazonSQSAsync() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(
                            endpoint_url,
                            signin_region))
            .build()
}
Now I want to replace this with the extended client to handle payloads over 256 KB. I have something like this so far:
@Bean
@Primary
AmazonSQS amazonSQSWithExtendedClient() {
    final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient()
    final BucketLifecycleConfiguration.Rule expirationRule =
            new BucketLifecycleConfiguration.Rule()
    expirationRule.withExpirationInDays(14).withStatus("Enabled")
    final BucketLifecycleConfiguration lifecycleConfig =
            new BucketLifecycleConfiguration().withRules(expirationRule)
    s3.setBucketLifecycleConfiguration(temp_bucket, lifecycleConfig)
    // Keep this part
    final ExtendedClientConfiguration extendedClientConfig =
            new ExtendedClientConfiguration()
                    .withLargePayloadSupportEnabled(s3, temp_bucket)
    final AmazonSQS sqsExtended = new AmazonSQSExtendedClient(AmazonSQSClientBuilder
            .standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(
                            endpoint_url,
                            signin_region))
            .build(), extendedClientConfig)
    return sqsExtended
}
Is there any way I can make the extended client work with AmazonSQSAsync, or is there at least a workaround?

If you configure a JMS bean, you can solve this kind of problem.
Do something like this: https://craftingjava.com/blog/large-sqs-messages-jms-spring-boot/
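A minimal sketch of that wiring, assuming the amazon-sqs-java-messaging-lib dependency is on the classpath: SQSConnectionFactory accepts any AmazonSQS implementation, including the extended client bean from the question (the bean and parameter names are taken from it).

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQS;
import org.springframework.context.annotation.Bean;

// Sketch: expose the extended client through a JMS ConnectionFactory so
// Spring JMS listeners consume large (S3-backed) payloads transparently.
@Bean
SQSConnectionFactory sqsConnectionFactory(AmazonSQS amazonSQSWithExtendedClient) {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSWithExtendedClient);
}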

Related

Update attributes using UpdateItemEnhancedRequest - DynamoDbEnhancedClient

As per the DynamoDB Java SDK v2, the update operation can be performed as:
DynamoDbTable<Customer> mappedTable = enhancedClient.table("Customer", TableSchema.fromBean(Customer.class));
Key key = Key.builder()
        .partitionValue(keyVal)
        .build();
Customer customerRec = mappedTable.getItem(r -> r.key(key));
customerRec.setEmail(email);
mappedTable.updateItem(customerRec);
1. Shouldn't this cause two calls to DynamoDB?
2. What if, after fetching the record and before the updateItem call, another thread updates the record? We would have to put it into a transaction as well.
Although there is another way, using UpdateItemEnhancedRequest:
final var request = UpdateItemEnhancedRequest.builder(Customer.class)
        .item(updatedCustomerObj)
        .ignoreNulls(Boolean.TRUE)
        .build();
mappedTable.updateItem(request);
but this would require using ignoreNulls(TRUE), and it will not handle updates where a null value is to be set.
What is the optimal way to perform an update operation using the enhanced client?
For 1:
I don't like this part of the SDK, but the "non-enhanced" client can update without first querying for the entity.
UpdateItemRequest updateItemRequest = UpdateItemRequest
        .builder()
        .tableName("Customer")
        .key(
                Map.of(
                        "PK", AttributeValue.builder().s("pk").build(),
                        "SK", AttributeValue.builder().s("sk").build()
                ))
        .attributeUpdates(
                Map.of(
                        "email", AttributeValueUpdate.builder()
                                .action(AttributeAction.PUT)
                                .value(AttributeValue.builder().s("new-email@gmail.com").build())
                                .build()
                ))
        .build();
client.updateItem(updateItemRequest);
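For what it's worth, attributeUpdates is the legacy update parameter in the v2 SDK; a sketch of the same single-call update written with an update expression instead (same placeholder key and attribute names as above):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

// Same update without reading the item first, via an UpdateExpression.
DynamoDbClient client = DynamoDbClient.create();
UpdateItemRequest request = UpdateItemRequest.builder()
        .tableName("Customer")
        .key(Map.of(
                "PK", AttributeValue.builder().s("pk").build(),
                "SK", AttributeValue.builder().s("sk").build()))
        .updateExpression("SET email = :email")
        .expressionAttributeValues(Map.of(
                ":email", AttributeValue.builder().s("new-email@gmail.com").build()))
        .build();
client.updateItem(request);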
For 2:
Yes, you are absolutely right: that can happen and will lead to data inconsistency. To avoid it you should use a conditional write/update with a version. Luckily there is a Java annotation for this:
@DynamoDBVersionAttribute
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version; }
More info here
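Note that @DynamoDBVersionAttribute is the v1 mapper annotation; for the v2 enhanced client used in the question, the equivalent is @DynamoDbVersionAttribute. A sketch, assuming a @DynamoDbBean-mapped Customer:

import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbVersionAttribute;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;

@DynamoDbBean
public class Customer {
    private Long version;

    // The enhanced client's default VersionedRecordExtension adds a condition
    // on this attribute, so a concurrent writer's change fails the check
    // instead of being silently overwritten.
    @DynamoDbVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}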

Lettuce API getting into periodic TimeoutException Issues

We have a 10-node AWS Redis cluster (5 masters, 5 read replicas) and use the Lettuce API with a non-pooled configuration and async calls. Almost once a week we run into an issue where we get continuous TimeoutExceptions for a few minutes. We suspected a network issue, but the networking team has found no problem with the network. What are possible solutions?
private LettuceConfigWithoutPool(RedisClusterConfig pool) {
    if (lettuceConfigWithoutPoolInstance != null) {
        throw new RuntimeException("Use getInstance() method to get the single instance of this class.");
    }
    List<RedisURI> redisURIS = new RedisURIBuilder.Builder(pool)
            .withPassword()
            .withTLS()
            .build();
    ClusterTopologyRefreshOptions clusterTopologyRefreshOptions = new ClusterTopologyBuilder.Builder(pool)
            .setAdaptiveRefreshTriggersTimeoutInMinutes()
            .build();
    ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()
            .topologyRefreshOptions(clusterTopologyRefreshOptions)
            .build();
    RedisClusterClient redisClusterClient = ClusterClientProvider.buildClusterClient(redisURIS, clusterClientOptions);
    StatefulRedisClusterConnection<String, Object> statefulRedisClusterConnection = redisClusterClient.connect(new SerializedObjectCodec());
    statefulRedisClusterConnection.setReadFrom(ReadFromArgumentProvider.getReadFromArgument(pool.getReadFrom()));
    this.command = statefulRedisClusterConnection.async();
}
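For reference, a minimal sketch of what the topology-refresh setup behind those custom builders might look like in plain Lettuce (the durations are illustrative assumptions, not the questioner's values):

import java.time.Duration;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;

class LettuceRefreshSketch {
    static ClusterClientOptions clusterClientOptions() {
        // Adaptive refresh lets the client react to failovers and MOVED/ASK
        // redirects instead of timing out against a stale topology view.
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofMinutes(10))         // illustrative value
                .enableAllAdaptiveRefreshTriggers()
                .adaptiveRefreshTriggersTimeout(Duration.ofMinutes(1)) // illustrative value
                .build();
        return ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .build();
    }
}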

S3 CopyObjectRequest between regions

I'm trying to copy some objects between 2 S3 buckets that are in different regions.
I have this:
static void PutToDestination(string filename)
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsAccessKeyId, awsSecretKey);
    var client = new Amazon.S3.AmazonS3Client(credentials, Amazon.RegionEndpoint.GetBySystemName(awsS3RegionNameSource));
    CopyObjectRequest request = new CopyObjectRequest();
    request.SourceKey = filename;
    request.DestinationKey = filename;
    request.SourceBucket = awsS3BucketNameSource;
    request.DestinationBucket = awsS3BucketNameDest;
    try
    {
        CopyObjectResponse response = client.CopyObject(request);
    }
    catch (AmazonS3Exception x)
    {
        Console.WriteLine(x);
    }
}
I get "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
There doesn't seem to be a way to set separate endpoints for source and destination.
Is there a different method I should look at?
Thanks
Did you try this? https://github.com/aws/aws-sdk-java-v2/issues/2212 (provide the destination region when building the S3 client). Worked for me :)
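In other words, build the client for the destination bucket's region, since the copy is served by the destination endpoint. A minimal Java sketch of that idea (region, bucket names, and key are placeholders):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

class CrossRegionCopySketch {
    static void copy(String key) {
        // The copy request goes to the destination bucket's endpoint,
        // so the client region must match the destination, not the source.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-1") // placeholder: destination bucket's region
                .build();
        s3.copyObject(new CopyObjectRequest("source-bucket", key, "dest-bucket", key));
    }
}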
I think if you don't specify the region explicitly, then a cross-region copy should work.
See the documentation.
However, acceleration will not work, and the copy will be
COPY = GET + PUT
Here is an excerpt from the documentation:
Important
Amazon S3 Transfer Acceleration does not support cross-region copies.
In your code, you are specifying the region like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .withRegion(clientRegion)
        .build();
Instead, initialize the S3 client without a region, like below:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider()).build();

Publish a json message to AWS SNS topic using C#

I am trying to publish a JSON message to an AWS SNS topic from my C# application using the AWS SDK. The message is being populated in string format, and the message attribute field is not populated.
Code sample is as below:
var snsClient = new AmazonSimpleNotificationServiceClient(accessId, secretKey, RegionEndpoint.USEast1);
PublishRequest publishReq = new PublishRequest()
{
    TargetArn = topicARN,
    MessageStructure = "json",
    Message = JsonConvert.SerializeObject(message)
};
var msgAttributes = new Dictionary<string, MessageAttributeValue>();
var msgAttribute = new MessageAttributeValue();
msgAttribute.DataType = "String";
msgAttribute.StringValue = "123";
msgAttributes.Add("Objectcd", msgAttribute);
publishReq.MessageAttributes = msgAttributes;
PublishResponse response = snsClient.Publish(publishReq);
Older question, but answering as I came across it when dealing with a similar issue.
When you set the MessageStructure to "json", the JSON must contain at least a top-level key of "default" whose value is a string.
So the JSON needs to look like:
{
    "default" : "my message"
}
My solution looks something like:
var messageDict = new Dictionary<string, object>();
messageDict["default"] = "my message";
PublishRequest publishReq = new PublishRequest()
{
    TargetArn = topicARN,
    MessageStructure = "json",
    Message = JsonConvert.SerializeObject(messageDict)
};
// if the message is itself a JSON object
// then
messageDict["default"] = JsonConvert.SerializeObject(myMessageObject);
I'm using PublishAsync on v3.
From the documentation
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SNS/TPublishRequest.html
Message structure
Gets and sets the property MessageStructure.
Set MessageStructure to json if you want to send a different message for each protocol. For example, using one publish action, you can send a short message to your SMS subscribers and a longer message to your email subscribers. If you set MessageStructure to json, the value of the Message parameter must:
be a syntactically valid JSON object; and
contain at least a top-level JSON key of "default" with a value that is a string.
You can define other top-level keys that define the message you want to send to a specific transport protocol (e.g., "http").
Valid value: json
Great coincidence!
I was just busy writing a C# implementation to publish a message to SNS when I stumbled upon this post. Hopefully this helps you.
The messageBody argument we pass down to PublishMessageAsync is a string; it can be serialized JSON, for example.
public class SnsClient : ISnsClient
{
    private readonly IAmazonSimpleNotificationService _snsClient;
    private readonly SnsOptions _snsOptions; // You can inject any options you want here.

    public SnsClient(IOptions<SnsOptions> snsOptions, // I'm using the IOptions pattern as I have the TopicARN defined in appsettings.json
        IAmazonSimpleNotificationService snsClient)
    {
        _snsOptions = snsOptions.Value;
        _snsClient = snsClient;
    }

    public async Task<PublishResponse> PublishMessageAsync(string messageBody)
    {
        return await _snsClient.PublishAsync(new PublishRequest
        {
            TopicArn = _snsOptions.TopicArn,
            Message = messageBody
        });
    }
}
Also note the above setup uses dependency injection, so it requires you to set up an ISnsClient and register an instance when bootstrapping the application, something like the following:
services.TryAddSingleton<ISnsClient, SnsClient>();

How to iterate over S3 file keys via CompletableFuture in AWS SDK 2.0?

Consider an example of the sync version with the old AWS SDK:
public void syncIterateObjects() {
    AmazonS3 s3Client = null;
    String marker = null;
    do {
        ObjectListing objects = s3Client.listObjects(
                new ListObjectsRequest()
                        .withBucketName("bucket")
                        .withPrefix("prefix")
                        .withMarker(marker)
                        .withDelimiter("/")
                        .withMaxKeys(100)
        );
        marker = objects.getNextMarker();
    } while (marker != null);
}
Everything is clear: the do/while does the work. Now consider the async example with AWS SDK 2.0:
public void asyncIterateObjects() {
    S3AsyncClient client = S3AsyncClient.builder().build();
    final CompletableFuture<ListObjectsV2Response> response = client.listObjectsV2(ListObjectsV2Request.builder()
            .delimiter("/")
            .bucket("bucket")
            .prefix("prefix")
            .build())
            .thenApply(Function.identity());
    // what to do next ???
}
OK, I got a CompletableFuture, but how do I run a cycle to pass the marker (nextContinuationToken in AWS SDK 2.0) between the previous and next future?
You have only one future; notice the type is a future list of objects.
Now you have to decide if you want to get the future or apply further transformations to it before getting it. After you get the future, you can use the same method you used before with the while loop.
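Alternatively, a minimal sketch using the v2 SDK's async paginator, which passes the continuation token between pages for you (bucket, prefix, and delimiter values are the question's placeholders):

import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Publisher;

public void asyncIterateObjectsWithPaginator() {
    S3AsyncClient client = S3AsyncClient.builder().build();
    ListObjectsV2Publisher publisher = client.listObjectsV2Paginator(ListObjectsV2Request.builder()
            .bucket("bucket")
            .prefix("prefix")
            .delimiter("/")
            .build());
    // subscribe() drives the pagination: the publisher requests the next page
    // (passing nextContinuationToken for you) as each response is consumed.
    CompletableFuture<Void> done = publisher.subscribe(response ->
            response.contents().forEach(obj -> System.out.println(obj.key())));
    done.join(); // block here only for demonstration purposes
}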