Let's say that for some reason I want to pause consuming messages from my SQS queue, for example because a service on the client side will be down for maintenance. Can I pause from my SQS listener?
@SqsListener(value = "${aws.sqs.listener.myqueue}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void processMessage(MyObj myObj) {
    // do something with myObj
    // if the db is down, throw an error and pause reading from SQS
}
I have run across some posts that suggest a kill switch, but this is not ideal for production since it keeps consuming the messages and reposting them to the queue.
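One option, sketched below under the assumption that the listener is backed by Spring Cloud AWS's SimpleMessageListenerContainer (the logical queue name "myqueue" is a placeholder), is to stop polling for that queue and restart it once the downstream service is healthy again:

@Autowired
private SimpleMessageListenerContainer listenerContainer;

// Stop polling the queue; messages already received finish processing,
// but nothing new is fetched until start() is called again.
public void pauseQueue() {
    listenerContainer.stop("myqueue");
}

// Resume polling once the client-side service is back up.
public void resumeQueue() {
    listenerContainer.start("myqueue");
}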
I have the below function handler code.
public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
{
    foreach (var message in evnt.Records)
    {
        // Do work
        // If a message is processed successfully, delete the SQS message
        // If a message fails to process, throw an exception
    }
}
It is very confusing: I don't have validation logic guarding record creation in my database (e.g. an "already exists" check), yet I see database records with the same ID created twice, meaning the same message was processed more than once!
In my code, I delete the message after successful processing, or throw an exception upon failure, assuming all remaining ordered messages will simply go back to the queue, visible for any consumer to reprocess. But I can now see the code failing because the same records are created twice for an event that succeeded.
Is AWS SQS FIFO exactly-once delivery, or am I missing some kind of retry-processing policy?
This is how I delete the message upon successful processing.
var deleteMessageRequest = new DeleteMessageRequest
{
    QueueUrl = _sqsQueueUrl,
    ReceiptHandle = message.ReceiptHandle
};
var deleteMessageResponse =
    await _amazonSqsClient.DeleteMessageAsync(deleteMessageRequest, cancellationToken);
if (deleteMessageResponse.HttpStatusCode != HttpStatusCode.OK)
{
    throw new AggregateSqsProgramEntryPointException(
        $"Amazon SQS DELETE ERROR: {deleteMessageResponse.HttpStatusCode}\r\nQueueURL: {_sqsQueueUrl}\r\nReceiptHandle: {message.ReceiptHandle}");
}
The documentation is very explicit about this:
"FIFO queues provide exactly-once processing, which means that each
message is delivered once and remains available until a consumer
processes it and deletes it."
They also mention protecting your code from retries, which seems contradictory for an exactly-once delivery queue type, but then I see the below in their documentation:
Exactly-once processing.
Unlike standard queues, FIFO queues don't
introduce duplicate messages. FIFO queues help you avoid sending
duplicates to a queue. If you retry the SendMessage action within the
5-minute deduplication interval, Amazon SQS doesn't introduce any
duplicates into the queue.
Consumer retries (how's this possible)?
If the consumer detects a failed ReceiveMessage action, it can retry
as many times as necessary, using the same receive request attempt ID.
Assuming that the consumer receives at least one acknowledgement
before the visibility timeout expires, multiple retries don't affect
the ordering of messages.
This was entirely our application's error, in how we treat the event-sourcing aggregate endpoints, which are not thread-safe.
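For anyone hitting the same thing: the exactly-once guarantee quoted above covers producer-side deduplication; on the consumer side a message can still be redelivered if processing outlives the visibility timeout, so the consumer should be idempotent. A minimal sketch (in Java for brevity; the processed_messages table and handler are hypothetical) that uses a primary-key constraint so a redelivered message cannot create a second record:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

// Hypothetical guard table: processed_messages(message_id VARCHAR PRIMARY KEY)
void handleOnce(Connection connection, String messageId, Runnable businessLogic) throws Exception {
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO processed_messages (message_id) VALUES (?)")) {
        ps.setString(1, messageId);
        ps.executeUpdate();   // throws on a duplicate delivery
        businessLogic.run();  // runs at most once per message ID
    } catch (SQLIntegrityConstraintViolationException alreadyProcessed) {
        // Duplicate delivery: skip processing, still delete the message.
    }
}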
How do I configure visibility timeout so that a message in SQS can be read again?
I have Amazon SQS as a message queue. Messages are being sent by multiple applications. I am now using a Spring listener to read messages from the queue as below:
@Bean
public DefaultMessageListenerContainer jmsListenerContainer() {
    SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
            .withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain())
            .withEndpoint(environment.getProperty("aws_sqs_url"))
            .withNumberOfMessagesToPrefetch(10)
            .build();
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(sqsConnectionFactory);
    dmlc.setDestinationName(environment.getProperty("aws_sqs_queue"));
    dmlc.setMessageListener(queueListener);
    return dmlc;
}
The queueListener class implements javax.jms.MessageListener and handles messages in its onMessage() method.
I have also configured a scheduler to read the queue again after a certain period of time. It uses receiveMessage() of com.amazonaws.services.sqs.AmazonSQS.
As soon as a message reaches the queue, the listener reads it. I want to read the message again after a certain period of time, i.e. through the scheduler, but once a message is read by the listener it does not become visible or get read again. As per Amazon's SQS developer guide, the default visibility timeout is 30 seconds, but the message is not becoming visible even after 30 seconds. I have tried setting a custom visibility timeout in the SQS queue parameter console, but it's not working.
For information, nobody is deleting the message from the queue.
I only have a passing familiarity with Amazon SQS, but I can say that typically in messaging use cases, when a consumer receives and acknowledges a message, that message is removed (i.e. deleted) from the queue. Given that your Spring application is receiving the message, I would suspect it is also acknowledging it and therefore removing it from the queue, which prevents your scheduler from receiving it later. Note that Spring's DefaultMessageListenerContainer uses JMS' AUTO_ACKNOWLEDGE mode by default.
This documentation from Amazon essentially states that when a message is acknowledged in a JMS context, it is "deleted from the underlying Amazon SQS queue."
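If you want the message to survive until you explicitly acknowledge it, one option (a minimal sketch, assuming the DefaultMessageListenerContainer setup shown above) is to switch the container from the default AUTO_ACKNOWLEDGE to CLIENT_ACKNOWLEDGE, so a message is only deleted once the listener calls acknowledge() on it:

import javax.jms.Message;
import javax.jms.Session;

// In the container configuration: require an explicit acknowledge
// instead of auto-acknowledging (and thus deleting) on receipt.
dmlc.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);

// In the listener: acknowledge only after successful processing.
public void onMessage(Message message) {
    try {
        // ... process the message ...
        message.acknowledge(); // deletes it from the underlying SQS queue
    } catch (Exception e) {
        // Not acknowledged: the message becomes visible again after the
        // visibility timeout and can then be received by the scheduler.
    }
}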
I have an SQS queue set up along with a consumer and a producer. I have used a FIFO queue, and once my consumer consumes a message it deletes it from the queue, and only then does my code perform some operations; if anything fails, I lose the message. I want the message to persist in the queue and be deleted only once I give an acknowledgement. Please help me with how to acknowledge and delete based on that acknowledgement. Here is my consumer code:
@SqsListener(value = "${queueName}")
public void receiveMessage(final msgDTO msgDTO,
        @Header("SenderId") final String senderId) {
    log.info("Received message: {}, having SenderId: {}", msgDTO, senderId);
    // do some operation
    if (operationSuccess) {
        // TODO: ACKNOWLEDGEMENT
    }
}
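With Spring Cloud AWS you can get exactly this behaviour by setting the deletion policy to NEVER and acknowledging explicitly, as the answer to the polling question further below also shows. A minimal sketch (assuming the Acknowledgment parameter type from spring-cloud-aws-messaging):

@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receiveMessage(final msgDTO msgDTO,
        @Header("SenderId") final String senderId,
        Acknowledgment acknowledgment) throws Exception {
    log.info("Received message: {}, having SenderId: {}", msgDTO, senderId);
    // do some operation
    if (operationSuccess) {
        // Only an explicit acknowledge deletes the message from the queue.
        acknowledgment.acknowledge().get();
    }
    // Otherwise do nothing: the message becomes visible again after the
    // visibility timeout and will be redelivered.
}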
Listening to an AWS SQS queue, using Spring Cloud as follows:
@SqsListener(value = "${queue.name}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void queueListener(String message, @Headers Map<String, Object> sqsHeaders) {
    // code
}
Spring config:
<aws-messaging:annotation-driven-queue-listener
max-number-of-messages="10" wait-time-out="20" visibility-timeout="3600"
amazon-sqs="awsSqsClient" />
AwsSqsClient:
@Bean
public com.amazonaws.services.sqs.AmazonSQSAsyncClient awsSqsClient() {
    ExecutorService executorService = Executors.newFixedThreadPool(10);
    return new AmazonSQSAsyncClient(new DefaultAWSCredentialsProviderChain(), executorService);
}
This works fine.
I configured 10 threads to process these messages in the SQS client, as you can see in the code above. This also works fine: at any point in time, at most 10 messages are being processed.
The issue is that I couldn't figure out a way to control the polling interval. By default, Spring polls again only once all threads are free.
i.e. consider the following example:
Around 3 messages are delivered to the queue.
Spring polls the queue and gets the 3 messages.
The 3 messages are processed, each taking roughly 20 minutes.
In the meantime around 25 more messages are delivered to the queue. Spring does NOT poll the queue until all 3 messages delivered earlier are completed. Essentially, per the example above, Spring polls again only after 20 minutes, even though there are 7 threads still free!
Any idea how we can control this polling? I.e. polling should start when any thread is free, not wait until all threads become free.
Your listener can load messages into your Spring app and submit them to another thread pool along with Acknowledgement and Visibility objects (if you want to control both).
Once messages are submitted to this thread pool, your listener can load more data. You can control the concurrency by adjusting thread pool settings.
Your listener's method signature will be similar to the one below:
@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(YourCustomPOJO pojo,
        @Headers Map<String, Object> headers,
        Acknowledgment acknowledgment,
        Visibility visibility) throws Exception {
    // ... send pojo to a worker thread and return
}
A worker thread will then acknowledge successful processing:
acknowledgment.acknowledge().get();
Make sure your message visibility timeout is set to a value greater than your longest processing time (and use some timeout to limit execution time).
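For illustration, a minimal sketch of this hand-off (the pool size, process() method, and POJO type are placeholders):

private final ExecutorService workers = Executors.newFixedThreadPool(10);

@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(YourCustomPOJO pojo, Acknowledgment acknowledgment) {
    // Hand the message off so the listener thread can poll again immediately.
    workers.submit(() -> {
        try {
            process(pojo); // hypothetical business logic
            acknowledgment.acknowledge().get(); // delete from queue on success
        } catch (Exception e) {
            // No acknowledge: the message becomes visible again after the
            // visibility timeout and will be redelivered.
        }
    });
}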
I'm trying to set up a Laravel 4.2 queue using AWS SQS and an EB Worker environment. I'm pushing the job onto the queue from another server, and I want the worker environment to execute it. But so far it looks like the worker tries to execute it but for some reason gets a 405 error in the access log...
I'm trying to get a very simple test working... On the worker environment I have a pretty much clean Laravel installation, just with the queue config and this class:
class TestQueue {
    public function fire($job, $data)
    {
        File::append(storage_path().'/sqs_push.txt', $data['date']);
        $job->delete();
    }
}
Now on the main server, from where I want to push, I have this:
public function getTestQueue() {
    $data = ['date' => date('Y-m-d H:i:s')];
    $queue = \Queue::push('TestQueue', $data);
    var_dump($queue);
}
On the worker I have launched:
php artisan queue:listen
When I run that method, it adds the job to the SQS queue (I can see it in the SQS console) and the worker tries to execute it, but all I see is some 405 errors in the access logs...
Maybe I'm doing something wrong in my queue setup? Can anyone help me, please?
Error 405 stands for "Method Not Allowed", i.e. the specified method is not allowed against the requested resource. Since you have mentioned that the main server successfully sends the messages to SQS (you have verified it via the console), I will provide a solution to implement a worker. This was taken from this repository on GitHub; have a look at the worker.php file.
$queue = new Queue(QUEUE_NAME, unserialize(AWS_CREDENTIALS));

// Continuously poll queue for new messages and process them.
while (true) {
    $message = $queue->receive();
    if ($message) {
        try {
            $message->process();
            $queue->delete($message);
        } catch (Exception $e) {
            $queue->release($message);
            echo $e->getMessage();
        }
    } else {
        // Wait 20 seconds if no jobs in queue to minimise requests to AWS API
        sleep(20);
    }
}