Glassfish - JMS Request/Response - message doesn't go on queue - web-services

I'm trying to implement a web service in Glassfish 3.1.2, using the included OpenMQ JMS broker, that performs a synchronous JMS request/response using a temporary queue for the response. It sends a message that is picked up off the main queue by a remote client job (which runs outside the container), and receives the response back on the temporary queue.
In a basic Java POC, this works. But once I put the server-side code into the container, it doesn't work.
I turned off the job so that messages would just sit on the queue and not be picked up, and I watch the queue with QBrowser.
If I simply send the message from the producer, it gets onto the queue and can be read by the job.
But once I add the code to receive() the response, the message is no longer readable on the queue. QBrowser says that there is 1 message on the queue, but it is marked UnAck and the queue appears empty (i.e. the message is not readable).
connectionFactory and requestQueue are injected as @Resource from Glassfish. The main queue is defined in Glassfish.
Web Service innards:
connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(requestQueue);
producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
MyObject myObj = new MyObject();
Message message = session.createObjectMessage(myObj);
TemporaryQueue responseQueue = session.createTemporaryQueue();
MessageConsumer consumer = session.createConsumer(responseQueue);
message.setJMSReplyTo(responseQueue);
producer.send(message);
// if I comment out the next line, the message appears on the queue.
// If I leave it in, it will behave as described above.
Message response = consumer.receive();
I've tried various approaches, including separate connections and sessions, an asynchronous consumer, and a transacted session for the producer, but the transacted session only produced stack traces when trying to commit.
What am I missing to make this get to the queue properly?
Thanks in advance!
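For context, the remote job on the other side of this pattern would look roughly like the sketch below (class names and the standalone setup are assumptions, not the actual job code): it consumes from the request queue and replies to whatever destination JMSReplyTo points at.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Standalone responder sketch: receives a request and replies to its JMSReplyTo destination.
public class JobResponder {

    public void run(ConnectionFactory connectionFactory, Destination requestQueue) throws Exception {
        Connection connection = connectionFactory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        MessageConsumer consumer = session.createConsumer(requestQueue);
        Message request = consumer.receive();          // blocks until a request arrives

        // ... process the request ...

        Destination replyTo = request.getJMSReplyTo(); // the temporary queue created by the web service
        MessageProducer replyProducer = session.createProducer(replyTo);
        replyProducer.send(session.createTextMessage("done"));

        connection.close();
    }
}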
Edit: Domain.xml references for ConnectionFactory and Queue:
<connector-connection-pool description="Connection factory for job processing" name="jms/MyJobs"
resource-adapter-name="jmsra" connection-definition-name="javax.jms.ConnectionFactory"
transaction-support=""></connector-connection-pool>
<connector-resource pool-name="jms/MyJobs" jndi-name="jms/MyJobs"></connector-resource>
<admin-object-resource res-adapter="jmsra" res-type="javax.jms.Queue"
description="Queue to request a job process" jndi-name="jms/MyJobRequest">
<property name="Name" value="MyJobRequest"></property>
</admin-object-resource>
[...]
<resource-ref ref="jms/MyJobs"></resource-ref>
<resource-ref ref="jms/MyJobRequest"></resource-ref>

Turned out to be a transactional issue: inside the container the send was taking part in the ongoing transaction, so the message was not actually committed to the queue while receive() was still blocking.
Got around it by adding a new method:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Throwable.class)
private void sendMessage(MessageProducer producer, Message message) throws Exception {
    producer.send(message);
}
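For comparison, here is a sketch (an assumption, not the poster's actual code) of the same idea with an explicitly separate transaction, so the send is committed to the queue before the caller blocks on receive(); note that with Spring's proxy-based @Transactional, the annotation generally only takes effect on public methods invoked from outside the bean.

import javax.jms.Message;
import javax.jms.MessageProducer;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class JmsRequestSender {

    // Runs in its own transaction so the request is committed to the queue
    // before the caller starts blocking on the reply consumer.
    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Throwable.class)
    public void sendMessage(MessageProducer producer, Message message) throws Exception {
        producer.send(message);
    }
}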

Related

Is AWS SQS FIFO queue really exactly-once delivery

I have the below function handler code.
public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
{
    foreach (var message in evnt.Records)
    {
        // Do work
        // If a message is processed successfully, delete the SQS message
        // If a message fails to process, throw an exception
    }
}
It is very confusing: I don't have validation logic that checks whether a record already exists before creating it in my database, and I am seeing database records with the same ID created twice, meaning the same message was processed more than once!
In my code, I delete the message after successful processing or throw an exception upon failure, assuming that all remaining ordered messages simply go back to the queue, visible for any consumer to reprocess. But I can now see the code failing because the same records are created twice for an event that succeeded.
Is AWS SQS FIFO exactly-once delivery, or am I missing some kind of retry-processing policy?
This is how I delete the message upon successful processing.
var deleteMessageRequest = new DeleteMessageRequest
{
    QueueUrl = _sqsQueueUrl,
    ReceiptHandle = message.ReceiptHandle
};
var deleteMessageResponse =
    await _amazonSqsClient.DeleteMessageAsync(deleteMessageRequest, cancellationToken);
if (deleteMessageResponse.HttpStatusCode != HttpStatusCode.OK)
{
    throw new AggregateSqsProgramEntryPointException(
        $"Amazon SQS DELETE ERROR: {deleteMessageResponse.HttpStatusCode}\r\nQueueURL: {_sqsQueueUrl}\r\nReceiptHandle: {message.ReceiptHandle}");
}
The documentation is very explicit about this:
"FIFO queues provide exactly-once processing, which means that each
message is delivered once and remains available until a consumer
processes it and deletes it."
They also mention protecting your code against retries, which is confusing for a queue type that is supposed to deliver exactly once. Then I see the following in their documentation, which adds to the confusion.
Exactly-once processing.
Unlike standard queues, FIFO queues don't
introduce duplicate messages. FIFO queues help you avoid sending
duplicates to a queue. If you retry the SendMessage action within the
5-minute deduplication interval, Amazon SQS doesn't introduce any
duplicates into the queue.
Consumer retries (how's this possible)?
If the consumer detects a failed ReceiveMessage action, it can retry
as many times as necessary, using the same receive request attempt ID.
Assuming that the consumer receives at least one acknowledgement
before the visibility timeout expires, multiple retries don't affect
the ordering of messages.
This turned out to be entirely our application's error, in how we treat the event-sourcing aggregate endpoints, which are not thread-safe.
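As an aside, the SendMessage deduplication quoted above can be illustrated with a small sketch (AWS SDK for Java assumed; queue URL and IDs are placeholders): retrying the same send with the same deduplication ID within the 5-minute interval does not enqueue a second copy.

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class FifoSendExample {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        SendMessageRequest request = new SendMessageRequest()
                .withQueueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo") // placeholder
                .withMessageBody("{\"orderId\": 42}")
                .withMessageGroupId("orders")                // required for FIFO queues
                .withMessageDeduplicationId("order-42-v1");  // retries with the same ID are deduplicated

        sqs.sendMessage(request);
    }
}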

Visibility timeout in Amazon SQS not working

How do I configure visibility timeout so that a message in SQS can be read again?
I have Amazon SQS as a message queue. Messages are being sent by multiple applications. I am now using a Spring listener to read messages from the queue as below:
public DefaultMessageListenerContainer jmsListenerContainer() {
    SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
            .withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain())
            .withEndpoint(environment.getProperty("aws_sqs_url"))
            .withAWSCredentialsProvider(awsCredentialsProvider)
            .withNumberOfMessagesToPrefetch(10)
            .build();
    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(sqsConnectionFactory);
    dmlc.setDestinationName(environment.getProperty("aws_sqs_queue"));
    dmlc.setMessageListener(queueListener);
    return dmlc;
}
The class queueListener implements javax.jms.MessageListener which uses onMessage() method further.
I have also configured a scheduler to read the queue again after a certain period of time. It uses receiveMessage() of com.amazonaws.services.sqs.AmazonSQS.
As soon as a message reaches the queue, the listener reads it. I want to read the message again after a certain period of time, i.e. through the scheduler, but once a message has been read by the listener it never becomes visible or readable again. According to Amazon's SQS developer guide the default visibility timeout is 30 seconds, but the message does not become visible even after 30 seconds. I have tried setting a custom visibility timeout in the SQS queue parameter console, but it's not working.
For information, nobody is deleting the message from the queue.
I only have a passing familiarity with Amazon SQS, but I can say that typically in messaging use-cases when a consumer receives and acknowledges the message then that message is removed (i.e. deleted) from the queue. Given that your Spring application is receiving the message I would suspect it is also acknowledging the message and therefore removing it from the queue which prevents your scheduler from receiving it later. Note that Spring's DefaultMessageListenerContainer uses JMS' AUTO_ACKNOWLEDGE mode by default.
This documentation from Amazon essentially states that if a message is acknowledged in a JMS context it is "deleted from the underlying Amazon SQS queue."
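If the goal is to look at a message and have it become visible again later, one option is to take explicit control of acknowledgement. Below is a minimal sketch, assuming the Amazon SQS Java Messaging Library is used directly (the Spring listener container normally acknowledges the message for you once the listener returns successfully); an unacknowledged message should reappear after the visibility timeout expires. The queue name is a placeholder.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class PeekAtQueue {
    public static void main(String[] args) throws Exception {
        SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder()
                .withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain())
                .build();
        Connection connection = connectionFactory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the message is only deleted from SQS when acknowledge() is called
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("my-queue"); // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);

        Message message = consumer.receive(5000); // may be null if nothing arrived in time
        // Inspect the message but do NOT call message.acknowledge();
        // it becomes visible again once the queue's visibility timeout expires.

        connection.close();
    }
}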

Delete some messages from the AWS SQS queue before polling

I have a Node.js application that polls messages from an AWS SQS queue and processes them.
However, some of these messages are not relevant to my service, and they are filtered out and not processed.
I am wondering if I can do this filtering on the AWS side, before my application receives the irrelevant messages.
For example, if a message does not have the attribute data.name, it would be deleted before reaching my application.
Filtering these messages before they are sent to the queue is not possible (according to my client).
No, that is not possible without polling the messages themselves. You would need some other consumer polling the messages and returning the ones that meet your requirements to the queue (by not calling DeleteMessage with the received receipt handle), but that would be overkill in most cases, depending on the ratio of "good" to "bad" messages, and you would still have to process the "good" messages twice.
A better way would be to set up an additional consumer and two queues. The producer sends messages to the first queue, which is polled by a consumer whose sole purpose is to filter messages and forward the "good" ones to the second queue, which is then polled by your current consumer application. But again, this is more costly.
If you can't filter messages before they are sent to the queue, then filter them in your consuming application, or you will have to pay extra for this additional functionality.
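For completeness, a rough sketch of the filtering consumer in the two-queue setup (Java with the AWS SDK here purely for illustration; the queue URLs and the relevance check on data.name are placeholders):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class FilteringConsumer {
    private static final String INBOX_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inbox";       // placeholder
    private static final String FILTERED_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/filtered"; // placeholder

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        while (true) {
            ReceiveMessageRequest receive = new ReceiveMessageRequest(INBOX_URL)
                    .withMaxNumberOfMessages(10)
                    .withWaitTimeSeconds(20); // long polling

            for (Message message : sqs.receiveMessage(receive).getMessages()) {
                // Forward only "good" messages; the relevance check here is a placeholder.
                if (message.getBody().contains("\"name\"")) {
                    sqs.sendMessage(FILTERED_URL, message.getBody());
                }
                // Delete from the first queue either way so it is not re-polled.
                sqs.deleteMessage(new DeleteMessageRequest(INBOX_URL, message.getReceiptHandle()));
            }
        }
    }
}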

synchronous activemq webservice

I have a RESTful web service that sends a message through ActiveMQ and synchronously receives the response by creating a temporary listener within the same request.
The problem is that the listener waits for the response of the synchronous process but never dies. I need the listener to receive the response and then stop immediately, as soon as the web service request has been answered.
This is a serious problem, because a listener is created for every web service request and stays active, producing overhead.
The code in that link is not production grade - it's simply an example of how to make a "hello world" request/reply.
Here is some pseudo code that consumes the response in a blocking fashion and then closes the consumer:
MessageConsumer responseConsumer = session.createConsumer(tempDest);
Message response = responseConsumer.receive(waitTimeout);
// TODO handle the response message
responseConsumer.close();
Temp destinations in JMS are pretty slow anyway. You can instead use JMSCorrelationID and make the replies go to a "regular" queue handled by a single consumer for all replies. That way you need some thread-handling code to hand the message over to the web service thread, but it will be non-blocking and very fast.
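A rough sketch of the correlation-ID idea follows; note this shows the simpler selector-per-request variant rather than the single shared consumer with thread hand-off described above, and the queue names and surrounding connection setup are assumptions.

import java.util.UUID;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Assumes an existing, started Connection and a Session created from it.
public class CorrelationIdRequester {

    public Message request(Session session, Queue requestQueue, Queue replyQueue,
                           Message message, long waitTimeout) throws Exception {
        // Tag the request so the reply can be matched on the shared reply queue.
        String correlationId = UUID.randomUUID().toString();
        message.setJMSCorrelationID(correlationId);
        message.setJMSReplyTo(replyQueue);

        MessageProducer producer = session.createProducer(requestQueue);
        producer.send(message);
        producer.close();

        // Only receive the reply carrying our correlation ID.
        String selector = "JMSCorrelationID = '" + correlationId + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        try {
            return consumer.receive(waitTimeout); // null if nothing arrived in time
        } finally {
            consumer.close();
        }
    }
}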

Spray, Akka and Apache Kafka Producer

I am creating a simple REST API with Spray/Akka to receive a JSON message and pass it to an Apache Kafka producer. The Apache Kafka producer is a non-blocking API for sending messages to the Kafka broker and is thread-safe (it should be shared by all threads).
My basic architecture is the following (pseudo code) in the routing trait:
val myKafkaProducerActor = system.actorOf(Props[KafkaProducerActor])

val route = {
  path("message") {
    get {
      entity(as[String]) { message =>
        myKafkaProducerActor ! message
        complete(StatusCodes.OK)
      }
    }
  }
}
That is, I always use one single actor (myKafkaProducerActor) to forward the message, since that actor only performs very minimal checks (whether it is a JSON document at all) and hands the message over immediately to the non-blocking producer API.
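For reference, the non-blocking producer call the actor delegates to could look roughly like this sketch (Java Kafka client API; topic name and broker address are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedKafkaProducer {

    // A single KafkaProducer instance is thread-safe and meant to be shared.
    private final KafkaProducer<String, String> producer;

    public SharedKafkaProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    public void sendJson(String json) {
        // send() is asynchronous: it buffers the record and returns immediately;
        // the callback reports success or failure once the broker has answered.
        producer.send(new ProducerRecord<>("messages", json), (metadata, exception) -> {
            if (exception != null) {
                exception.printStackTrace();
            }
        });
    }
}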
My concern is now:
Does it make sense at all to forward the message to a separate actor? (The Kafka producer is non-blocking; I have only separated it out because of the validity checks, which are currently cheap anyway.)
How does Akka's default message reliability (at-most-once delivery) affect Spray? Is it only a theoretical concern, since the message is forwarded within the same JVM? Is it better not to use any follow-up actors at all and accept a small performance penalty in exchange for greater reliability?
Thanks.