I created a queue through the Amazon MQ web console (using the ActiveMQ engine). The queue is automatically deleted if there are no messages on it for a while. I'd like to know how to prevent an empty queue from being deleted on AWS.
I installed ActiveMQ on my PC. After I created a queue there, it was never deleted automatically; that is the default behavior of an ActiveMQ queue.
Currently we are using Artemis for our publish-subscribe pattern, and in one specific case we use temporary queues to subscribe to a topic. They are non-shared and non-durable, and they receive messages as long as there is a consumer listening on them. As a simple example: application instances use a local cache for a configuration, and if that configuration changes, an event is published, each instance receives the same message, and each evicts its local cache. Each instance connects with a temporary queue (name created by the broker with a UUID) at startup, and instances may be restarted because of a deployment or rescheduling on Kubernetes (they run on spot instances). Is it possible to migrate this usage to AWS services using SNS and SQS?
So far, the closest thing I could find is virtual queues, but as far as I understand, different virtual queues (on one standard host queue) do not all receive the same message. If I have to use one standard queue per instance to subscribe, I would need unique queue names per instance, and since instances may also scale up and then down, the application would need to detect queues that no longer have consumers and remove them (so they stop receiving messages from the topic).
I made some trials with virtual queues, where I created two consumer threads (receiving messages with AmazonSQSVirtualQueuesClient) and sent a message to the host queue (with AmazonSQSClient). The messages did not end up on the virtual queues; in fact they are still sitting on the host queue. I also tried sending the message with AmazonSQSVirtualQueuesClient, but then I get the warning WARNING: Orphaned message sent to ... . I believe virtual queues only fit the request-responder pattern, where the publisher has to know the exact destination.
I have two services, a producer (Service A) and a consumer (Service B). Service A produces a message that is published to an Amazon SQS queue, and the message is then delivered to Service B, which is subscribed to that queue. This works fine as long as I have a single instance of Service B.
But when I start another instance of Service B, so that two instances of the same service are consuming from the same queue, I observe that messages from SQS are delivered in a round-robin fashion: at any given time, only one instance of Service B receives the message published by Service A. I want every instance of Service B to receive each message published to this queue.
How can we do this? I have developed these services as Spring Boot applications using the Spring Cloud dependencies.
Please see the diagram below for reference.
If you are interested in building functionality like this, use SNS, not SQS. We have a Spring Boot example that shows how to build a web app that lets users sign up for email subscriptions; when a message is published, all subscribed email addresses get the message.
The purpose of this example is to get you up and running building a Spring Boot app that uses the Amazon Simple Notification Service. That is, you can build this app with Spring Boot and the official AWS SDK for Java (v2):
Creating a Publish/Subscription Spring Boot Application
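At its core, that example boils down to a few SNS calls. Here is a minimal sketch with the AWS SDK for Java v2 (the topic name and email address are placeholders; the real example wires these calls into Spring controllers):

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.CreateTopicRequest;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;

public class SnsEmailFanout {
    public static void main(String[] args) {
        try (SnsClient sns = SnsClient.create()) {
            // Create (or look up) the topic; CreateTopic returns the same ARN for an existing name.
            String topicArn = sns.createTopic(CreateTopicRequest.builder()
                    .name("service-b-notifications")          // placeholder topic name
                    .build()).topicArn();

            // Each user signs up by subscribing an email endpoint.
            // The subscription becomes active only after the user confirms the email.
            sns.subscribe(SubscribeRequest.builder()
                    .topicArn(topicArn)
                    .protocol("email")
                    .endpoint("user@example.com")              // placeholder address
                    .build());

            // Publishing once delivers the message to every confirmed subscriber.
            sns.publish(PublishRequest.builder()
                    .topicArn(topicArn)
                    .message("Hello from Service A")
                    .build());
        }
    }
}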
While your messages may appear to be read in a round-robin fashion, they are not actually consumed in a round robin. SQS works by making every message available to any consumer (that has the appropriate IAM permissions) and, as soon as one consumer fetches a message, hiding it for a configurable amount of time (the visibility timeout), effectively "locking" that message. The fact that your consumers seem to be operating in a round-robin way is most likely coincidental.
As others have mentioned, you could use SNS instead of SQS to fan out messages to multiple consumers at once, but that's not as simple a setup as it may sound. If your Service B is load balanced, the HTTP endpoint subscription will point to the load balancer's DNS name, and thus only one instance will get each message. Assuming your instances have a public IP, you could modify your app so that each instance self-registers as an HTTP subscriber to the topic when the application starts up. The downsides here are that you're not only bypassing your load balancer, you're also losing the durability guarantees that come with SQS, since an SNS topic will try to deliver the message X times but will simply drop it after that.
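A sketch of that self-registration call, assuming the instance can discover its own public address and exposes an endpoint that handles the SNS subscription-confirmation and notification POSTs (the environment variables and URL path here are placeholders):

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;

public class SelfRegisterHttpSubscriber {
    public static void main(String[] args) {
        String topicArn = System.getenv("TOPIC_ARN");          // assumed to be provided
        String publicIp = System.getenv("INSTANCE_PUBLIC_IP"); // e.g. read from instance metadata

        try (SnsClient sns = SnsClient.create()) {
            // Subscribe this instance's own HTTP endpoint directly, bypassing the load balancer.
            // SNS first POSTs a SubscriptionConfirmation message that the endpoint must confirm.
            sns.subscribe(SubscribeRequest.builder()
                    .topicArn(topicArn)
                    .protocol("http")
                    .endpoint("http://" + publicIp + "/sns/notifications") // hypothetical path
                    .build());
        }
    }
}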
An alternative solution would be to set the visibility timeout on the SQS queue to 0. That way messages are never locked and every consumer can read them. It also means you'll need to modify your application to a) not process messages twice, since the same message will likely be read more than once before processing finishes, and b) handle failure gracefully when one instance deletes the message from the queue and other instances later try to delete it as well.
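Setting that timeout is a single queue-attribute change; a sketch with the AWS SDK for Java v2 (the queue URL is a placeholder):

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.SetQueueAttributesRequest;

public class DisableMessageLocking {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/service-b-queue"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            // With a visibility timeout of 0, a received message is never hidden,
            // so every polling instance can read it -- and must deduplicate it itself.
            sqs.setQueueAttributes(SetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributes(Map.of(QueueAttributeName.VISIBILITY_TIMEOUT, "0"))
                    .build());
        }
    }
}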
Alternatively, you could use some sort of service mesh or service discovery mechanism so that instances can communicate with each other peer-to-peer: one instance pulls the message from the SQS queue and propagates it to the other instances of the service.
You could also use a distributed store like Redis or DynamoDB to persist the messages and their current status so that every instance can read them, but only one instance ever inserts a new row.
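The "only one instance ever inserts a new row" part maps naturally to a conditional write; with DynamoDB that would be a PutItem with a condition expression, roughly like this (the table and attribute names are hypothetical):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class ClaimMessage {
    // Returns true if this instance was the first to record the message.
    static boolean claim(DynamoDbClient ddb, String messageId) {
        try {
            ddb.putItem(PutItemRequest.builder()
                    .tableName("processed-messages")                     // hypothetical table
                    .item(Map.of(
                            "messageId", AttributeValue.builder().s(messageId).build(),
                            "status", AttributeValue.builder().s("PROCESSING").build()))
                    .conditionExpression("attribute_not_exists(messageId)")
                    .build());
            return true;   // row inserted: this instance owns the message
        } catch (ConditionalCheckFailedException e) {
            return false;  // another instance already claimed it
        }
    }
}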
Ultimately there are a few solutions out there for this, but without understanding the use case properly it's hard to make a firm recommendation.
Implement message fanout using Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS). There is a hands-on Getting Started example of this.
Here's how it works: in the fanout model, Service A publishes a message to an SNS topic. Each instance of Service B has an associated SQS queue that is subscribed to that topic. The published message is delivered to every subscribed queue and hence to every instance of Service B.
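If it helps, here is a minimal sketch of that wiring with the AWS SDK for Java v2 (topic and queue names are placeholders; each Service B instance would run the queue-side part with its own unique queue name, and Service A only runs the publish call):

import java.util.Map;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.SubscribeRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;
import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;
import software.amazon.awssdk.services.sqs.model.SetQueueAttributesRequest;

public class SnsSqsFanout {
    public static void main(String[] args) {
        try (SnsClient sns = SnsClient.create(); SqsClient sqs = SqsClient.create()) {
            String topicArn = sns.createTopic(b -> b.name("service-a-events")).topicArn();

            // One queue per Service B instance (unique name per instance).
            String queueUrl = sqs.createQueue(CreateQueueRequest.builder()
                    .queueName("service-b-" + java.util.UUID.randomUUID())
                    .build()).queueUrl();
            String queueArn = sqs.getQueueAttributes(GetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributeNames(QueueAttributeName.QUEUE_ARN)
                    .build()).attributes().get(QueueAttributeName.QUEUE_ARN);

            // Allow the topic to deliver messages to this queue.
            String policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{"
                    + "\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"sns.amazonaws.com\"},"
                    + "\"Action\":\"sqs:SendMessage\",\"Resource\":\"" + queueArn + "\","
                    + "\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"" + topicArn + "\"}}}]}";
            sqs.setQueueAttributes(SetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributes(Map.of(QueueAttributeName.POLICY, policy))
                    .build());

            // Subscribe this instance's queue to the topic.
            sns.subscribe(SubscribeRequest.builder()
                    .topicArn(topicArn).protocol("sqs").endpoint(queueArn).build());

            // Service A publishes once; every subscribed queue gets its own copy.
            sns.publish(PublishRequest.builder()
                    .topicArn(topicArn).message("cache-invalidated").build());
        }
    }
}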
I have an application that uses AWS SQS with Lambda to process the messages pushed onto the queue. The Lambda keeps polling the queue, and when a new message appears it processes the message.
For this scenario, is it possible to replace SQS with Kafka on AWS? In other words, can we use Kafka as a queue for this use case?
You absolutely can. Have a look at Amazon Managed Streaming for Apache Kafka (Amazon MSK). It's a managed service for Apache Kafka.
As for Lambda triggers, unfortunately MSK is not a built-in trigger. You can replicate the behaviour with a periodically triggered Lambda function that checks whether messages are available and then either invokes the function that will process the message or processes the message directly. For some direction, you can refer to this official guide, which sets up a similar pipeline, but for Amazon MQ.
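As a rough illustration of that workaround, a scheduled Lambda could poll the Kafka topic itself using the standard Kafka consumer API. The sketch below uses placeholder environment variables and a placeholder topic name, and omits MSK networking/authentication setup:

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Triggered periodically by an EventBridge/CloudWatch Events schedule.
public class KafkaPollingHandler implements RequestHandler<Map<String, Object>, Void> {
    @Override
    public Void handleRequest(Map<String, Object> event, Context context) {
        Properties props = new Properties();
        props.put("bootstrap.servers", System.getenv("MSK_BOOTSTRAP_SERVERS")); // placeholder
        props.put("group.id", "lambda-poller");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));                    // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                // Process the message here, or hand it off to another function.
                context.getLogger().log("Received: " + record.value());
            }
            consumer.commitSync(); // commit offsets only after successful processing
        }
        return null;
    }
}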
I am getting familiar with queue services in Amazon.
SQS is pull-based, not push-based, so I have to have an EC2 instance pulling the messages out of the queue.
Are those instances EC2 AMI VMs? Or, when I create an SQS queue, do I have to associate it with a special EC2 instance?
Why can we lose an EC2 instance while it is reading from the queue?
Any computer on the Internet can make a ReceiveMessage() API call. It could be an Amazon EC2 instance, an AWS Lambda function, a container, or even the computer under your desk.
The typical architecture is that some 'worker' code is running somewhere, and it polls the Amazon SQS queue to ask for a message. If a message is available, the worker processes it and then deletes it.
So, simply include the code to 'pull' the message within the program that will process the message.
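A minimal worker loop with the AWS SDK for Java v2 might look like this (the queue URL is a placeholder; long polling is enabled via the wait time):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class SqsWorker {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            while (true) {
                // Long poll: wait up to 20 seconds for messages instead of hammering the API.
                for (Message message : sqs.receiveMessage(ReceiveMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .maxNumberOfMessages(10)
                        .waitTimeSeconds(20)
                        .build()).messages()) {

                    process(message.body());

                    // Delete only after successful processing; otherwise the message
                    // becomes visible again once its visibility timeout expires.
                    sqs.deleteMessage(DeleteMessageRequest.builder()
                            .queueUrl(queueUrl)
                            .receiptHandle(message.receiptHandle())
                            .build());
                }
            }
        }
    }

    private static void process(String body) {
        System.out.println("Processing: " + body);
    }
}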
Is there a way to auto-delete SQS queues entirely? I have a solution in which a server, on startup, creates an SQS queue and subscribes it to an SNS topic.
However, there may be scenarios in which the server crashes and is irrecoverable. In such cases I would replace the server with a different one, which would create its own queue on startup. The earlier queue is then never used again.
Is there a way for the queue to get auto-deleted without me explicitly going and deleting it (for example, if the queue remains empty for 5 days, it gets auto-deleted, or some other alternative)?
At the moment, AWS SQS does not provide a mechanism to automatically delete a queue when it has been empty for a certain number of days. I also feel this is a needed feature, but there are ways to tackle the problem.
Mentioned below are a few ways to delete the SQS queue, depending on the scenario in your question. You can pick whichever suits you best.
Maintain a small database table that keeps the mapping between each server and its queue URL. Insert a row into this table when a server starts. Maintain a CloudWatch rule that invokes a Lambda, which goes through the rows in the table and checks whether each server is still running (for example via a heartbeat). If a particular server is not running, simply get the related SQS URL and delete that queue. (I suggest Lambda here because it is cheap.)
sqs.deleteQueue(new DeleteQueueRequest(myQueueUrl));
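A fuller sketch of that cleanup logic with the AWS SDK for Java v2 (shown as a plain program for brevity; the same code would sit inside the Lambda handler), assuming a hypothetical DynamoDB table named server-queues with serverInstanceId and queueUrl attributes, and treating any instance that is not in the running state as dead:

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DescribeInstancesRequest;
import software.amazon.awssdk.services.ec2.model.Ec2Exception;
import software.amazon.awssdk.services.ec2.model.InstanceStateName;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteQueueRequest;

public class OrphanedQueueCleaner {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create();
             Ec2Client ec2 = Ec2Client.create();
             SqsClient sqs = SqsClient.create()) {

            // Hypothetical table mapping each server's instance id to the queue it created.
            ddb.scan(ScanRequest.builder().tableName("server-queues").build()).items()
               .forEach(item -> {
                   String instanceId = item.get("serverInstanceId").s();
                   String queueUrl = item.get("queueUrl").s();
                   if (!isRunning(ec2, instanceId)) {
                       sqs.deleteQueue(DeleteQueueRequest.builder().queueUrl(queueUrl).build());
                       // A real implementation would also remove the table entry here.
                   }
               });
        }
    }

    private static boolean isRunning(Ec2Client ec2, String instanceId) {
        try {
            return ec2.describeInstances(DescribeInstancesRequest.builder()
                            .instanceIds(instanceId).build())
                    .reservations().stream()
                    .flatMap(r -> r.instances().stream())
                    .anyMatch(i -> i.state().name() == InstanceStateName.RUNNING);
        } catch (Ec2Exception e) {
            return false; // the instance id no longer exists
        }
    }
}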
Whenever a server is started, it can send an email to a person with the server IP and SQS URL, using SNS. Using a CloudWatch rule, invoke a Lambda from time to time, get all instances, and check whether any instance is down. If an instance is down, email the relevant person via SNS saying that this server is down. This is semi-automatic: the user can manually delete the queue after seeing the email.
Simply let the empty queues be. There is no limit on how many queues can be created in AWS, so why bother deleting them if that process is hard? Just create new queues as you go along. (See: No Max number of SQS Queue Limitation)
There is no built-in method to auto-delete queues. You could use tags to mark connected resources (i.e. tag queues or other resources with their respective instance id on creation) and use a simple script that reads that tag and deletes the queue if the instance id no longer exists.
Here is how to do it with the CLI:
https://docs.aws.amazon.com/cli/latest/reference/sqs/tag-queue.html
(I'm assuming that by server you mean an EC2 instance. The IP could also be used.)
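For completeness, the same tagging can be done from code with the AWS SDK for Java v2; the tag key below is just an example:

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ListQueueTagsRequest;
import software.amazon.awssdk.services.sqs.model.TagQueueRequest;

public class QueueTagging {
    // At startup: tag the queue this server just created with its own instance id.
    static void tagOwnQueue(SqsClient sqs, String queueUrl, String instanceId) {
        sqs.tagQueue(TagQueueRequest.builder()
                .queueUrl(queueUrl)
                .tags(Map.of("owner-instance-id", instanceId)) // hypothetical tag key
                .build());
    }

    // During cleanup: read the tag back to find out which instance owns a queue.
    static String ownerOf(SqsClient sqs, String queueUrl) {
        return sqs.listQueueTags(ListQueueTagsRequest.builder().queueUrl(queueUrl).build())
                  .tags()
                  .get("owner-instance-id");
    }
}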