I am using EC2 and SQS. How can I consume two separate SQS queues from the same EC2 instance? My case: I use SQS to queue up tasks, and each task takes a fairly long time. Suddenly I have a requirement to process 50k messages, which will take at least a week. I want a separate queue (and polling thread) for those 50k messages, so that they do not force newly arriving messages to wait until they are all processed, and the main thread does not get delayed.
Your question doesn't quite read correctly, because SQS queues do not belong to EC2 instances. Queues are created at the account level, and an EC2 instance can use the AWS SDK client to create queues as needed.
From what you are saying, one approach to handle a sudden burst of messages would be to keep the messages in one queue and define an EC2 Auto Scaling group configured to scale the number of instances up and down based on the queue length (the AWS Auto Scaling documentation has instructions for this).
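As a rough sketch of that pattern (Python with boto3; the queue URL, metric namespace, and metric name are all placeholders), a small script can publish a backlog-per-instance metric that a target tracking scaling policy on the Auto Scaling group could act on:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/task-queue"  # placeholder

def publish_backlog_per_instance(instance_count: int) -> None:
    # ApproximateNumberOfMessagesVisible is the queue depth SQS reports.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessagesVisible"],
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessagesVisible"])
    # Publish a custom metric; a target tracking policy on the Auto Scaling
    # group can then add or remove instances to keep this value near a target.
    cloudwatch.put_metric_data(
        Namespace="MyApp/SQS",  # placeholder namespace
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Value": backlog / max(instance_count, 1),
            "Unit": "Count",
        }],
    )
```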
Alternatively, if this queue has messages that need to be separated because back pressure from one message type shouldn't impact the other, then you should create multiple queues (either using the console or the SDK) and poll them independently. You could poll from multiple threads, poll from one thread and fan the work out to multiple threads, poll from multiple processes, or use completely different EC2 instances to poll from. You have a lot of options open to you here.
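For the multiple-queue option, here is a minimal sketch (Python with boto3; queue URLs are hypothetical) that polls two queues from independent threads, so a backlog on the bulk queue never blocks the main queue:

```python
import threading
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

def process(body: str) -> None:
    print("handling", body)  # placeholder for the real task

def poll_forever(queue_url: str) -> None:
    while True:
        # Long polling (WaitTimeSeconds) avoids tight empty-receive loops.
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=msg["ReceiptHandle"],
            )

# One thread per queue: the 50k-message bulk queue cannot starve the main queue.
threads = [
    threading.Thread(target=poll_forever, args=(url,))
    for url in [
        "https://sqs.us-east-1.amazonaws.com/123456789012/main-queue",  # placeholder
        "https://sqs.us-east-1.amazonaws.com/123456789012/bulk-queue",  # placeholder
    ]
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```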
Related
I have an SQS queue containing messages that need not be consumed in order; the queue is mostly for decoupling purposes. I have 2 EC2 hosts that I want to poll this queue. Processing each message takes time. While one of my EC2 instances is processing a message, can the other instance poll the next message from the queue?
If this cannot be done, then is using an SQS an incorrect approach here? Should I instead configure an autoscaling group of EC2 instances and load balance the incoming requests among the EC2 instances?
Yes, it is possible. When an instance grabs a message, the message is put into "in flight" status and is not available to other instances polling the queue, effectively reserving that message for that consumer.
More info here: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/features-capabilities.html
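To illustrate the in-flight behaviour (boto3 sketch; the queue URL is a placeholder): once one consumer receives a message, it stays hidden from other consumers for the visibility timeout, and is only gone for good once the consumer deletes it.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

def handle(body: str) -> None:
    print("processing", body)  # placeholder for the real work

# Consumer A receives the message; it is now "in flight" and hidden
# from other consumers for the duration of the visibility timeout.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=300,  # hidden from other pollers for 5 minutes
)
for msg in resp.get("Messages", []):
    handle(msg["Body"])
    # Deleting within the visibility timeout prevents redelivery;
    # if the consumer crashes first, the message becomes visible again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```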
Currently we are using Artemis for our publish-subscribe pattern, and in one specific case we use temporary queues to subscribe to a topic. They are non-shared and non-durable, and they receive messages as long as there is a consumer listening on them. As a simple example: application instances use a local cache for a configuration, and if that configuration changes, an event is published, each instance receives the same message, and each evicts its local cache. Each instance connects with a temporary queue (named by the broker with a UUID) at startup, and instances may be restarted because of a deployment or rescheduling on Kubernetes (they run on spot instances). Is it possible to migrate this usage to AWS services using SNS and SQS?
So far, the closest thing I could find is virtual queues, but as far as I understand, different virtual queues (of one standard queue) do not receive the same message. If I have to use a standard queue per instance to subscribe, then I would need unique queue names per instance; there may also be scale-up followed by scale-down, so the application would need to detect queues that no longer have consumers and remove them (so they stop receiving messages from the topic).
I have made some trials with virtual queues, creating two consumer threads (receiving messages with AmazonSQSVirtualQueuesClient) and sending a message to the host queue (with AmazonSQSClient). The messages did not end up on the virtual queues; in fact they are still on the host queue. I also tried sending the message with AmazonSQSVirtualQueuesClient, but then I get the warning WARNING: Orphaned message sent to ... . I believe virtual queues only fit the request-responder pattern, where the exact destination is known to the publisher.
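For reference, the standard-queue-per-instance alternative described above might look like the following (boto3 sketch; the topic ARN and queue naming scheme are assumptions): each instance creates a uniquely named queue at startup, grants the topic permission to deliver to it, and subscribes it, so every instance sees every published event.

```python
import json
import uuid
import boto3

region = "us-east-1"
sns = boto3.client("sns", region_name=region)
sqs = boto3.client("sqs", region_name=region)

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:config-changed"  # placeholder

# Unique queue per instance, created at startup (mirrors the Artemis temporary queue).
queue_name = f"cache-evict-{uuid.uuid4()}"
queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"],
)["Attributes"]["QueueArn"]

# Allow the topic to deliver into this queue.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
    }],
})})

# Each instance has its own subscription, so each one receives every message.
sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=queue_arn)
```

Cleaning up queues whose instances have gone away would still need separate handling, for example a periodic sweep that deletes queues with no recent consumers.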
I have a problem where we may have different SLOs (service level objectives) depending on the request. Some requests we want to process within 5 minutes, and some requests can take longer to process (2 hours, etc.).
I was going to use Amazon SQS to queue up the messages that need to be processed, and then use auto scaling to add resources in order to process within the allotted SLO. For example, if one machine can process one request every 10 seconds, then within 5 minutes I can process 30 messages. If I detect that the number of messages in the queue is > 30, I should spawn another machine to meet the 5-minute SLO demand.
Similarly, if I have a 2-hour SLO, I can have a backlog as large as 720 before I need to scale up.
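The scale-out thresholds follow directly from that arithmetic; as a tiny illustration (numbers taken from the example above):

```python
# Maximum backlog one machine can clear within the SLO,
# assuming one message every 10 seconds.
def max_backlog(slo_seconds: int, seconds_per_message: int = 10) -> int:
    return slo_seconds // seconds_per_message

print(max_backlog(5 * 60))       # 30  -> scale out beyond 30 queued messages
print(max_backlog(2 * 60 * 60))  # 720 -> scale out beyond 720 queued messages
```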
Based on this, I can't really place these different SLOs into the same queue, because then they will interfere with each other.
Possible approaches I was considering:
1. Have an SQS queue for each SLO, and auto-scale accordingly.
2. Have multiple message groups (one for each SLO), and auto-scale based on message group.
Is (2) even possible? I couldn't find documentation on it. If both are possible, what are the pros and cons of each?
If you have messages to process with different priorities, the normal method is:
Create 2 Amazon SQS queues: One for high-priority messages, another for 'normal' messages
Have the workers pull from the high-priority queue first. If it is empty, pull from the other queue.
However, this means that 'normal' messages might never get processed if there are always messages in the high-priority queue, so you could instead have a certain number of workers pulling 'high then normal', and other workers just pulling from 'normal'.
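A minimal worker loop for that two-queue pattern might look like this (boto3 sketch; queue URLs are hypothetical). A short WaitTimeSeconds on the high-priority receive keeps the fallback to the normal queue responsive:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
HIGH = "https://sqs.us-east-1.amazonaws.com/123456789012/high-priority"  # placeholder
NORMAL = "https://sqs.us-east-1.amazonaws.com/123456789012/normal"       # placeholder

def process(body: str) -> None:
    print("working on", body)  # placeholder for the real work

def receive_one(queue_url: str, wait: int):
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=wait)
    msgs = resp.get("Messages", [])
    return (queue_url, msgs[0]) if msgs else None

while True:
    # Check high-priority first; fall back to normal only when it is empty.
    found = receive_one(HIGH, wait=1) or receive_one(NORMAL, wait=10)
    if not found:
        continue
    queue_url, msg = found
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```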
An altogether better way would be to process the messages with AWS Lambda functions. The default concurrency limit of 1000 can be increased on request. AWS Lambda would take care of all scaling and capacity issues, and would likely be the cheaper option since there is no cost for idle time.
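For the Lambda route, the handler itself is simple; the SQS event source mapping invokes it with batches of records (sketch, assuming a standard SQS trigger):

```python
# Lambda handler for an SQS event source mapping. Lambda deletes
# successfully processed messages automatically; raising an exception
# makes the failed messages visible again for retry.
def handler(event, context):
    for record in event["Records"]:
        do_work(record["body"])

def do_work(body: str) -> None:
    print("processing", body)  # placeholder for the real work
```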
I have a multi-region ECS Fargate setup, running 2 tasks in 1 cluster per region. In total I have 4 tasks: 2 in us-east-1 and 2 in us-west-1.
The purpose of the ECS consumer tasks is to process messages as and when messages are available in SQS.
SQS will be configured in just a single region. The SQS queue's ARN will be configured in the container running the tasks.
With this setup, when there are messages in SQS, how does the traffic get distributed across all available ECS tasks across regions? Is it random? Someone please clarify.
I am not configuring load balancers for the ECS task since I do not have external calls. The source is always the messages from SQS.
With this setup, when there are messages in SQS, how does the traffic get distributed across all available ECS tasks across regions? Is it random? Someone please clarify
It's not random, but it is arbitrary. Here is what the docs say:
Standard queues provide best-effort ordering which ensures that messages are generally delivered in the same order as they're sent.
The reason that it's arbitrary is because SQS queues are distributed across multiple nodes and you have no idea how many nodes there are. So if SQS decides that you need 20 nodes to handle the rate that messages are added to the queue, and you retrieve 10 messages at a time (the limit), clearly you're going to get messages from some subset of those nodes.
Going into the realm of complete speculation, long polling might improve your chances of getting messages in the order that they were sent, because it is documented to "quer[y] all of the servers for messages." Of course, that could only apply when you can't fill your response from a single server. I would expect it to grab all messages that it can from each server and return as soon as it hits the maximum number of messages, even if it hasn't actually queried all servers.
SQS will be configured in just a single region. The SQS queue's ARN will be configured in the container running the tasks.
Beware that you need the queue URL, not its ARN, in order to retrieve messages.
Beware also that -- at least with the Python SDK -- you need to configure your SQS client's region to match the region where the queue exists (even though you pass the URL, which contains the region).
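In boto3 that looks like the following (queue URL is a placeholder); note that the region passed to the client must match the region encoded in the queue URL:

```python
import boto3

# The client's region must match the region of the queue being polled.
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                           WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```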
I am looking at using SNS and SQS to deliver updates to machines running the same service. Since the plan is not for machines to communicate with each other, I was planning on creating an SQS queue for each machine (the queue would be created at startup).
I am, however, not sure how to use a dead-letter queue (DLQ) in this case. Should each queue have its own DLQ, or can I have a common one shared across my queues in the region? The concern with the former approach is that too many queues would be created (2x the number of machines), and the concern with the latter is potentially having multiple copies of the same message in the queue.
What is the best practice and recommended approach when using multiple SQS queues?
I wouldn't be concerned with the number of queues - they don't cost anything - so it really depends on how you plan on using the items in the dead-letter queue. I'll make the assumption that you will have some sort of process to review items in the DLQ to figure out why they were not processed before expiring.
Without knowing the details of what you plan to do, I would think a single DLQ would be better, and if you need to periodically process DLQ records, the processing app/system only needs to monitor that single queue.
Can't see the advantage of multiple DLQs in this case, at least based on your question.
As you are planning a fan-out process, having multiple queues does no harm as long as they are used for asynchronous processing; otherwise a single queue is preferred. A fan-out process is generally used when you want to process several tasks concurrently by dividing them among several queues and working on them separately (see the SNS documentation on fan-out).
The purpose of a dead-letter queue (DLQ) is to store messages that could not be processed successfully. Unless your process has a major fault, the number of messages that end up in a DLQ should be very small, so it is okay to go ahead and use one DLQ for all the other queues.
Having multiple DLQs brings overhead, since several processes would have to poll the DLQs for failed messages; having just one DLQ reduces this overhead.
It is recommended to use multiple DLQs only if you want to keep different categories of failed messages separate.
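Concretely, pointing several queues at one shared DLQ is just a matter of setting the same RedrivePolicy on each (boto3 sketch; ARNs and URLs are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:shared-dlq"  # placeholder

redrive_policy = json.dumps({
    "deadLetterTargetArn": DLQ_ARN,
    # Number of failed receives before a message is moved to the DLQ.
    "maxReceiveCount": "5",
})

# Apply the same redrive policy to every per-machine queue.
for queue_url in [
    "https://sqs.us-east-1.amazonaws.com/123456789012/machine-a",  # placeholder
    "https://sqs.us-east-1.amazonaws.com/123456789012/machine-b",  # placeholder
]:
    sqs.set_queue_attributes(QueueUrl=queue_url,
                             Attributes={"RedrivePolicy": redrive_policy})
```

Note that a DLQ must live in the same region and account as the queues that use it.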