How can I get the current queue in Ember.js?

Please tell me: how can I debug queues in Ember.js and get the current queue at a "debugger;" breakpoint?

You can get the current queue by inspecting:
Ember.run.currentRunLoop.queues
You'll notice you have many queues there:
Object {sync: Queue, actions: Queue, routerTransitions: Queue, render:
Queue, afterRender: Queue…}
Expand each property that is a Queue (for example, actions) and check whether it has a _queueBeingFlushed property defined. If it does, that is the current queue.
Example of _queueBeingFlushed for actions Queue:
_queueBeingFlushed: Array[4]
0: null
1: ()
2: undefined
3: undefined
length: 4
Knowing that, you can also filter Ember.run.currentRunLoop.queues and get the current queue programmatically.
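For example, here is a minimal sketch of that filtering (it relies on the private _queueBeingFlushed property shown above, so treat it as a debugging aid that may break between Ember versions):

function currentQueueName() {
  var queues = Ember.run.currentRunLoop.queues;
  // The queue whose _queueBeingFlushed is defined is the one being flushed.
  return Object.keys(queues).filter(function (name) {
    return queues[name]._queueBeingFlushed !== undefined;
  })[0];
}
var currentQueue = Ember.run.currentRunLoop.queues[currentQueueName()];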

Related

How to block an AWS SQS FIFO queue while having a message in the dead-letter queue?

Imagine the following lifetime of an Order.
Order is Paid
Order is Approved
Order is Completed
We chose to use an SQS FIFO queue to ensure all these messages are processed in the order they are produced, so that, for example, an order's status changes to Approved only after it was Paid, and not after it has been Completed.
But let's say there is an error while trying to Approve an order, and after several attempts the message is moved to the dead-letter queue.
The problem we noticed is that the subsequent message, "Order is Completed", is processed even though the previous message, "Order is Approved", is sitting in the dead-letter queue.
How should we handle this?
Should we check the dead-letter queue for messages with the same MessageGroupId as the one being consumed, assuming we could even do this?
Is there a mechanism that we are missing?
You don't have to block the queue; rather, only add messages when they are ready to be processed. If you have a flow in which certain steps (messages to be processed) depend on the success of previous steps, make each message be added only after the previous step (message) succeeds.
Let's say you have an SQS queue with a handler to process the message, it could look something like below.
Since you're using the same FIFO queue for all steps, I'm using the STEP as the MessageGroupId to allow messages of different steps to be processed in parallel (as they could belong to different orders), but the steps of one particular order are always processed in sequence and require the previous one to succeed.
On a side note, you shouldn't need FIFO queues with the approach below, and you could have separate queues with separate handlers for each step.
const AWS = require("aws-sdk"); // AWS SDK for JavaScript v2

const sqs = new AWS.SQS();

const STEPS = {
  ORDER_PAID: "ORDER_PAID",
  ORDER_APPROVED: "ORDER_APPROVED",
  ORDER_COMPLETED: "ORDER_COMPLETED",
};

async function sendMessage(step: string, orderId: string) {
  return sqs
    .sendMessage({
      QueueUrl: process.env.YOUR_QUEUE_URL || "",
      MessageGroupId: step,
      // Note: a FIFO queue also requires a MessageDeduplicationId per message,
      // unless content-based deduplication is enabled on the queue.
      MessageBody: JSON.stringify({
        currentStep: step,
        orderId,
      }),
    })
    .promise();
}

exports.handler = async function (event: any, context: any) {
  for (const message of event.Records) {
    const { currentStep, orderId } = JSON.parse(message.body);
    if (currentStep === STEPS.ORDER_PAID) {
      // Process "order paid", then add the next step back to the queue.
      await sendMessage(STEPS.ORDER_APPROVED, orderId);
    }
    if (currentStep === STEPS.ORDER_APPROVED) {
      // Process "order approved", then add the next step back to the queue.
      await sendMessage(STEPS.ORDER_COMPLETED, orderId);
    }
    if (currentStep === STEPS.ORDER_COMPLETED) {
      // Process "order completed"; this is the last step, nothing to enqueue.
    }
  }
  return context.logStreamName;
};
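A producer would then kick off the flow with the first step once payment succeeds, for example (the order ID here is hypothetical):

await sendMessage(STEPS.ORDER_PAID, "order-123"); // starts the chain for this order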

Amazon Java SQS Client: How can I selectively delete a message from the queue?

I have a Spring Boot class that receives messages from a (currently) FIFO SQS queue like so:
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest()
        .withQueueUrl(queueUrl)
        .withMaxNumberOfMessages(numMessages);
Map<String, String> messageMap = new HashMap<>();
try {
    List<Message> messages = sqsClient.receiveMessage(receiveMessageRequest).getMessages();
    if (!messages.isEmpty()) {
        if (messages.size() == 1) {
            Message message = messages.get(0);
            String messageBody = message.getBody();
            String receiptHandle = message.getReceiptHandle();
            // snipped
        }
    }
} catch (AmazonSQSException e) {
    // snipped
}
I want the ability to "skip around" messages and find only a particular message to remove from this queue. My lead is certain this can be done, but I have doubts. These are my thoughts:
1. If I change to a Standard Queue, can this be done?
2. I see you have to receive a message to get the receiptHandle for the DeleteMessageRequest. But if I receive a message I want processed, not the message to delete, how do I put it back in the queue?
3. Do I extend the visibilityTimeout to let the message be picked up later?
Yes, exactly as you described: receive the message, extract the receipt handle, and submit a DeleteMessageRequest. Taking your thoughts in order:
1. Yes, a Standard queue works for this too.
2. By simply doing nothing: the message will automatically pop back up in the queue after its visibility timeout expires. Note that even such a basic receive increases the receive counter and may push the message into a DLQ, depending on your configuration.
3. No, extending the visibility timeout will only delay further processing even more.
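Here is a minimal sketch of that receive, inspect, and selectively delete flow (shown with the AWS SDK for JavaScript v2 rather than the Java client, but the calls mirror each other; the match predicate is hypothetical):

const AWS = require("aws-sdk");
const sqs = new AWS.SQS();

async function deleteMatching(queueUrl: string, shouldDelete: (body: string) => boolean) {
  const { Messages } = await sqs
    .receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 })
    .promise();
  for (const message of Messages || []) {
    if (shouldDelete(message.Body || "")) {
      // Delete only the message we were looking for.
      await sqs
        .deleteMessage({ QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle || "" })
        .promise();
    }
    // Messages we leave untouched become visible again once their
    // visibility timeout expires (their receive count goes up, though).
  }
}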

Processing AWS Lambda messages in Batches

I am wondering something, and I really can't find information about it. Maybe it is not the way to go but, I would just like to know.
It is about Lambda working in batches. I know I can set up Lambda to consume batch messages. In my Lambda function I iterate over each message, and if one fails, the Lambda exits and the cycle starts again.
I am wondering about a slightly different approach.
Let's assume I have three messages: A, B, and C, taken as a batch. Now if message B fails (e.g. an API call failed), I return message B to SQS and keep processing message C.
Is it possible? If it is, is it a good approach? Because I see that I need to implement some extra complexity in Lambda and what not.
Thanks
There's an excellent article here. The relevant parts for you are...
Using a batchSize of 1, so that messages succeed or fail on their own.
Making sure your processing is idempotent, so reprocessing a message isn't harmful, outside of the extra processing cost.
Handle errors within your function code, perhaps by catching them and sending the message to a dead letter queue for further processing.
Calling the DeleteMessage API manually within your function after successfully processing a message.
The last bullet point is how I've managed to deal with the same problem. Instead of returning errors immediately, store them or note that an error has occurred, but then continue to handle the rest of the messages in the batch. At the end of processing, return or raise an error so that the SQS -> lambda trigger knows not to delete the failed messages. All successful messages will have already been deleted by your lambda handler.
import logging

import boto3

logger = logging.getLogger(__name__)
sqs = boto3.client('sqs')

def handler(event, context):
    failed = False
    for msg in event['Records']:
        try:
            # Do something with the message.
            handle_message(msg)
        except Exception:
            # Ok it failed, but allow the loop to finish.
            logger.exception('Failed to handle message')
            failed = True
        else:
            # The message was handled successfully. We can delete it now.
            sqs.delete_message(
                QueueUrl=<queue_url>,
                ReceiptHandle=msg['receiptHandle'],
            )
    # It doesn't matter what the error is. You just want to raise here
    # to ensure the trigger doesn't delete any of the failed messages.
    if failed:
        raise RuntimeError('Failed to process one or more messages')

def handle_message(msg):
    ...
For Node.js, check out https://www.npmjs.com/package/@middy/sqs-partial-batch-failure.
const middy = require('@middy/core')
const sqsBatch = require('@middy/sqs-partial-batch-failure')

const originalHandler = (event, context, cb) => {
  const recordPromises = event.Records.map(async (record, index) => { /* Custom message processing logic */ })
  return Promise.allSettled(recordPromises)
}

const handler = middy(originalHandler)
  .use(sqsBatch())
Check out https://medium.com/@brettandrews/handling-sqs-partial-batch-failures-in-aws-lambda-d9d6940a17aa for more details.
As of Nov 2019, AWS has introduced the concept of Bisect On Function Error, along with maximum retries. If your function is idempotent, this can be used.
In this approach you should throw an error from the function even if only one item in the batch fails. AWS will split the batch into two and retry. Now one half of the batch should pass successfully. For the other half, the process continues until the bad record is isolated.
Like all architecture decisions, it depends on your goal and what you are willing to trade for more complexity. Using SQS will allow you to process messages out of order so that retries don't block other messages. Whether or not that is worth the complexity depends on why you are worried about messages getting blocked.
I suggest reading about Lambda retry behavior and Dead Letter Queues.
If you want to retry only the failed messages out of a batch of messages it is totally doable, but does add slight complexity.
A possible approach to achieve this is iterating through a list of your events (ex [eventA, eventB, eventC]), and for each execution, append to a list of failed events if the event failed. Then, have an end case that checks to see if the list of failed events has anything in it, and if it does, manually send the messages back to SQS (using SQS sendMessageBatch).
However, you should note that this puts the events at the end of the queue, since you are manually inserting them back.
Anything can be a "good approach" if it solves a problem you are having without much complexity, and in this case, the issue of having to re-execute successful events is definitely a problem that you can solve in this manner.
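Here is a minimal sketch of that collect-and-resend approach (the queue URL environment variable and the processRecord function are hypothetical; uses the AWS SDK for JavaScript v2):

const AWS = require("aws-sdk");
const sqs = new AWS.SQS();

exports.handler = async (event: any) => {
  const failedEntries = [];
  for (const [index, record] of event.Records.entries()) {
    try {
      await processRecord(record); // hypothetical per-message processing
    } catch (err) {
      // Remember the failed event so it can be re-queued afterwards.
      failedEntries.push({ Id: String(index), MessageBody: record.body });
    }
  }
  if (failedEntries.length > 0) {
    // Re-queue the failed events; note they land at the end of the queue.
    await sqs
      .sendMessageBatch({ QueueUrl: process.env.QUEUE_URL || "", Entries: failedEntries })
      .promise();
  }
};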
SQS/Lambda supports reporting batch failures. How it works: within each batch iteration, you catch all exceptions, and if an iteration fails you add that messageId to an SQSBatchResponse. At the end, when all SQS messages have been processed, you return the batch response.
Here is the relevant docs section: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting
To use this feature, your function must gracefully handle errors. Have your function logic catch all exceptions and report the messages that result in failure in batchItemFailures in your function response. If your function throws an exception, the entire batch is considered a complete failure.
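A sketch of a handler that returns that response shape (processRecord is hypothetical):

exports.handler = async (event: any) => {
  const batchItemFailures = [];
  for (const record of event.Records) {
    try {
      await processRecord(record); // hypothetical per-message processing
    } catch (err) {
      // Report only this message as failed; the rest of the batch succeeds.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  // Lambda re-drives only the reported failures, provided the event source
  // mapping has ReportBatchItemFailures enabled (see the answer below).
  return { batchItemFailures };
};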
To add to the answer by David:
SQS/Lambda supports reporting batch failures. How it works: within each batch iteration, you catch all exceptions, and if an iteration fails you add that messageId to an SQSBatchResponse. At the end, when all SQS messages have been processed, you return the batch response.
Here is the relevant docs section: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting
I implemented this, but a batch of A, B and C, with B failing, would still mark all three as complete. It turns out you need to explicitly configure the Lambda event source mapping to expect a batch failure to be returned. This is done by adding the key FunctionResponseTypes with a value of a list containing ReportBatchItemFailures. Here are the relevant docs: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting
My SAM template looks like this after adding it:

Type: SQS
Properties:
  Queue: my-queue-arn
  BatchSize: 10
  Enabled: true
  FunctionResponseTypes:
    - ReportBatchItemFailures

How to avoid receiving messages multiple times from a ServiceBus Queue when using the WebJobs SDK

I have got a WebJob with the following ServiceBus handler using the WebJobs SDK:
[Singleton("{MessageId}")]
public static async Task HandleMessagesAsync([ServiceBusTrigger("%QueueName%")] BrokeredMessage message, [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue, TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));
        // To avoid coupling Microsoft.Azure.WebJobs the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);
        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }
    }
}
If the prerequisites for the handler aren't fulfilled, outputMessages contains a BrokeredMessage with the same MessageId, Label, and payload as the one currently being handled, but with a ScheduledEnqueueTimeUtc in the future.
The idea is that we complete the handling of the current message quickly and wait for a retry by scheduling the new message in the future.
Sometimes, especially when there are more messages in the Queue than the SDK peek-locks, I see messages duplicating in the ServiceBus queue. They have the same MessageId, Label and payload, but a different SequenceNumber, EnqueuedTimeUtc and ScheduledEnqueueTimeUtc. They all have a delivery count of 1.
Looking at my handler code, the only way this can happen is if I received the same message multiple times, figured out that I needed to wait, and created a new message for handling in the future. The handler finishes successfully, so the original message gets completed.
The initial messages are unique. Also I put the SingletonAttribute on the message handler, so that messages for the same MessageId cannot be consumed by different handlers.
Why are multiple handlers triggered with the same message and how can I prevent that from happening?
I am using Microsoft.Azure.WebJobs v2.1.0.
My handlers take at most 17 s and on average 1 s. The lock duration is 1 min. Still, my best theory is that something with the message (re)locking doesn't work: while I'm processing the handler, the lock gets lost, the message goes back to the queue, and it gets consumed another time. If both handlers see that the critical resource is still occupied, they both enqueue a new message.
After a little bit of experimenting, I figured out the root cause and found a workaround.
If a ServiceBus message is completed, but the peek lock is not abandoned, it will return to the queue in active state after the lock expires.
The ServiceBus QueueClient apparently abandons the lock once it receives the next message (or batch of messages).
So if the QueueClient used by the WebJobs SDK terminates unexpectedly (e.g. because of the process being ended or the Web App being restarted), all messages that have been locked appear back in the Queue, even if they have been completed.
In my handler I am now completing the message manually and also abandoning the lock like this:
public static async Task ProcessQueueMessageAsync([ServiceBusTrigger("%QueueName%")] BrokeredMessage message, [ServiceBus("%QueueName%")] ICollector<BrokeredMessage> queue, TextWriter logger)
{
    using (var scope = Program.Container.BeginLifetimeScope())
    {
        var handler = scope.Resolve<MessageHandlers>();
        logger.WriteLine(AsInvariant($"Handling message with label {message.Label}"));
        // To avoid coupling Microsoft.Azure.WebJobs the return type is IEnumerable<T>
        var outputMessages = await handler.OnMessageAsync(message).ConfigureAwait(false);
        foreach (var outputMessage in outputMessages)
        {
            queue.Add(outputMessage);
        }
        // Complete the message manually, then abandon the peek lock so the
        // completed message cannot reappear if the QueueClient dies unexpectedly.
        await message.CompleteAsync().ConfigureAwait(false);
        await message.AbandonAsync().ConfigureAwait(false);
    }
}
That way I don't get the messages back into the Queue in the reboot scenario.

Global vs User Queue

I want to have a UI button press trigger a block of code, so I created a queue and dispatched a block to it async, but I'm not seeing the block start in a reasonable amount of time.
Minimized example:

class InterfaceController: WKInterfaceController {
    ...
    let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)

    @IBAction func on_press() {
        print("Touch")
        queue.async {
            // Stuff
        }
    }
}
So I see the "Touch" line in the console, but nothing from the async block happens.
Odd thing is, if I use let queue = DispatchQueue.global() instead, it seems to work as desired. So what is the operational difference between making my own queue, and using the global one here? I would have expected my QoS to give it some CPU time.
So what is the operational difference between making my own queue, and using the global one here?
let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)
creates a new serial queue with a high priority (QoS).
let queue = DispatchQueue.global()
doesn't actually create anything; it returns the existing global (system) concurrent queue with the .default QoS.
When you create your own queue, the system will decide on which global queue it will dispatch your execution request. The queue is not an execution engine ...
I find it hard to believe that your code is never executed; that is very unlikely to be true. If it did happen, the trouble must be somewhere in your code that is not part of your question.