I have created a gRPC async client in C++ which makes both streaming and unary requests to a server, using a completion queue.
In the destructor of the client class the Shutdown method of the completion queue is called; I then expected to be able to call Next to drain the queue and obtain the pending tags, but instead the call to Next blocks everything.
The pending tags are needed because they are objects created with new and must be deleted to avoid leaks.
What is the correct way to drain a queue used for an async client?
The completion queue works on a strict 1-in/1-out basis: for every tag put into the queue, exactly one tag comes out, so all pending operations will eventually get their tags returned from Next (even if the RPC gets cancelled).
If Next blocks, it is most likely because there are pending operations that have not yet finished.
You may want to use ClientContext::TryCancel to terminate the calls quickly.
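For reference, the usual drain pattern looks something like the following minimal sketch, assuming every tag handed to the queue is a heap-allocated object of some tag type (here a hypothetical CallData). After Shutdown, Next keeps returning the outstanding tags (with ok set to false for cancelled operations) and only returns false once the queue is fully drained:

```cpp
#include <grpcpp/grpcpp.h>

// Hypothetical per-operation tag type; one is allocated with new per async op.
struct CallData { /* per-call state, e.g. context, response, status */ };

class AsyncClient {
 public:
  ~AsyncClient() {
    // Optionally call TryCancel() on the ClientContext of each in-flight RPC
    // first, so their tags come back quickly instead of waiting on the server.
    cq_.Shutdown();  // No new work may be added to the queue after this.

    void* tag = nullptr;
    bool ok = false;
    // Next returns true for every outstanding tag (ok == false for cancelled
    // or failed operations) and returns false only once the queue is drained.
    while (cq_.Next(&tag, &ok)) {
      delete static_cast<CallData*>(tag);
    }
  }

 private:
  grpc::CompletionQueue cq_;
};
```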
When a polling Lambda reads messages, it keeps them in flight during its execution, and when it finishes it deletes them. Instead, I want the Lambda not to delete the messages but to keep them in flight and pass the receipt handle to an external process. This external process will then use the receipt handle to delete the message when it finishes.
It's difficult to say for sure, because it depends on how you set it up.
If you use an event source mapping (ESM) to automatically invoke Lambda from your SQS queue, then Lambda automatically deletes the messages from the queue when your function finishes successfully:
When your function successfully processes a batch, Lambda deletes its messages from the queue.
The only way to make it not delete the messages upon completion of your function is to crash it:
If the function returns an error, all retries are attempted on the affected messages before Lambda receives additional messages from the same group.
Since purposely erroring out a function is not good practice, I think the best way would be to have a second SQS queue and simply re-broadcast the message to the second process.
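A minimal sketch of the re-broadcast approach, assuming a Python Lambda behind an SQS event source mapping; SECOND_QUEUE_URL is a hypothetical environment variable pointing at the second queue:

```python
import os

import boto3

sqs = boto3.client("sqs")

# Hypothetical: URL of the second queue the external process consumes from.
SECOND_QUEUE_URL = os.environ["SECOND_QUEUE_URL"]

def handler(event, context):
    # With an SQS event source mapping, the batch arrives under "Records".
    for record in event["Records"]:
        # Re-broadcast the payload. The external process receives it from
        # the second queue with its own receipt handle and deletes it there
        # when it finishes.
        sqs.send_message(QueueUrl=SECOND_QUEUE_URL, MessageBody=record["body"])
    # Returning normally lets Lambda delete the batch from the first queue.
```

Note that the receipt handle from the first queue becomes useless once Lambda deletes the original message anyway, which is why the second process gets its own copy (and its own receipt handle) from the second queue.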
I have a task queue which users can push tasks onto; only one task can run at a time, enforced by the queue's concurrency setting. In some cases (e.g. a long-running task) they may wish to cancel a running task in order to free up the queue to process the next task.
To achieve this I have been running the task queue as a Flask application; should a user wish to cancel a task, I call the delete_task method of the Python client library for the given queue and task.
However, I am seeing that the underlying task continues to be processed even after the task has been deleted. I have been trying to find documentation of how Cloud Tasks handles a task being deleted, but haven't found anything concrete.
I was hoping I'd be able to listen for a signal of some sort in order to gracefully shut down the process if a deletion is received, or that the underlying process would be killed if the parent task is deleted.
Has anyone worked with the Cloud Tasks API before? Is it correct to assume that deleting a task will clean up any processes that are running?
I don't see how a worker would be able to find out that the task it is working on has been deleted.
In the eyes of the worker, a task is an incoming HTTP request. I don't know how the queue could tell that specific process to stop. I'm fairly certain that "deleting" a task just removes it from the queue, nothing more.
You'd have to build a custom 'cancel' function that would be able to reach out to this worker.
Or this worker would have to periodically check with the queue to see if its task still exists; a sketch of such a check follows below.
https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/get
https://googleapis.dev/python/cloudtasks/latest/gapic/v2/api.html#google.cloud.tasks_v2.CloudTasksClient.get_task
I'm not actually sure what the queue will return if you call get_task on a deleted task, since I don't see a 'status' property on a task. Maybe it will return an error like 'task does not exist'.
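If you go the polling route, the check might look like this, using the google-cloud-tasks client. Whether a deleted task raises NotFound or comes back in some terminal state is exactly the open question above, so treat the except branch as an assumption to verify:

```python
from google.api_core.exceptions import NotFound
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

def task_still_exists(task_name: str) -> bool:
    """Hypothetical helper the worker calls periodically while processing.

    task_name is the full resource name, e.g.
    "projects/PROJECT/locations/LOCATION/queues/QUEUE/tasks/TASK".
    """
    try:
        client.get_task(name=task_name)
        return True
    except NotFound:
        # Assumption: a deleted task is no longer returned by get_task.
        return False
```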
I want to send a reply after I have persisted and updated the state of the actor using persistAll. Unfortunately, I have not found a callback or onSuccess handler to send back a reply after the last event has been persisted.
This is a shortcoming of the API: there is no built-in way to react to all of a persistAll call's events completing. You will have to keep a counter or a set of completed persists yourself and only trigger your logic when the last persist completes.
As far as I remember this cannot be easily fixed because it would break binary and source compatibility.
In the "next generation" persistent actors (in Akka typed) this works more as you would expect and the side effect you want to execute on successful persist of the events will only execute once, when all the events are complete.
I have an actor processing messages and storing its results via an asynchronous API (ReactiveMongo), i.e. when a computation is completed, the actor asks ReactiveMongo to store the result, and that call is non-blocking.
How can I stop the actor from processing the next messages until the future of the last ReactiveMongo request has completed? The mailbox should still be able to receive incoming messages in the meantime.
Blocking solution
Simple and wrong answer: you can do this by blocking the actor; just call Await.result (or whatever the equivalent is in the language you use).
It is wrong because you should not block inside an actor.
Non-blocking solution
The Master/Worker pattern is a good fit for this problem: http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
Your worker actor sends a "Work Done" message after the ReactiveMongo request's future completes; the master actor then sends the next "Do this work" message to the worker.
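A rough sketch of that shape; the message protocol and the store stub are placeholders for your ReactiveMongo call:

```scala
import akka.actor.{Actor, ActorRef, Props}
import akka.pattern.pipe
import scala.concurrent.Future

// Hypothetical message protocol for illustration.
final case class DoWork(payload: String)
case object WorkDone

class Worker(master: ActorRef) extends Actor {
  import context.dispatcher

  // Stand-in for the non-blocking ReactiveMongo call.
  def store(payload: String): Future[Unit] = Future.successful(())

  def receive: Receive = {
    case DoWork(payload) =>
      // No blocking: when the future completes, WorkDone is piped to the
      // master, which then dispatches the next piece of work.
      store(payload).map(_ => WorkDone).pipeTo(master)
  }
}

class Master extends Actor {
  private val worker = context.actorOf(Props(new Worker(self)))
  private var pending = Vector.empty[DoWork] // buffered work; mailbox stays open

  def receive: Receive = idle

  def idle: Receive = {
    case work: DoWork =>
      worker ! work
      context.become(busy)
  }

  def busy: Receive = {
    case work: DoWork => pending :+= work // keep accepting incoming messages
    case WorkDone =>
      pending match {
        case next +: rest => pending = rest; worker ! next
        case _            => context.become(idle)
      }
  }
}
```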
I want to create a web application where a client calls a REST web service. This returns an OK status to the client (with a link to the result) and creates a new message on an ActiveMQ queue. On the listener side of ActiveMQ there should be workers that process the messages.
I am stuck on the design here, because I don't really know how to determine the number of workers I need. The workers only have to call web service interfaces, so no high computation power is needed for the workers themselves; most of the time a worker is just waiting for results from the called web service. But one worker cannot handle all requests, so if some limit of requests in the queue is exceeded (I don't know the limit yet), another worker should service the queue.
What is the best practice for doing this job? Should I create one worker per request and destroy it when the work is done? How do I dynamically create workers based on the queue size? Is it better to keep these workers running all the time, or to create them when the queue requires it?
I think a topic/subscriber architecture is not reasonable, because only one worker should handle each request. As a rough estimate, imagine 100 requests per minute on average and 500 requests at peak.
My intention is to get results fast, so no client has to wait for its answer just because resources aren't being used properly...
Thank you
Why don't you figure out the maximum number of workers you could realistically support, start that many, and leave them running forever? I'd use a prefetch of either 0 or 1 to avoid piling up a bunch of messages in one worker's prefetch buffer while the others sit idle. Prefetch=0 will pull the next message only when the current one is finished, whereas prefetch=1 will keep a single message sitting "on deck", available to be processed without needing to fetch it from the network; the downside is that a consumer might be free to take a message but can't, because the message is sitting in another consumer's prefetch buffer waiting for that consumer to be ready for it. I'd use prefetch=0 as long as the time to download your messages from the broker isn't unreasonable, since it spreads the workload as evenly as possible.
Then whenever there are messages to be processed, either a worker is available to process the next message (so no delay), or all the workers are busy (so of course you'll have to wait, because you're at capacity, but as soon as a worker becomes available it will take the next message from the queue).
Also, you're right that you want queues (where a message is consumed by only a single worker), not topics (where a message is consumed by every worker).
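As a concrete sketch, a long-running JMS worker with queue prefetch set to 0 might look like this; the broker URL and queue name are placeholders, and jms.prefetchPolicy.queuePrefetch=0 is ActiveMQ's connection-URL option for queue consumers:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class Worker {
    public static void main(String[] args) throws Exception {
        // prefetch=0: the broker dispatches a message to this consumer only
        // when the consumer asks for one, i.e. when it is free.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(
            session.createQueue("work.requests"));

        while (true) {
            // Blocks until the next message is available; no messages pile
            // up in this worker's buffer while other workers sit idle.
            Message message = consumer.receive();
            process(message);
        }
    }

    private static void process(Message message) {
        // ...call the downstream web service and handle the result...
    }
}
```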