What can cause a dask distributed future to have the state 'lost'?

Using a dask distributed cluster, I've noticed that several of the futures of long-running tasks switch from pending to finished, while others switch from pending to lost.
I suspect that some of the lost tasks are still running, as I see dask-worker processes with high CPU usage even though no futures are pending anymore.
What exactly does lost mean here? Can long-running tasks (hours) be classified as lost because they might stop the worker from reporting back to the scheduler? What else could cause the state lost, and how does the scheduler react to it?

This means that for some reason the scheduler no longer has the information necessary to execute this task. Commonly this is due to non-resilient data being lost by a worker going down, such as if you explicitly scatter a piece of data to a single worker and then that worker fails.
>>> future = client.scatter(123)
>>> x = client.submit(f, future)
... worker holding future/123 dies
>>> x.status
'lost'
This is rare in general though. Usually if a worker goes down the scheduler can replicate all of the work for a particular task elsewhere.
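If you do depend on scattered data, one way to guard against this failure mode is to keep more than one copy of the data on the cluster. The following is only a sketch, not part of the original answer; it assumes broadcast scattering on the standard distributed.Client:
>>> future = client.scatter(123, broadcast=True)  # put a copy on every worker
>>> x = client.submit(f, future)                  # losing a single worker no longer loses the data
For futures that already live on the cluster, client.replicate([future], n=2) can be used to the same effect.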
As always, providing a minimal complete verifiable example can help to isolate what's going on in your particular situation.

Related

Notifying a task from multiple other tasks without extra work

My application is futures-based with async/await, and has the following structure within one of its components:
a "manager", which is responsible for starting/stopping/restarting "workers", based both on external input and on the current state of "workers";
a dynamic set of "workers", which perform some continuous work, but may fail or be stopped externally.
A worker is just a spawned task which does some I/O work. Internally it is a loop which is intended to be infinite, but it may exit early due to errors or other reasons, and in this case the worker must be restarted from scratch by the manager.
The manager is implemented as a loop which awaits on several channels, including one returned by async_std::stream::interval, which essentially makes the manager into a poller - and indeed, I need this because I do need to poll some Mutex-protected external state. Based on this state, the manager, among everything else, creates or destroys its workers.
Additionally, the manager stores a set of async_std::task::JoinHandles representing live workers, and it uses these handles to check whether any worker has exited, restarting them if so. (BTW, I currently do this using select(handle, future::ready()), which is totally suboptimal because it relies on an implementation detail of select, specifically that it polls the left future first. I couldn't find a better way of doing it; something like race() would make more sense, but race() consumes both futures, which won't work for me because I don't want to lose the JoinHandle if it is not ready. This is a matter for another question, though.)
You can see that in this design workers can only be restarted when the next poll "tick" in the manager occurs. However, I don't want to use too small an interval for polling, because in most cases polling just wastes CPU cycles. Large intervals, however, can delay restarting a failed/canceled worker by too much, leading to undesired latencies. Therefore, I thought I'd set up another channel of ()s back from each worker to the manager, which I'd add to the main manager loop, so that when a worker stops due to an error or otherwise, it first sends a message to its channel, resulting in the manager being woken up earlier than the next poll in order to restart the worker right away.
Unfortunately, with any kind of channel this might result in more polls than needed if two or more workers stop at approximately the same time (which, due to the nature of my application, is somewhat likely to happen). In such a case it would make sense to run the manager loop only once, handling all of the stopped workers, but with channels it will necessarily result in a number of polls equal to the number of stopped workers, even if the additional polls don't do anything.
Therefore, my question is: how do I notify the manager from its workers that they are finished, without resulting in extra polls in the manager? I've tried the following things:
As explained above, regular unbounded channels just won't work.
I thought that maybe bounded channels could work: if I used a channel with capacity 0, and there was a way to try to send a message into it but simply drop the message if the channel is full (like the offer() method on Java's BlockingQueue), this would seemingly solve the problem. Unfortunately, the channels API, while providing such a method (try_send() appears to be it), also has the property that the capacity is always at least the number of senders, which means it can't really be used for such notifications.
Some kind of atomic or mutex-protected boolean flag also looks as if it could work, but there is no atomic or mutex API that provides a future to wait on, so this would also require polling.
Restructure the manager implementation to include JoinHandles into the main select somehow. It might do the trick, but it would result in large refactoring which I'm unwilling to make at this point. If there is a way to do what I want without this refactoring, I'd like to use that first.
I guess some kind of combination of atomics and channels might work, something like setting an atomic flag and sending a message, and then skipping any extra notifications in the manager based on the flag (which is flipped back to off after processing one notification), but this also seems like a complex approach, and I wonder if anything simpler is possible.
I recommend using the FuturesUnordered type from the futures crate. This collection allows you to push many futures of the same type into it and wait for whichever of them completes first.
It implements Stream, so if you import StreamExt, you can use unordered.next() to obtain a future that completes once any future in the collection completes.
If you also need to wait for a timeout or mutex etc., you can use select to create a future that completes once either the timeout or one of the join handles completes. The future returned by next() implements Unpin, so it is usable with select without problems.

Standard way to wait for all tasks to finish before exiting

I was wondering: is there a straightforward way to wait for all tasks to finish running before exiting, without keeping track of all the ObjectIDs (and get()ing them)? The use case is when I launch a number of remote tasks for saving output, for example, where no return result is needed. It's just extra stuff to keep track of if I have to store those futures.
Currently there is no standard way to block until all tasks have finished.
There are some workarounds that can be used.
Keep track of all of the object IDs in a list object_ids and then call ray.get(object_ids) or ray.wait(object_ids, num_returns=len(object_ids)).
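For instance (a minimal sketch of that first workaround; save_output is a hypothetical remote task with no meaningful return value):
import ray

ray.init()

@ray.remote
def save_output(i):
    pass  # write output somewhere; nothing useful to return

object_ids = [save_output.remote(i) for i in range(100)]

# Block until every task has finished; the dummy results are discarded.
ray.wait(object_ids, num_returns=len(object_ids))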
Loop as long as some resources are being used.
import ray
import time

while (ray.global_state.cluster_resources() !=
       ray.global_state.available_resources()):
    time.sleep(1)
The above code will loop until it detects that no tasks are currently being executed. However, this is not a foolproof approach: it's possible that there is a moment in time when no tasks are running but the scheduler is just about to start one.

Is there a limit to the number of created events?

I'm developing a C++14 Windows DLL on VS2015 that runs on all Windows versions >= XP.
TL;DR
Is there a limit to the number of events, created with CreateEvent, with different names of course?
Background
I'm writing a thread pool class.
The class interface is simple:
void AddTask(std::function<void()> task);
A task is added to a queue of tasks, and waiting workers (a vector<thread>) pick up and run tasks as they become available.
Requirement
Wait (block) for a task for a little bit before continuing with the flow. Meaning, some users of ThreadPool, after calling AddTask, may want to wait for a while (say 1 second) for the task to end before continuing with the flow. If the task is not done yet, they will continue with the flow anyway.
Problem
The ThreadPool class cannot provide a Wait interface; that is not its responsibility.
Solution
ThreadPool will SetEvent when a task is done.
Users of ThreadPool will wait (or not, depending on their needs) for the event to be signaled.
So, I've changed the return value of ThreadPool::AddTask from void to int, where the int is a unique task ID which is essentially the name of the event to be signaled when the task is done.
Question
I don't expect more than ~500 tasks, but I'm afraid that creating hundreds of events is not possible, or is bad practice.
So is there a limit? or a better approach?
Of course there is a limit (if nothing else, at some point the system runs out of memory).
In reality, the limit is around 16 million per process.
You can read more details here: https://blogs.technet.microsoft.com/markrussinovich/2009/09/29/pushing-the-limits-of-windows-handles/
You're asking the wrong question. Fortunately you gave enough background to answer your real question. But before we get to that:
First, if you're asking what's the maximum number of events a process can open or a system can hold, you're probably doing something very very wrong. Same goes for asking what's the maximum number of files a process can open or what's the maximum number of threads a process can create.
You can create 50, 100, 200, 500, 1000... but where does it stop? If you're even considering creating that many of them that you have to ask about a limit, you're on the wrong track.
Second, the answer depends on too many implementation details: OS version, amount of RAM installed, registry settings, and maybe more. Other programs running also affect that "limit".
Third, even if you knew the limit - even if you could somehow calculate it at runtime based on all the relevant factors - it wouldn't allow you to do anything that you can't already do now.
Let's say you find out the limit is L and you have created exactly L events by now. Another task comes in. What do you do? Throw away the task? Execute the task without signaling an event? Wait until there are fewer than L events and only then create an event and start executing the task? Crash the process?
Whatever you decide you can do it just the same when CreateEvent fails. All of this is completely pointless. And this is yet another indication that you're asking the wrong question.
But maybe the biggest mistake you're making is saying "the thread pool class can't provide wait because it's not its responsibility, so let's have the thread pool class provide an event for each task, which the thread pool will signal when the task ends" (in paraphrase).
It looks like by the end of the sentence you forgot the premise from the beginning: It's not the thread pool's responsibility!
If you want to wait for the task to finish, have the task itself signal when it's done. There's no reason to complicate the thread pool because someone, sometimes, wants to wait on tasks. Signaling that the task is done is the task's job:
event evt;                  ///// this
thread_pool.queue([evt] {
    // whatever
    evt.signal();           ///// and this
});

auto reason = wait(evt, 1s);
if (reason == timeout) {
    log("bummer");
}
The event class could be anything you want: a Windows event, an std::promise and std::future pair, or anything else.
This is so simple and obvious.
Complicating the thread pool infrastructure, taking up valuable system resources for nothing, and signaling synchronization primitives even when no one's listening just to save the two marked code lines above in the few cases where you actually want to wait for the task is unjustifiable.

What happens to running processes on a continuous Azure WebJob when website is redeployed?

I've read about graceful shutdowns using the WEBJOBS_SHUTDOWN_FILE and using cancellation tokens, so I understand the premise of graceful shutdowns; however, I'm not sure how they will affect WebJobs that are in the middle of processing a queue message.
So here's the scenario:
I have a WebJob with functions listening to queues.
Message is added to Queue and job begins processing.
While processing, someone pushes to develop, triggering a redeploy.
Assuming I have my WebJobs hooked up to deploy on git pushes, this deploy will also trigger the WebJobs to be updated, which (as far as I understand) will kick off some sort of shutdown workflow in the jobs. So I have a few questions stemming from that.
Will jobs in the middle of processing a queue message finish processing the message before the job quits? Or is any shutdown notification essentially treated as "this bitch is about to shut down. If you don't have anything to handle it, you're SOL."
If we are SOL, is our best option for handling shutdowns essentially to wrap anything you're doing in the equivalent of DB transactions and implement your shutdown handler in such a way that all changes are rolled back on shutdown?
If a queue message is in the middle of being processed and the WebJob shuts down, will that message be requeued? If not, does that mean that my shutdown handler needs to handle requeuing that message?
Is it possible for functions listening to queues to grab any more queue messages after the Job has been notified that it needs to shutdown?
Any guidance here is greatly appreciated! Also, if anyone has any other useful links on how to handle job shutdowns besides the ones I mentioned, it would be great if you could share those.
After no small amount of testing, I think I've found the answers to my questions and I hope someone else can gain some insight from my experience.
NOTE: All of these scenarios were tested using .NET Console Apps and Azure queues, so I'm not sure how blobs or table storage, or different types of Job file types, would handle these different scenarios.
After a job has been marked to exit, the triggered functions that are running get a configured grace period (5 seconds by default, though I think that is configurable via a settings.job file) to finish before they are exited. If they do not finish within the grace period, the function quits. Main() (or wherever you declared host.RunAndBlock()), however, will keep running any code after host.RunAndBlock() for up to the amount of time remaining in the grace period (I'm not sure how that would work if you used an infinite loop instead of RunAndBlock). As far as handling the quit in your functions, you can essentially "listen" to the CancellationToken that you can pass in to your triggered functions, check IsCancellationRequested, and handle it accordingly. Also, you are not SOL if you don't handle the quits yourself. Huzzah! See point #3.
While you are not SOL if you don't handle the quit (see point #3), I do think it is a good idea to wrap all of your jobs in transactions that you won't commit until you're absolutely sure the job has run its course. This way, if your function exits mid-process, you'll be less likely to have to worry about corrupted data. I can think of a couple of scenarios where you might want to commit transactions as they pass (batch jobs, for instance); however, you would need to structure your data or logic so that previously processed entities aren't reprocessed after the job restarts.
You are not in trouble if you don't handle job quits yourself. My understanding of what's going on under the covers is virtually non-existent; however, I am quite sure of the results. If a function is in the middle of processing a queue message and is forced to quit before it can finish, HAVE NO FEAR! When the job grabs the message to process, it essentially hides it on the queue for a certain amount of time (the visibility timeout). If your function quits while processing the message, that message will "become visible" again after that time, and it will be re-grabbed and run against the potentially updated code that was just deployed.
So I have about 90% confidence in my findings for #4, and I say that because testing it involved quick-switching between windows while not being totally sure what was going on with certain pieces. But here's what I found: on the off chance that a queue has a new message added to it during the grace period before a job quits, I THINK one of two things can happen. If the function doesn't poll that queue before the job quits, the message will stay on the queue and be grabbed when the job restarts. However, if the function DOES grab the message, it will be treated the same as any other message that was interrupted: it will "become visible" on the queue again and be rerun upon the restart of the job.
That pretty much sums it up. I hope other people will find this useful. Let me know if you want any of this expounded on and I'll be happy to try. Or if I'm full of it and you have lots of corrections, those are probably more welcome!

How to set up a ZeroMQ architecture to deal with workers of different speeds

[As a small bit of context: I am new to networking and ZeroMQ, but I did spend quite a bit of time on the guide and the examples.]
I have the following challenge (done in C++, but irrelevant to the question). I have a single source that generates tasks. I have multiple engines that need to process those tasks, and send back the result.
First attempt:
I created a client with a ZMQ_PUSH socket. The engines have a ZMQ_PULL socket. To get the answers back to the client, I created the reverse: a ZMQ_PUSH on the workers and a ZMQ_PULL on the client. It worked out of the box, only for me to find out that after some time the client ran out of memory, since I was pushing far more requests than the workers could process. I need some backpressure.
Second attempt:
I added a counter on the client that took care of only pushing when no more than, say, 1000 tasks were 'in progress'. The out-of-memory issue was solved, since I never had more than 1000 'in progress' tasks. But some workers were slower than others. Since PUSH/PULL uses fair queueing, the amount of work for the slow worker kept increasing and increasing, until the slowest worker had all 1000 requests queued and the others were starved. I was not using my workers effectively.
Now, what architecture could I use that solves the issue of 'workers with different speeds'? Is the 'count the number of in-progress tasks' approach a good way of balancing the number of pushed requests? Or is there a way I can PUSH tasks to the workers and have the push block at a predefined point? Can I do that with the HWM (high-water mark)?
I am sure this problem is of such a generic nature that I should be able to easily deal with this. Can anyone point me in the right direction?
Thanks!
We used the Paranoid Pirate Protocol (http://rfc.zeromq.org/spec:6), but in the case of many very small jobs, where the overhead of communication might be high, a credit-based flow control pattern might be more efficient (http://unprotocols.org/blog:15).
In both cases it is necessary for the requester to assign jobs directly to individual workers. This is abstracted away, of course, and, depending on the use case, could be made available as a synchronous call that returns when all tasks have been processed.
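The simplest form of credit-based flow control is to have each worker explicitly ask for work, so the source only ever sends a task to a worker that has spare capacity. Below is a minimal sketch in Python with pyzmq (the question's C++ would follow the same shape); do_work and handle_result are placeholders supplied by the caller, and the endpoint address is made up:
import zmq

def source_loop(tasks, handle_result):
    """Hand out one task per outstanding worker "credit"."""
    source = zmq.Context.instance().socket(zmq.ROUTER)
    source.bind("tcp://*:5557")
    while tasks:
        # ROUTER prepends the worker's identity; the payload is either the
        # initial b"READY" or the result of that worker's previous task.
        ident, payload = source.recv_multipart()
        if payload != b"READY":
            handle_result(payload)
        source.send_multipart([ident, tasks.pop(0)])

def worker_loop(do_work):
    """Ask for a task, process it, and ask for the next one."""
    worker = zmq.Context.instance().socket(zmq.DEALER)
    worker.connect("tcp://localhost:5557")
    worker.send(b"READY")                  # one message in flight = one credit
    while True:
        task = worker.recv()
        worker.send(do_work(task))         # the result doubles as the next credit
Because a worker only ever has one task outstanding, a slow worker never accumulates a private backlog. A real implementation would also drain the final results and tell idle workers to stop; the Paranoid Pirate pattern layers heartbeating and worker expiry on top of exactly this kind of explicit worker queue.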