I'm new to the Microsoft Concurrency Runtime (and to asynchronous programming in general), and I'm trying to understand what can and can't be done with it. Is it possible to create a task group such that the tasks execute in the order in which they were added, and no task starts until the previous one ends?
I'm trying to understand whether there's a more general, decentralized way of dealing with tasks than chaining several tasks within a single member function. For example, say I have a program that creates resources at different points, and the order in which the resources are allocated matters. Could any resource allocation function that is called simply append a task to the end of a central task list, with the result that the tasks execute in the order in which they were added (i.e. the order in which the resource allocation functions were called)?
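To illustrate, a central chain built with PPL's task::then might look something like this rough sketch (the class and names are placeholders I'm imagining, not an existing API):

```cpp
#include <ppltasks.h>
#include <functional>
#include <mutex>

// Sketch of the "central task list" idea: every resource-allocation call
// appends a continuation to a shared tail task, so the work runs strictly
// in the order the calls were made.
class sequential_task_chain {
public:
    void append(std::function<void()> work) {
        std::lock_guard<std::mutex> lock(m_);
        // Each new task starts only after the previous one has finished.
        tail_ = tail_.then(std::move(work));
    }

    void wait() { tail_.wait(); }

private:
    std::mutex m_;
    concurrency::task<void> tail_ = concurrency::create_task([] {});
};
```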
Thanks,
RobertF
I'm not sure I understand what you're trying to achieve, but are you looking for the Agent or Actor model?
You post messages to an Async Agent and it processes them. It can then send messages to other agents.
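As a rough sketch (the names below are invented for illustration), a single agent draining a message buffer processes messages one at a time, in the order they were posted:

```cpp
#include <agents.h>
#include <iostream>

// One agent consumes messages from a buffer; because a single agent drains
// the buffer, messages are handled sequentially, in arrival order.
class resource_agent : public concurrency::agent {
public:
    explicit resource_agent(concurrency::ISource<int>& source) : _source(source) {}

protected:
    void run() override {
        for (;;) {
            int request = concurrency::receive(_source);
            if (request < 0) break;                 // sentinel value: stop
            std::cout << "allocating resource " << request << '\n';
        }
        done();                                     // mark the agent as finished
    }

private:
    concurrency::ISource<int>& _source;
};

int main() {
    concurrency::unbounded_buffer<int> requests;
    resource_agent worker(requests);
    worker.start();

    for (int i = 0; i < 3; ++i)
        concurrency::send(requests, i);             // post work to the agent
    concurrency::send(requests, -1);                // ask it to stop

    concurrency::agent::wait(&worker);
}
```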
Good day!
I'm facing the challenge of writing a function that allocates agents to SelectOutputOut blocks. Considering the various scenarios of using if...else statements in the function, I understand that all possibilities must be covered (as suggested here).
However, the problem is that I don't want the agent to leave the function before it gets an appropriate SelectOutputOut block. This situation may occur if there are not enough resources in any of the Service blocks (Network1, Network2 or Network3). In that case, it is necessary to wait until some Service block has enough resources to serve the agent. For this purpose, I tried to use a while loop, but it doesn't help.
The questions are:
How do I write the if-else statements so that the agent waits until some Service block has enough resources?
Does the Select function monitor parameters that are outside of it? In other words, does it know about the states of the Service blocks during its execution?
Thank you.
What you need to do is have your agents wait in the queue, and then have a function that removes them from the queue and sends them to the correct Service block. The best way to do this is with an Enter block that you can send them to.
See example below
You then need to call this function in the 'On enter' code of the queue as well as the 'On exit' code of the Service blocks, to ensure you are always sending new agents when there is space.
The author of asio, Christopher Kohlhoff, is working on a library and proposal for executors in C++. His work so far includes this repo and docs. Unfortunately, the rationale portion has yet to be written. So far, the docs give a few examples of what the library does, but I feel like I'm missing something. Somehow this is more than a family of fancy invoker functions.
Everything I can find on Google is very Java-specific, and a lot of it is particular to specific frameworks, so I'm having trouble figuring out what this "executor pattern" is all about.
What are executors in this context? What do they do? What are the canonical examples of when they would be helpful? What variations exist among executors? What are the alternatives to executors and how do they compare? In particular, there seems to be a lot of overlap with an event loop where the events are initial input events, execution events, and a shutdown event.
When trying to figure out new abstractions I usually find understanding the motivation key. So for executors, what are we trying to abstract and why? What are we trying to make generic? Without executors, what extra work would we have to do?
The most basic benefit of executors is separating the definition of a program's parallelism from how it's used. Java's executor model exists because, by and large, you don't actually know, when you're first writing code, what parallelism model is best for your scenario. You might have little to gain from parallelism and shouldn't use threads at all; you might do best with a long-running dedicated worker thread per core, or with a dynamically scaling pool of threads (based on current load) that cleans up threads after they've been idle a while to reduce memory usage, context switches, etc.; or maybe just launching a thread for every task on demand, exiting when the task is done.
The key here is it's nigh impossible to know which approach is best when you're first writing code. You may know where parallelism might help you, but in traditional threading, you end up intermingling the parallelism "configuration" (when and whether to create threads) with the use of parallelism (determining which functions to call with what arguments). When you do mix the code like this, it's a royal pain to do performance testing of different options, because each and every thread launch is independent, and must be updated separately.
The main benefit of the executor model is that the parallelism configuration is done in one place (where the executor is created), and the users of that executor don't have to know anything about it. They just submit work to the executor, receive a future, and at some later point, retrieve the result (blocking if necessary) from the future. If you want to experiment with other configurations, you change the one line defining the executor and run your code again. Even if you decide you need to use different parallelism models for different sections of your code, refactoring to add a second executor and change some of the users of the first executor to use the second is easy compared to manually rewriting the threading details of every site; as long as the executor's name is (relatively) unique, finding users and changing them to use a different one is pretty easy. Executors both simplify your code (by avoiding intermingling thread creation/management with the tasks the threads do) and simplify performance testing.
As a side benefit, you also abstract away the complexities of transferring data into and out of a worker thread (the submit method encapsulates the former, the future's result method encapsulates the latter). std::async gets you some of this benefit, but with no real control over the parallelism involved (just a yes/no/maybe choice of whether to force a thread, force deferred execution in the current thread, or let the compiler/library decide, with no fine-grained control over whether a thread pool is used, and if so, how it behaves). A true executor framework gives you the control std::async fails to provide, with similar ease of use.
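To make the pattern concrete, here is a minimal hand-rolled sketch (not Kohlhoff's proposed library, just an illustration of the workflow described above): the parallelism configuration lives entirely in the constructor, and users only ever call submit() and read a future.

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal fixed-size thread-pool executor. How many threads exist is decided
// in exactly one place (the constructor); callers never touch threads.
class thread_pool_executor {
public:
    explicit thread_pool_executor(std::size_t threads) {
        for (std::size_t i = 0; i < threads; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }

    ~thread_pool_executor() {
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    // Submit work, get a future back.
    template <class F>
    auto submit(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.emplace([task] { (*task)(); });
        }
        cv_.notify_one();
        return fut;
    }

private:
    void worker_loop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;
                job = std::move(queue_.front());
                queue_.pop();
            }
            job();
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> queue_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};

int main() {
    // Changing this one line changes the whole program's parallelism model.
    thread_pool_executor exec(4);
    auto result = exec.submit([] { return 42; });
    return result.get() == 42 ? 0 : 1;   // get() blocks if necessary
}
```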
We use a PPL Concurrency::TaskScheduler to dispatch events from our media pipeline to subscribed clients (typically a GUI app).
These events are C++ lambdas passed to Concurrency::TaskScheduler::ScheduleTask().
But, under load, the pipeline can generate events at a greater rate than the client can consume them.
Is there a PPL strategy I can use to cause the event dispatcher to not queue an event (in reality, a scheduled task) if the 'queue' of scheduled tasks is greater than N? And if not, how would I roll my own?
Looking at the API, it appears that there's no way to know whether the scheduler is under heavy load, nor is there a way to tell it how to behave in such circumstances. My understanding is that while it is possible to set limits on how many concurrent threads may run within a scheduler using policies, the protocol by which the scheduler may accept or refuse new tasks isn't clear to me.
My bet is that you will have to implement that mechanism yourself, by counting how many tasks are already in the scheduler, and keeping a size-limited queue ahead of the scheduler that helps you regulate the flow of incoming tasks.
I suppose that you could use a simple std::queue for your lambdas, and each time you have a new event, you check how many tasks are running, and add as many from the queue as possible to reach your max running task count.
If the queue is still full after that, then you refuse the new task.
To handle the running-task accounting, you could wrap your tasks with a function that decrements the counter at completion time (use a mutex to avoid races), and increment the counter when scheduling a new task.
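A minimal sketch of that counting approach (the class and names are illustrative only, and it uses PPL's create_task rather than your ScheduleTask call): a size-limited front queue plus a running-task counter, refusing events when the queue is full.

```cpp
#include <ppltasks.h>
#include <functional>
#include <mutex>
#include <queue>

class bounded_dispatcher {
public:
    bounded_dispatcher(std::size_t max_running, std::size_t max_queued)
        : max_running_(max_running), max_queued_(max_queued) {}

    // Returns false if the event had to be refused because the queue is full.
    bool post(std::function<void()> work) {
        std::lock_guard<std::mutex> lock(m_);
        if (pending_.size() >= max_queued_)
            return false;                        // client can't keep up: drop
        pending_.push(std::move(work));
        pump_locked();
        return true;
    }

private:
    // Start queued work while we are below the running-task limit.
    // Assumes m_ is already held by the caller.
    void pump_locked() {
        while (running_ < max_running_ && !pending_.empty()) {
            auto work = std::move(pending_.front());
            pending_.pop();
            ++running_;
            concurrency::create_task([this, work] {
                work();
                std::lock_guard<std::mutex> lock(m_);
                --running_;                      // task finished: make room
                pump_locked();
            });
        }
    }

    std::size_t max_running_, max_queued_;
    std::size_t running_ = 0;
    std::queue<std::function<void()>> pending_;
    std::mutex m_;
};
```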
I'm improving an application (Win64, C++) by making it more asynchronous. I'm using the Concurrency Runtime and it's worked great for me so far.
The application basically executes a number of 'jobs' that transform data. To track what each job does, certain subsystems are instrumented with code to record the operations the job performs. Previously this used a single global variable representing the currently executing job, so tracking information could be registered without passing context information all the way down the calling chain. Each job may also in turn use ConcRT to parallelize the job itself. This all works quite well.
Now though, I am refactoring the application so that we can execute the top-level jobs in parallel. Each job is executed as a ConcRT task, and this works well for all jobs except those which need tracking.
What I basically need is a way to associate some context information with a Task, and have that flow to any other tasks spawned by that task. Basically, I need "Task Local" variables.
With ConcRT we can't simply use thread locals to store the context information, since the job may spawn other jobs using ConcRT and these will execute on any number of threads.
My current approach involves creating a number of Scheduler instances at startup and spawning each job in a scheduler dedicated to that job. I can then use the Concurrency::CurrentScheduler::Id() function to retrieve an integer ID which I can use as a key to look up the context. This works, but single-stepping through Concurrency::CurrentScheduler::Id() in assembly makes me wince somewhat, since it performs multiple virtual function calls and safety checks, which adds quite a lot of overhead. That's a problem because this lookup needs to be done at an extremely high rate in some cases.
So - is there some better way to accomplish this? I would have loved to have a first-class TaskLocal/userdata mechanism which allowed me to associate a single context pointer with the current Scheduler/SchedulerGroup/Task which I could retrieve with very little overhead.
A hook which is called whenever a ConcRT thread grabs a new task would be my ideal, as I could then retrieve the Scheduler/ScheduleGroup ID and store it in a thread local for minimal access overhead. Alas, I can't see any way to register such a hook and it doesn't seem to be possible to implement custom Scheduler classes for PPL/agents (see this article).
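For reference, the lookup I'm doing today is roughly the following (a simplified sketch; the map and names are placeholders):

```cpp
#include <concrt.h>
#include <map>
#include <mutex>

struct job_context;                      // whatever tracking data a job carries

std::map<unsigned int, job_context*> g_context_by_scheduler;
std::mutex g_context_mutex;

job_context* current_job_context() {
    // The hot call: every tracking operation needs this lookup.
    unsigned int id = Concurrency::CurrentScheduler::Id();
    std::lock_guard<std::mutex> lock(g_context_mutex);
    auto it = g_context_by_scheduler.find(id);
    return it != g_context_by_scheduler.end() ? it->second : nullptr;
}
```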
Is there some reason that you can't pass some sort of context object to these tasks that gives them an interface for updating their status? Because from where I'm standing, it sounds like you have a really bad problem with Singletons (aka global variables), one that should be solved with dependency injection.
If dependency injection isn't an option, there is another strategy for dealing with Singletons. That strategy is basically allowing the Singleton to be a 'stack'. You can 'push' a new value to the Singleton, and then everybody who accesses it gets this new value. And then you can 'pop' the value back off and the value before pushing is restored. This does not have to be directly modeled with an actual stack, which is why I put the words 'push', 'pop' and 'stack' in quotes.
You can adapt this model to your circumstance by having a thread-local Singleton that is initialized with the value (not the whole stack of values, just the top value) of the parent thread's version of this variable. Then, if a new context is required for this thread and its children, you can push a new value onto the thread-local Singleton.
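A minimal sketch of that thread-local 'stack' idea (all names here are invented for illustration); the worker that picks up a child task seeds its own stack from the parent's top value:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct job_context {
    std::string job_name;                // whatever tracking data a job needs
};

class current_job {
public:
    // Called when a worker picks up a child task: seed from the parent's top value.
    static void inherit(const job_context& parent_top) { stack().push_back(parent_top); }

    static void push(job_context ctx) { stack().push_back(std::move(ctx)); }
    static void pop()                 { stack().pop_back(); }

    static const job_context& get() {
        assert(!stack().empty());
        return stack().back();
    }

private:
    static std::vector<job_context>& stack() {
        thread_local std::vector<job_context> s;   // the per-thread "Singleton"
        return s;
    }
};
```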
I am comparing a task queue/thread pool pattern system to an n-threads system in D. I'm really new to the D programming language but have worked with threads in C, Java, and Python before. I'm using the Tango library, and I'm building a webserver as an example.
I decided to use tango.core.ThreadPool as my thread pool, as my project is focused on ease of use and performance between traditional threading and task queues.
The documentation shows that I have 3 options:
ThreadPool.wait() - Blocks the current thread while the pool consumes tasks from the queue.
ThreadPool.shutdown() - Finishes the tasks in the pool but not the ones in the queue.
ThreadPool.finish() - Finishes all tasks in the pool and queue, but then accepts no more.
None of these things are what I want. It is my understanding that your list of tasks should be able to grow in these systems. The web server is very simple and naïve; I just want it to try its best at scaling to many concurrent requests, even if its resource management only consists of consuming things in the task queue as quickly as possible.
I suspect the problem is that the main thread needs to join the other threads, but I'm a bit rusty on my threading knowledge.
What about void append(JobD job, Args args)? From the docs it works like Executor.execute(Runnable) from Java (submit a task to be run some time in the future).
Note that it is a LIFO queue here instead of the expected FIFO queue, so allocate enough workers.
I discovered that the way I was constructing my delegate contributed to blocking in some part of the code. Instead of closing over the object returned by SocketServer.accept, I now pass that object as a parameter to my delegate. I don't know why this was the solution, but the program now works as expected. I heard that closures in D version 1 are broken; maybe this has something to do with it.