Dispatch queue clear explanation - swift3

I know there are already a lot of posts about dispatch queues, async tasks, etc., but I can't extract a useful explanation from them, because the extra code adds too many distractions. Is there someone who can give me clear instructions on how to make task B start after task A has finished?
I need some data from Task A in order to run Task B successfully and I know that I have to do something with DispatchQueue.async, but I don't know how exactly.

The typical process would be to dispatch asynchronously with async to some serial queue. So, let's say you want a queue for processing images that does task A and then task B, followed by some UI updates when task B is done. You might do:
let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".images")

queue.async {
    // do task A
}

queue.async {
    // do task B
}

queue.async {
    // do whatever else is needed after B here

    DispatchQueue.main.async {
        // update model objects and UI here
    }
}
This is a pattern that avoids blocking the main queue, but lets you make sure that you do A and B serially.
Please note that if either task A or task B is, itself, asynchronous, the above won't work. (Nor would trying to use sync, if the underlying task were asynchronous.) Other patterns would apply in these cases. But your example is too generic and there are simply too many other possible patterns for us to enumerate them all. If you tell us specifically what tasks A and B are doing, we can offer more constructive counsel.
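For example, if task A were itself asynchronous with a completion handler, one such pattern is to start B from inside A's completion handler. A minimal sketch, where taskA and taskB are hypothetical stand-ins:

import Foundation

// Hypothetical asynchronous task: does its work off the current thread and
// calls its completion handler when that work has finished.
func taskA(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        let result = "data from A" // stand-in for real work
        completion(result)
    }
}

func taskB(input: String) {
    print("task B running with:", input)
}

// B is started from A's completion handler, so it cannot begin until A's
// asynchronous work has actually finished.
taskA { resultFromA in
    taskB(input: resultFromA)
}

Thread.sleep(forTimeInterval: 1) // keep this demo process alive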
Also note that I'd explicitly advise against dispatching synchronously (with sync). Using sync has a certain intuitive appeal, but it is rarely the right approach. Blocking the calling thread (which is what sync does) largely defeats the purpose of having a dispatch queue in the first place. Practically the only reason to use sync is when you're trying to get thread-safe access to some shared resource. But most of the time you use dispatch queues explicitly to get some time-consuming task off the current thread. So dispatch A and B async to a serial queue, and if you wanted to do something else, C, afterwards, you'd dispatch that async to the same queue, too.
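To illustrate that one legitimate use of sync, here is a minimal sketch of a serial queue guarding a shared counter (the label and names are illustrative):

import Foundation

// A serial queue used purely to serialize access to a shared resource.
let syncQueue = DispatchQueue(label: "com.example.counter.sync")

var count = 0 // only ever touched on syncQueue

func increment() {
    syncQueue.async { count += 1 } // writers don't need to block the caller
}

func currentCount() -> Int {
    return syncQueue.sync { count } // sync here, because the caller needs the value back
}

increment()
increment()
print(currentCount()) // 2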
For a description see Concurrency Programming Guide: Dispatch Queues. The examples are in Objective-C, but all the concepts are the same. You can also go to WWDC videos and search for "GCD", and you'll get a number of great videos that walk through Grand Central Dispatch (the broader term for dispatch queue technologies).

How about something like this?
import Dispatch
let queue = DispatchQueue(label: "My dispatch queue") //TODO: Give better label
let result1 = queue.sync { // "Task A"
    return "result 1"
}

let result2 = queue.sync { // "Task B", which uses the result from Task A
    return result1.uppercased()
}
print(result2)

Related

Implementing a custom async task type and await

I am developing a C++ app in which I need to receive messages from an MQ and then parse them according to their type, and for a particular reason I want to make this process (receiving a single message followed by processing it) asynchronous. Since I want to keep things as simple as possible, in a way that the next developer would have no problem continuing the code, I have written a very small class to implement asynchrony.
I first start a new thread and pass a function to it:
task = new thread([&] {
    result = fn();
    isCompleted = true;
});
task->detach();
and in order to await the task I do the following:
while (!isCompleted && !(*cancelationToken))
{
    Sleep(5);
}
state = 1; // marking the task as completed
So far there have been no problems and I have not faced any bugs or errors, but I am not sure whether this is "a good way to do it", and my question is focused on determining that.
Read about std::future and std::async.
If your task runs on another core or processor, the variable isCompleted may become unsynchronized, with two copies in core caches, so you may be waiting longer than needed. At the very least it should be a std::atomic<bool>.
If you have to wait for something it is better to use a semaphore.
As said in comments, using standard methods is better anyway.
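For instance, a minimal sketch of the std::async approach (process_next_message is a hypothetical stand-in for the question's fn):

#include <future>
#include <iostream>
#include <string>

// Hypothetical stand-in for the question's fn(): receive and parse one message.
std::string process_next_message()
{
    return "parsed message";
}

int main()
{
    // std::async runs the callable on another thread and returns a std::future;
    // the thread, the completion flag and the synchronization are managed for you.
    std::future<std::string> task =
        std::async(std::launch::async, process_next_message);

    // ... do other work here ...

    // get() blocks until the result is ready: no Sleep() polling loop and
    // no hand-rolled isCompleted flag.
    std::string result = task.get();
    std::cout << result << '\n';
}

Note that std::future has no built-in cancellation, so if you need the question's cancelationToken behaviour you would still layer that on top (e.g. an atomic flag that the task checks periodically).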

Reading all available messages from mpsc UnboundedReceiver without blocking unnecessarily

I have a futures::sync::mpsc::unbounded channel. I can send messages to the UnboundedSender<T> but have problems receiving them from the UnboundedReceiver<T>.
I use the channel to send messages to the UI thread, and I have a function that gets called every frame, and I'd like to read all the available messages from the channel on each frame, without blocking the thread when there are no available messages.
From what I've read, the Future::poll method is kind of what I need: I just poll, and if I get Async::Ready, I do something with the message; if not, I just return from the function.
The problem is that poll panics when there is no task context (I'm not sure what that means or what to do about it).
What I tried:
let (sender, receiver) = unbounded(); // somewhere in the code, doesn't matter
// ...
let fut = match receiver.by_ref().collect().poll() {
    Async::Ready(items_vec) => // do something on UI with items,
    _ => return None
}
this panics because I don't have a task context.
Also tried:
let (sender, receiver) = unbounded(); // somewhere in the code, doesn't matter
// ...
let fut = receiver.by_ref().collect(); // how do I run the future?
tokio::runtime::current_thread::Runtime::new().unwrap().block_on(fut); // this blocks the thread when there are no items in the receiver
I would like help with reading the UnboundedReceiver<T> without blocking the thread when there are no items in the stream (just do nothing then).
Thanks!
You are using futures incorrectly -- you need a Runtime and a bit more boilerplate to get this to work:
extern crate tokio;
extern crate futures;

use tokio::prelude::*;
use futures::future::{lazy, ok};
use futures::sync::mpsc::unbounded;
use tokio::runtime::Runtime;

fn main() {
    let (sender, receiver) = unbounded::<i64>();

    let receiver = receiver.for_each(|result| {
        println!("Got: {}", result);
        Ok(())
    });

    let rt = Runtime::new().unwrap();
    rt.executor().spawn(receiver);

    let lazy_future = lazy(move || {
        sender.unbounded_send(1).unwrap();
        sender.unbounded_send(2).unwrap();
        sender.unbounded_send(3).unwrap();

        ok::<(), ()>(())
    });

    rt.block_on_all(lazy_future).unwrap();
}
Further reading, from Tokio's runtime model:
[...] in order to use Tokio and successfully execute tasks, an application must start an executor and the necessary drivers for the resources that the application’s tasks depend on. This requires significant boilerplate. To manage the boilerplate, Tokio offers a couple of runtime options. A runtime is an executor bundled with all necessary drivers to power Tokio’s resources. Instead of managing all the various Tokio components individually, a runtime is created and started in a single call.
Tokio offers a concurrent runtime and a single-threaded runtime. The concurrent runtime is backed by a multi-threaded, work-stealing executor. The single-threaded runtime executes all tasks and drivers on the current thread. The user may pick the runtime with characteristics best suited for the application.

Global vs User Queue

I want to have a UI button press trigger a block of code, so I created a queue and dispatched a block to it async, but I'm not seeing the block start in a reasonable amount of time.
minimized example:
class InterfaceController: WKInterfaceController {
    ...
    let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)

    @IBAction func on_press() {
        print("Touch")
        queue.async {
            // Stuff
        }
    }
}
So I see the "Touch" line in the console, but nothing from the async block happens.
Odd thing is, if I use let queue = DispatchQueue.global() instead, it seems to work as desired. So what is the operational difference between making my own queue, and using the global one here? I would have expected my QoS to give it some CPU time.
So what is the operational difference between making my own queue, and using the global one here?
let queue = DispatchQueue(label: "unique_label", qos: .userInteractive)
creates a serial queue with high priority.
let queue = DispatchQueue.global()
doesn't actually create anything; it returns the global (system) concurrent queue with the .default QoS.
When you create your own queue, the system decides which global queue your execution request is dispatched on. The queue is not an execution engine ...
I find it hard to believe that your code never executes; that is very unlikely to be true. If it does happen, the trouble must be somewhere in code that is not part of your question.
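To see the difference, here is a minimal, self-contained sketch (not your WatchKit code) contrasting the two queues:

import Foundation

// A queue created with DispatchQueue(label:) is serial: its blocks run
// one at a time, in FIFO order.
let serialQueue = DispatchQueue(label: "unique_label", qos: .userInteractive)

// The global queue is concurrent: its blocks may run in parallel.
let globalQueue = DispatchQueue.global()

serialQueue.async { print("serial 1") }
serialQueue.async { print("serial 2") } // always prints after "serial 1"

globalQueue.async { print("global A") }
globalQueue.async { print("global B") } // may interleave with "global A"

Thread.sleep(forTimeInterval: 1) // keep this demo process alive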

Play Framework 2.4 Sequential run of multiple Promises

I have got a Play 2.4 (Java-based) application with some background Akka tasks implemented as functions returning Promise.
Task1 downloads bank statements via bank Rest API.
Task2 processes the statements and pairs them with customers.
Task3 does some other processing.
Task2 cannot run before Task1 finishes its work. Task3 cannot run before Task2. I was trying to run them through a sequence of Promise.map() calls like this:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements().map(
        result -> bankProc.processBankStatements().map(
            _result -> accounting.checkCustomersBalance()));
}
I was under the impression that the first map would wait until Task1 is done and then call Task2, and so on. When I look into the application (the tasks write some debug info into the log), I can see that the tasks are running in parallel.
I also tried Promise.flatMap() and Promise.sequence(), with no luck. The tasks always run in parallel.
I know that Play is non-blocking by nature, but in this situation I really need to do things in the right order.
Is there any general practice on how to run multiple Promises in selected order?
You're nesting the second call to map, which means what's happening here is
processBankStatements
checkCustomerBalance
downloadBankStatements
Instead, you need to chain them:
protected F.Promise run() throws WebServiceException {
    return bankAPI.downloadBankStatements()
        .map(statements -> bankProc.processBankStatements())
        .map(processedStatements -> accounting.checkCustomersBalance());
}
I notice you're not using result or _result (which I've renamed for clarity) - is that intentional?
Alright, I found a solution. The correct answer is:
If you are chaining multiple Promises the way I do, meaning that from each map() you return another Promise and so on, you should follow these rules:
If you are returning non-futures from mapping, just use map()
If you are returning more futures from mapping, you should use flatMap()
The correct code snippet for my case is then:
return bankAPI.downloadBankStatements().flatMap(result -> {
    return bankProc.processBankStatements().flatMap(_result -> {
        return accounting.checkCustomersBalance().map(__result -> {
            return null;
        });
    });
});
This solution was suggested to me a long time ago, but it was not working at first. The problem was that I had a hidden Promise.map() inside the downloadBankStatements() function, so the chain of flatMaps was broken in this case.

Asynchronous network calls

I made a class that has an asynchronous OpenWebPage() function. Once you call OpenWebPage(someUrl), a handler gets called - OnPageLoad(reply). I have been using a global variable called lastAction to take care of things once a page is loaded - the handler checks what lastAction is and calls the appropriate function. For example:
this->lastAction == "homepage";
this->OpenWebPage("http://www.hardwarebase.net");
void OnPageLoad(reply)
{
if(this->lastAction == "homepage")
{
this->lastAction = "login";
this->Login(); // POSTs a form and OnPageLoad gets called again
}
else if(this->lastAction == "login")
{
this->PostLogin(); // Checks did we log in properly, sets lastAction as new topic and goes to new topic URL
}
else if(this->lastAction == "new topic")
{
this->WriteTopic(); // Does some more stuff ... you get the point
}
}
Now, this is rather hard to write and keep track of when we have a large number of "actions". When I was doing stuff in Python (synchronously) it was much easier, like:
OpenWebPage("http://hardwarebase.net") // Stores the loaded page HTML in self.page
OpenWebpage("http://hardwarebase.net/login", {"user": username, "pw": password}) // POSTs a form
if(self.page == ...): // now do some more checks etc.
// do something more
Imagine now that I have a queue class which holds the actions: homepage, login, new topic. How am I supposed to execute all those actions (in proper order, one after one!) via the asynchronous callback? The first example is totally hard-coded obviously.
I hope you understand my question, because frankly I fear this is the worst question ever written :x
P.S. All this is done in Qt.
You are inviting all manner of bugs if you try and use a single member variable to maintain state for an arbitrary number of asynchronous operations, which is what you describe above. There is no way for you to determine the order that the OpenWebPage calls complete, so there's also no way to associate the value of lastAction at any given time with any specific operation.
There are a number of ways to solve this, e.g.:
Encapsulate web page loading in an immutable class that processes one page per instance
Return an object from OpenWebPage which tracks progress and stores the operation's state
Fire a signal when an operation completes and attach the operation's context to the signal
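For instance, a rough sketch of the queue-of-actions idea from the question, in plain C++ with the Qt networking stubbed out (the PageFlow class and the simulated replies are hypothetical):

#include <functional>
#include <iostream>
#include <queue>
#include <string>

class PageFlow
{
public:
    // Each step receives the loaded page and decides what to do next.
    using Action = std::function<void(const std::string& reply)>;

    void enqueue(Action action) { actions.push(std::move(action)); }

    // Called by the framework whenever a page has loaded; runs the next
    // queued action instead of switching on a lastAction string.
    void OnPageLoad(const std::string& reply)
    {
        if (actions.empty())
            return;
        Action next = std::move(actions.front());
        actions.pop();
        next(reply); // the action itself may trigger the next OpenWebPage
    }

private:
    std::queue<Action> actions;
};

int main()
{
    PageFlow flow;
    flow.enqueue([](const std::string& reply) { std::cout << "homepage loaded: " << reply << '\n'; });
    flow.enqueue([](const std::string& reply) { std::cout << "login result: " << reply << '\n'; });
    flow.enqueue([](const std::string& reply) { std::cout << "new topic form: " << reply << '\n'; });

    // Simulate three OnPageLoad callbacks arriving in order.
    flow.OnPageLoad("<html>home</html>");
    flow.OnPageLoad("<html>logged in</html>");
    flow.OnPageLoad("<html>topic form</html>");
}

This keeps each action's context with the action itself, so there is no shared lastAction variable to get out of sync.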
You need to make sure that only one branch can execute per OnPageLoad call - either by ending every "if" branch with a return statement or by keeping the branches exclusive with else if, as in your code.
Generally, asynchronous state management is always more complicated than synchronous. Consider replacing the lastAction string with an enumeration. Also, if OnPageLoad's thread context is arbitrary, you need to synchronize access to the shared variables.