I have some experience using GCD for concurrency, and with replacing explicit locks and threads.
C++11 provides std::async, which seems to offer some similar features (I'm not a C++ expert, so please forgive any mistakes here).
Putting aside arguments on flavours and language preferences, is there any benchmark comparing the two for their performance, especially on platforms like iOS?
Is C++11's std::async worth trying from a practical perspective?
EDIT:
As stackmonster answered, C++11 does not provide a dispatch queue per se. However, wouldn't it be possible to build an ad-hoc serial queue out of atomic data structures (and, arguably, lambda functions) to achieve the same thing?
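For what it's worth, here is a minimal sketch of such an ad-hoc serial queue in C++11. It uses a mutex and condition variable rather than lock-free atomics, for clarity; the class and its interface (SerialQueue, async) are made up for illustration and are not part of any library:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A minimal serial "dispatch queue": one worker thread drains a FIFO of
// std::function tasks, so tasks run one at a time in submission order.
class SerialQueue {
public:
    SerialQueue() : done_(false), worker_([this] { run(); }) {}

    ~SerialQueue() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();  // drains any remaining tasks before returning
    }

    // Enqueue a task; returns immediately, GCD dispatch_async style.
    void async(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // executed outside the lock, strictly serialized
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool done_;
    std::thread worker_;  // declared last so it starts after the other members
};
```

This gives you the serialization guarantee of a GCD serial queue, but none of GCD's system-wide thread multiplexing.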
C++11's std::async is not nearly as sophisticated as Grand Central Dispatch.
It's more analogous to the async concurrency model provided by the java.util.concurrent package,
providing templates for callbacks but with no built-in performance advantages.
I would say that the difference between them is simply this.
A callback template has no particular performance characteristics, whereas GCD is all about performance: it threads and multiplexes those callbacks to reduce thread-creation overhead, and it allows queuing, task dependencies and thread pooling.
The launch policies of std::async do not approach GCD's sophistication and are not portable across implementations.
I'm not really sure what a benchmark between the two would prove, since they are not really that similar.
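For reference, the two portable launch policies mentioned above behave like this (a minimal sketch; the helper function names are mine):

```cpp
#include <future>
#include <thread>

// std::launch::async: the task runs as if on a new thread, so its
// thread id differs from the caller's.
bool async_uses_new_thread() {
    auto f = std::async(std::launch::async,
                        [] { return std::this_thread::get_id(); });
    return f.get() != std::this_thread::get_id();
}

// std::launch::deferred: nothing runs until get(); the task then
// executes synchronously on whichever thread calls get().
bool deferred_runs_on_caller() {
    auto f = std::async(std::launch::deferred,
                        [] { return std::this_thread::get_id(); });
    return f.get() == std::this_thread::get_id();
}
```

Neither policy says anything about thread pooling or queue ordering, which is exactly the gap between std::async and GCD.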
As others have already pointed out, this comparison is generally meaningless due to its apples/oranges nature, though if you really wanted to I suppose you could test std::async and std::future against some GCD based futures implementation you cobble together yourself and see which provides futures the quickest for a known set of computations. Might be vaguely interesting, but you'd have to be the one to do it since the experiment is likely too strange and esoteric to be of interest to anyone else. :-)
Related
There are some similar questions on SO, but they only ask the question one way.
std::latch has an advantage over std::barrier: unlike the latter, the former can be decremented by a participating thread more than once.
std::barrier has an advantage over std::latch: unlike the latter, the former can be reused once the arriving threads are unblocked at a phase's synchronization point.
But my question is, why have two almost identical things in the first place? Why did they decide not to combine both of them into one, something like Java's Phaser?
Phaser was introduced in Java 7 as a more flexible option than CountDownLatch and CyclicBarrier, which were introduced in Java 5. Its API is almost identical to that of both earlier classes. (I take Java's example here just to show that combining the two is indeed possible.)
If, instead of providing a single phaser-like class, they decided to provide a latch and a barrier separately, then there must be some benefit to having them separate, most likely a performance-related one. So, what is that issue precisely?
C++ has a principle of not paying for what you don't use. If std::barrier can be implemented more efficiently than std::latch for the case where you don't decrement it twice from the same thread, then there is a reason to provide the more efficient idiom alongside the more generic one.
As for "what is that issue precisely": C++ has no virtual machine that equalizes all systems. Moreover, the standard library doesn't specify the exact implementation of these classes, so the implementation of latch/barrier is a matter for the system, the vendor, or the taste of the developer.
There seems to be a myriad of implementations of 'coroutines', or asynchronous logic, in Clojure. Many of the talks by Rich Hickey and other authorities on the matter are from almost a decade ago, and I'm trying to find out the latest and greatest, best-practice way to handle this problem.
My favorite abstraction for this type of thing is Lua coroutines, but I think those may be a strictly imperative style of doing things, and I'm a little confused as to what the functional way is instead.
In Lua, though, it's really simple and easy with coroutines to:
A) Non-busy wait for X seconds.
B) Non-busy wait for a variable or function to be a specific value, such as true
A can probably be achieved using setTimeout, but B can't really, or at least I don't know how. I'm also not sure setTimeout is best practice for these kinds of problems.
In a 2013 blog post, Rich Hickey describes the motivations for clojure.core.async. While it has some applications on the JVM, the primary motive was to give the illusion of threads to the single-threaded JavaScript environment.
The "simulated multithreading" provided by clojure.core.async is not as robust as using actual JVM threads (especially when Exceptions/Errors occur), so it is of limited use for JVM Clojure. This will be even more true when Java virtual threads become a reality.
So if you are in ClojureScript, clojure.core.async is much better than nothing (i.e. callback hell). However, even JS is contemplating a multithreading model via WebAssembly, so an alternative to clojure.core.async could exist for ClojureScript in the future.
It just seems strange to me that despite having a very large set of constructs for multithreading, the standard lacks a thread pool class. What reasons might dissuade the committee from adding this to the standard?
C++, like C, is meant to give as much control to the programmer as possible. Almost everything in C++ is a wrapper that is very bare-bones. This gives the programmer the freedom to implement whatever feature they want, however they want.
The concept of "what is work" is a bit abstract and dependent on the use case, so C++ gives you the workers (threads), and lets you define a strategy for how you want that work to be distributed amongst the workers.
For example, in Python you can map work to threads. Using this means that whenever work is available, a thread will take the work. But what if you want a thread to do work only if there is work to do AND certain other conditions are met? You can design your thread_pool class to meet all these specifications. In Python, you'd have to handle these checks separately, outside of the thread-pooling library.
While there is no OFFICIAL answer, this is the explanation that I would say makes the most sense. C++ is about control, given a minimal set of tools (however EXTENDED a set compared to C). The committee is most likely not adding a thread_pool class because the hardest thing to do in computer science is getting people to agree. Thread pooling is not necessarily extremely hard to implement, but agreeing on a definition of "worker" is arguably harder.
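To back up the claim that the mechanics are not the hard part, here is one possible bare-bones sketch of a fixed-size pool. The class and its interface are made up for illustration; the points of disagreement described above (what counts as work, when a worker may take it) are exactly the parts this sketch hard-codes:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A fixed-size pool: N workers pull std::function tasks from one shared
// FIFO until the pool is stopped. No priorities, no dependencies, no
// work stealing: every such policy choice is a design decision a
// standard class would have to settle for everyone.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();  // remaining tasks are drained
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run outside the lock so workers proceed in parallel
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool stop_ = false;
    std::vector<std::thread> workers_;  // started last, joined in the dtor
};
```

The thirty-odd lines above are the easy consensus; everything a real application would ask for next is where the committee-level disagreement starts.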
I've recently been learning F# asynchronous workflows, an important feature of F#'s concurrency support. What confuses me is how many approaches there are to writing concurrent code in F#. I've read Expert F# and some blog posts about F# concurrency, and I know about things like background workers and IAsyncResult; when programming on a local machine there is shared-memory concurrency in F#, and when programming on a distributed system there is message-passing concurrency. But I'm really not sure what the relationship between these techniques is, or how to classify them. I understand it's quite a "big" question that cannot be answered in one or two sentences, so I would definitely appreciate it if anyone could give me a specific answer or recommend some useful references.
I'm also rather new to F#, so I hope more answers come to complement this one :)
The first thing you need to do is distinguish between .NET classes (which can be used from any .NET language) and F#-specific ways to deal with asynchronous operations. In the first group, as you mention, and among others, you have:
System.ComponentModel.BackgroundWorker: This was used mainly in the first .NET versions with Windows Forms and it's not recommended anymore.
System.IAsyncResult: This is also an old .NET interface implemented by several classes (also Task) but I don't usually use it directly.
Windows.Foundation.IAsyncOperation: Another interface, but used only in Windows Store apps. Most of the time you translate it directly to Task, so you don't have to worry too much about it.
System.Threading.Tasks.Task: This is the recommended way now to handle .NET asynchronous and parallel (with the Parallel Task Library) operations. It's the hidden force behind C# async/await keywords, which are just syntactic sugar to pass continuations to Tasks.
So now for the F#-specific ways: Asynchronous Workflows and MailboxProcessor. It can roughly be said that the former corresponds to parallelism while the latter deals with concurrency.
Asynchronous Workflows: This is just a computation expression (aka monad) which happens to deal with asynchrony: operations that run in the background to prevent blocking the UI thread or parallel operations to get the maximum performance in multi-core systems.
It's more or less the equivalent of C# async/await, but we F# fans like to think it's a more elegant solution because it uses a more generic and flexible mechanism (computation expressions) which can be adapted, for example, to asynchronous sequences, events or even JavaScript callbacks. It has other advantages too, as Tomas Petricek explains here.
Within an asynchronous workflow most of the time you'll be using the methods in Control.Async or the extensions to .NET classes (like WebRequest.AsyncGetResponse) from the F# Core Library. If necessary, you can also interact directly with .NET Tasks (Async.AwaitTask and Async.StartAsTask) or even easily create your own async operations with Async.StartWithContinuations.
To learn more about asynchronous workflows you can consult the MSDN documentation, Scott Wlaschin's magnificent site, Tomas Petricek's blog or the F# Wikibook.
Control.MailboxProcessor: Designed to deal with concurrency, that is, several processes running at the same time which usually need to share some information. The traditional .NET way to prevent memory corruption when several threads try to write a variable at the same time was the lock statement. Besides the fact that functional style prefers to use immutable values, memory locks are complicated to use properly and can also have a high performance penalty. So instead of this, MailboxProcessor uses an Erlang-like message-based (or actor-based) approach to concurrency.
I have not used MailboxProcessor myself that much, but for more info you can check Scott Wlaschin's site or the F# Wikibook.
I hope this helps! If someone sees something not completely correct in this answer, please feel free to edit it.
Cheers!
Does anyone know of a decent reference for synchronization issues in C++? I'm thinking of something similar to the C++ FAQ lite (and the FQA lite) but with regards to concurrency, locking, threading, performance issues, guidelines, when locks are needed and when they aren't, dealing with multithreaded library code that you can't control, etc. I don't care about the inner issues of how different lock types can be implemented etc, I just use boost for that.
I'm sure there are a lot of good books out there, but I'd prefer something (preferably online) that I can use as a go-to reference when a question or an issue pops up in my mind. I'm not really a beginner at this, so I would like a concise reference for all those different types of situations that can pop up when writing multithreaded libraries that use other multithreaded libraries.
Like:
When is it better to have one big lock protecting a bunch of data vs a bunch of smaller locks protecting each piece of data? (what are the costs associated with having lots of locks? Resource acquisition costs? Locking time performance costs?)
What's the performance hit of pushing something onto a queue and having another thread pop the queue, vs dealing with that data in the original thread?
Are there any simple idioms to make sure things just work when you're not so concerned about performance?
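On that last bullet, the usual "just works" idiom is to pair each piece of shared data with one mutex and never touch the data without holding a scoped lock on it. A minimal sketch, using std::mutex and std::lock_guard for brevity (the question mentions Boost; boost::mutex with boost::lock_guard is the equivalent pre-C++11 spelling, and the Counter type here is invented for illustration):

```cpp
#include <mutex>

// One coarse lock guarding one piece of data. Not the fastest design,
// but trivially correct: every access path goes through the guard.
struct Counter {
    std::mutex m;
    long value = 0;

    void add(long n) {
        std::lock_guard<std::mutex> lock(m);  // released automatically
        value += n;
    }

    long get() {
        std::lock_guard<std::mutex> lock(m);
        return value;
    }
};
```

Without the lock, concurrent add() calls are a data race; with it, the result is deterministic at the cost of serializing every access, which is exactly the trade-off behind the big-lock-vs-many-locks question above.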
Anyway, I just want to know if there are any decent references out there that people use.
I'd recommend two resources:
Herb Sutter's Effective Concurrency articles
Anthony Williams's C++ Concurrency In Action (not yet published, but available as a PDF)