I'm refactoring a colleague's code and looking for the equivalent of dispatch_barrier_async in Swift 3. There are a lot of queues at play, and his design is to block only this queue, and only for this single operation.
// Swift 2.3
func subscribe(subscriber: DaoDelegate) {
dispatch_barrier_async(self.subscribers.q) { // NOTE: barrier, requires exclusive access for write
//...
}
}
// Swift 3
func subscribe(subscriber: DaoDelegate) {
(self.subscribers.q).async { // (Not equivalent, no barrier on the concurrent queue)
//...
}
}
Can I keep that same functionality in Swift 3 without refactoring all the queue types?
The async() method has a flags parameter which accepts a .barrier option:
func subscribe(subscriber: DaoDelegate) {
(self.subscribers.q).async(flags: .barrier) {
//...
}
}
I have a function that queues function callbacks to be executed in another thread.
void Queue(std::function<void()> callback)
{
std::lock_guard<std::mutex> lock(queueMutex);
queue.push_back(callback);
}
The queued functions are called using this function in the main thread:
void ProcessQueue()
{
std::lock_guard<std::mutex> lock(queueMutex);
if (!queue.empty())
{
for (auto& cb : queue)
{
cb();
}
queue.clear();
}
}
I am queuing these callbacks because they must be executed in the main thread. My question is whether it's safe (and appropriate) to chain multiple functions within a single Queue call, like this:
Queue([=]()
{
FunctionA();
FunctionB();
});
Or is it better to separate them like this?
Queue([=]()
{
FunctionA();
});
Queue([=]()
{
FunctionB();
});
I would choose the first option. The second version pays the cost of creating and calling a lambda twice. Since you capture by value with [=], that can be quite costly when heavy objects are captured: each Queue call copies the captures into its own closure.
std::function also affects performance.
The second version constructs two std::function objects, each copying the lambda with its captured parameters, so more memory is used. In addition, std::function invokes its target through type erasure (an indirect call, comparable to a virtual call), and doing that twice instead of once adds overhead. Considering these facts, in my opinion, the first version is better.
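To make the comparison concrete, here is a minimal, self-contained sketch of both options. The queue type, the FunctionA/FunctionB bodies, and the swap-out in ProcessQueue are illustrative assumptions, not the original code:
#include <functional>
#include <iostream>
#include <mutex>
#include <vector>

std::mutex queueMutex;
std::vector<std::function<void()>> workQueue;

void Queue(std::function<void()> callback)
{
    std::lock_guard<std::mutex> lock(queueMutex);
    workQueue.push_back(std::move(callback));
}

void ProcessQueue()
{
    std::vector<std::function<void()>> pending;
    {
        // Swap under the lock so the callbacks run without holding the mutex.
        std::lock_guard<std::mutex> lock(queueMutex);
        pending.swap(workQueue);
    }
    for (auto& cb : pending)
        cb();
}

void FunctionA() { std::cout << "A\n"; }
void FunctionB() { std::cout << "B\n"; }

int main()
{
    // Option 1: one std::function, captures copied once, one indirect call.
    Queue([] { FunctionA(); FunctionB(); });
    // Option 2: two std::functions, captures copied per entry, two indirect calls.
    Queue([] { FunctionA(); });
    Queue([] { FunctionB(); });
    ProcessQueue();
}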
I'm trying to run a list of futures concurrently (instead of in sequence) in Rust async-await (being stabilized soon), until any of them resolves to true.
Imagine having a Vec<File>, and a future to run for each file, yielding a bool (possibly unordered). Here is a simple sequential implementation:
async fn my_function(files: Vec<File>) -> bool {
// Run the future on each file, return early if we received true
for file in files {
if long_future(file).await {
return true;
}
}
false
}
async fn long_future(file: File) -> bool {
// Some long-running task here...
}
This works, but I'd like to run a few of these futures concurrently to speed up the process. I came across buffer_unordered() (on Stream), but couldn't figure out how to implement this.
As I understand it, something like join can be used as well to run futures concurrently, given that you have a multithreaded pool. But I don't see how that could efficiently be used here.
I attempted something like this, but couldn't get it to work:
let any_true = futures::stream::iter(files)
.buffer_unordered(4) // Run up to 4 concurrently
.map(|file| long_future(file).await)
.filter(|stop| stop) // Only propagate true values
.next() // Return early on first true
.is_some();
Along with that, I'm looking for something like any as used in iterators, to replace the if-statement or the filter().next().is_some() combination.
How would I go about this?
I think that you should be able to use select_ok, as mentioned by Some Guy. An example, in which I've replaced the files with a bunch of u32 for illustration:
use futures::future::FutureExt;
async fn long_future(file: u32) -> bool {
true
}
async fn handle_file(file: u32) -> Result<(), ()> {
let should_stop = long_future(file).await;
// Would be better if there were something more descriptive here
if should_stop {
Ok(())
} else {
Err(())
}
}
async fn tims_answer(files: Vec<u32>) -> bool {
let waits = files.into_iter().map(|f| handle_file(f).boxed());
let any_true = futures::future::select_ok(waits).await.is_ok();
any_true
}
I'm trying to implement a network application using Boost.Asio. I have a problem with multiple layers of callbacks. In other languages that natively support async/await syntax, I can write my logic like this:
void do_send(args...) {
if (!endpoint_resolved) {
await resolve_async(...); // results are stored in member variables
}
if (!connected) {
await connect_async(...);
}
await send_async(...);
await receive_async(...);
}
Right now I have to write it using multiple layers of callbacks:
void do_send(args...) {
if (!endpoint_resolved) {
resolve_async(..., [captures...](args...) {
if (!connected) {
connect_async(..., [captures...](args...) {
send_async(..., [captures...](args...) {
receive_async(..., [captures...](args...) {
// do something
}); // receive_async
}); // send_async
}); // connect_async
}
});
}
}
This is cumbersome and error-prone. An alternative is to use std::bind to bind member functions as callbacks, but this does not solve the problem because either way I have to write complicated logic in the callbacks to determine what to do next.
I'm wondering if there are better solutions. Ideally I would like to write code in a synchronous way while I can await asynchronously on any I/O operations.
I've also checked std::async, std::future, etc. But they don't seem to fit into my situation.
Boost.Asio's stackful coroutines would provide a good solution. Stackful coroutines allow asynchronous code to be written in a manner that reads as if it were synchronous. One can create a stackful coroutine via the spawn function. Within the coroutine, passing the yield_context as a handler to an asynchronous operation will start the operation and suspend the coroutine. The coroutine will be resumed automatically when the asynchronous operation completes. Here is the example from the documentation:
boost::asio::spawn(my_strand, do_echo);
// ...
void do_echo(boost::asio::yield_context yield)
{
try
{
char data[128];
for (;;)
{
std::size_t length =
my_socket.async_read_some(
boost::asio::buffer(data), yield);
boost::asio::async_write(my_socket,
boost::asio::buffer(data, length), yield);
}
}
catch (std::exception& e)
{
// ...
}
}
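Applied to the original do_send, the nested callbacks flatten into straight-line code. Below is a hedged sketch, not the asker's actual class: the host, port, request contents, and the free-function structure are assumptions made for illustration:
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <iostream>

using boost::asio::ip::tcp;

void do_send(tcp::socket& socket, tcp::resolver& resolver,
             boost::asio::yield_context yield)
{
    try
    {
        // Each call suspends the coroutine until the operation completes.
        auto endpoints = resolver.async_resolve("example.com", "80", yield);
        boost::asio::async_connect(socket, endpoints, yield);

        std::string request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        boost::asio::async_write(socket, boost::asio::buffer(request), yield);

        char reply[1024];
        std::size_t n = socket.async_read_some(boost::asio::buffer(reply), yield);
        std::cout.write(reply, n);
    }
    catch (std::exception& e)
    {
        // Without yield[ec], failed operations throw boost::system::system_error.
        std::cerr << "do_send failed: " << e.what() << "\n";
    }
}

int main()
{
    boost::asio::io_context io;
    tcp::socket socket(io);
    tcp::resolver resolver(io);
    boost::asio::spawn(io, [&](boost::asio::yield_context yield) {
        do_send(socket, resolver, yield);
    });
    io.run();
}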
I'm trying to write my own torrent program based on libtorrent-rasterbar, and I'm having problems getting the alert mechanism to work correctly. Libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far is like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });
while (!_alert_loop_should_stop) {
if (!session.wait_for_alert(std::chrono::seconds(0))) {
_alert_cv.wait(ul);
}
std::vector<libtorrent::alert*> alerts;
session.pop_alerts(&alerts);
for (auto alert : alerts) {
LTi_ << alert->message();
}
}
However, there is a race condition. If wait_for_alert returns NULL (since there are no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul);, the whole loop waits forever (because of the second sentence of the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which reduces the number of wake-ups per second enough while keeping latency low enough.
But it's really more a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification, and it should be protected by the same mutex that the condition variable waits on. Because the lambda records the wake-up under the mutex, a notification that arrives before the loop reaches wait() can no longer be lost:
bool _alert_pending = false; // protected by _alert_m
session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});
std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });
    if (_alert_pending) {
        _alert_pending = false;
        // Drop the lock while popping and processing; the notify
        // lambda may need to take _alert_m again in the meantime.
        ul.unlock();
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
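The same pattern in isolation, as a self-contained sketch; the notify() function and the loop body here are made-up stand-ins for libtorrent's callback and pop_alerts():
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool pending = false; // records a notification even when nobody is waiting yet
bool stop = false;

void notify() // stand-in for the set_alert_notify callback
{
    std::lock_guard<std::mutex> lg(m);
    pending = true;
    cv.notify_one();
}

int main()
{
    std::thread producer([] {
        for (int i = 0; i < 3; ++i)
            notify();
        std::lock_guard<std::mutex> lg(m);
        stop = true;
        cv.notify_one();
    });

    std::unique_lock<std::mutex> ul(m);
    while (!stop)
    {
        cv.wait(ul, [] { return pending || stop; });
        if (pending)
        {
            pending = false;
            std::cout << "woke up; would call pop_alerts() here\n";
        }
    }
    producer.join();
}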
I currently have two threads, a producer and a consumer. The producer is a static method that inserts data into a deque-type static container and informs the consumer through a boost::condition_variable that an object has been inserted into the deque. The consumer then reads data from the deque and removes it from the container. The two threads communicate using boost::condition_variable.
Here is an abstract of what is happening. This is the code for the producer and consumer:
//Static Method : This is the producer. Different classes add data to the container using this method
void C::Add_Data(obj a)
{
try
{
int size = MyContainer.size(); // renamed: the original 'int a' shadowed the parameter 'a'
UpdateTextBoxA("Current Size is " + std::to_string(size));
UpdateTextBoxB("Running");
MyContainer.push_back(a);
condition_consumer.notify_one(); //This condition variable is a static member
UpdateTextBoxB("Stopped");
}
catch (std::exception& e)
{
std::string err = e.what();
}
}//end method
//Consumer Method - Runs in a separate independent thread
void C::Read_Data()
{
while(true)
{
boost::mutex::scoped_lock lock(mutex_c);
while(MyContainer.size()!=0)
{
try
{
obj a = MyContainer.front();
....
....
....
MyContainer.pop_front();
}
catch (std::exception& e)
{
std::string err = e.what();
}
}
condition_consumer.wait(lock);
}
}//end method
Now the objects are being inserted into the deque very fast, about 500 objects a second. While running this I noticed that TextBoxB was always at "Stopped", while I believe it was supposed to toggle between "Running" and "Stopped". Plus it is very slow. Any suggestions on what I might not have considered and might be doing wrong?
1) You should do MyContainer.push_back(a); under the mutex - otherwise you get a data race, which is undefined behaviour (+ you may need to protect MyContainer.size() with the mutex too, depending on its type and the C++ standard/compiler version you use).
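For instance, a sketch of Add_Data with the push done under the same mutex the consumer locks (the early unlock before notify_one is a common optional refinement, not something from the original code):
void C::Add_Data(obj a)
{
    {
        boost::mutex::scoped_lock lock(mutex_c); // same mutex as Read_Data
        MyContainer.push_back(a);
    } // unlock before notifying, so the woken consumer can take the mutex immediately
    condition_consumer.notify_one();
}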
2) void C::Read_Data() should be:
void C::Read_Data()
{
    boost::mutex::scoped_lock slock(mutex_c);
    while(true) // you may also need some exit condition/mechanism
    {
        condition_consumer.wait(slock, [&]{ return !MyContainer.empty(); });
        // at this line MyContainer.empty()==false and slock is locked,
        // so it is safe to pop a value from the deque:
        obj a = MyContainer.front();
        MyContainer.pop_front();
        // ... process a ...
    }
}
3) You are mixing the logic of a concurrent queue with the logic of producing/consuming. Instead, you can isolate the concurrent-queue part into a stand-alone entity:
// C++98 (mutex/condition_variable taken from Boost in C++98,
// or from <mutex>/<condition_variable> in C++11)
template<typename T>
class concurrent_queue
{
    queue<T> q;
    mutable mutex m;
    mutable condition_variable c;
public:
    void push(const T &t)
    {
        // The lock_guard temporary lives until the end of the full
        // expression, so the push and the notify happen under the lock.
        (lock_guard<mutex>(m)),
            q.push(t),
            c.notify_one();
    }
    void pop(T &result)
    {
        unique_lock<mutex> u(m);
        // Loop guards against spurious wake-ups.
        while(q.empty())
            c.wait(u);
        result = q.front();
        q.pop();
    }
};
Thanks for your reply. Could you explain the second parameter in the conditional wait statement, [&]{return !MyContainer.empty();}?
There is a second version of condition_variable::wait which takes a predicate as its second parameter. It basically waits while that predicate is false, helping to "ignore" spurious wake-ups.
[&]{return !MyContainer.empty();} is a lambda function. Lambdas are a new feature of C++11 - they allow you to define functions "in-place". If you don't have C++11, then just make a stand-alone predicate, or use the one-argument version of wait with a manual while loop:
while(MyContainer.empty()) condition_consumer.wait(lock);
One question: in your 3rd point you suggested that I should isolate the entire queue, while my add-to-queue method is static and the consumer (queue reader) runs forever in a separate thread. Could you tell me why that is a flaw in my design?
There is no problem with "runs forever" or with static. You can even make a static concurrent_queue<T> member - if your design requires that.
The flaw is that multithreaded synchronization is coupled with other kinds of work. When you have a concurrent_queue, all synchronization is isolated inside that primitive, and the code which produces/consumes data is not polluted with locks and waits:
concurrent_queue<int> c;
thread producer([&]
{
for(int i=0;i!=100;++i)
c.push(i);
});
thread consumer([&]
{
int x;
do{
c.pop(x);
std::cout << x << std::endl;
}while(x!=11);
});
producer.join();
consumer.join();
As you can see, there is no "manual" synchronization of push/pop, and the code is much cleaner.
Moreover, when you decouple your components in this way, you can test them in isolation. They also become more reusable.
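For example, a hypothetical single-threaded check of concurrent_queue (the class from point 3), just to illustrate the testability point:
#include <cassert>
#include <string>

int main()
{
    concurrent_queue<std::string> q; // no threads needed to exercise it
    q.push("hello");
    std::string s;
    q.pop(s);
    assert(s == "hello");
}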