I've faced quite an odd problem with QtConcurrent, mostly because of unusual programming desires; maybe it's just an XY problem, but...
So, there is my code, which talks to a database; it's backend code, actually (in Qt, yes). It has to work quickly and handle many requests, so I need a thread pool. Since establishing a connection is, as is well known, a very time-consuming operation, I need persistent database connections, which in turn means persistent threads (a QSqlDatabase cannot be moved around between threads). It is also quite natural to want asynchronous request handling, so I need a simple way to pass requests to those persistent threads.
Nothing too complex; let's assume there already exists some boilerplate in a form like...
// That's what I want for now
QFuture<int> res = workers[i]->async(param1, param2);
// OR
// That's what I DO NOT want to get
workers[i]->async(param1, param2, [](QFuture<int> res) { // QFuture to pass exceptions
    // callback here
});
That can be done, for sure. Why not std::future? Well, it is much easier to use QFutureWatcher and its signals for notifications about a result's readiness; pure C++ notification solutions are much more complex, and callbacks are also something that has to be dragged through the class hierarchy. Each worker interfaces a thread with DB connections, obviously.
Okay, all of that can be written, but... a custom thread pool would mean giving up the QtConcurrent convenience, and there seem to be only risky ways to create a QFuture that could be returned by the custom worker. QThreadPool is of no use, because it would be a whole big story to create persistent runnables in it. What's more, the boilerplate I've briefly described is going to be part of the project's core, used in many places, not something that can easily be replaced by a hundred hand-made thread-management snippets.
In short: if I could construct a QFuture for my results, the problem would be solved.
Could anyone point me to a solution or a workaround? Would be grateful for any bright ideas.
UPD:
@VladimirBershov offered a good modern solution which implements the observer pattern. After some googling I've also found the QPromise library. Of course, constructing a custom QFuture is still hacky and can only be done via the undocumented QFutureInterface class, but a "promise-like" solution makes asynchronous calls neater by far, as far as I can judge.
You can use AsyncFuture library as a custom QFuture creation tool or ideas source:
AsyncFuture - Use QFuture like a Promise object
QFuture is used together with QtConcurrent to represent the result of an asynchronous computation. It is a powerful component for multi-threaded programming. But its usage is limited to the results of threads; it doesn't work with asynchronous signals emitted by a QObject. And it is a bit of trouble to set up a listener function via QFutureWatcher.
AsyncFuture is designed to enhance this and offer a better way to use it for asynchronous programming. It provides a Promise-object-like interface. This project is inspired by AsynQt and RxCpp.
Features:
Convert a signal from QObject into a QFuture object
Combine multiple futures with different types into a single future object
Use Future like a Promise object
Chainable Callback - Advanced multi-threading programming model
Convert a signal from QObject into a QFuture object:
#include "asyncfuture.h"
using namespace AsyncFuture;
// Convert a signal from QObject into a QFuture object
QFuture<void> future = observe(timer, &QTimer::timeout).future();
/* Listen from the future without using QFutureWatcher<T>*/
observe(future).subscribe([]() {
// onCompleted. It is invoked when the observed future is finished successfully
qDebug() << "onCompleted";
},[]() {
// onCanceled
qDebug() << "onCancel";
});
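Since the question is specifically about creating a QFuture that a worker can complete later, the library's deferred() object is probably the most relevant piece here. A minimal sketch of how it could be used (the wiring around it is omitted, and its cross-thread guarantees should be checked against the library's documentation):
#include "asyncfuture.h"
using namespace AsyncFuture;

// Create a future that is completed (or cancelled) manually later on.
auto d = deferred<int>();
QFuture<int> future = d.future();   // hand this back to the caller

// ... later, when the result is available ...
d.complete(42);    // or d.cancel() on failure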
My idea is to use thread pools with a maximum of 1 thread available for each worker.
QThreadPool* persistentThread = new QThreadPool; // no need to write custom thread pool
persistentThread->setMaxThreadCount(1);
persistentThread->setExpiryTimeout(-1);
and then
QFuture<int> future_1 = QtConcurrent::run(persistentThread, func_1);
QFuture<int> future_2 = QtConcurrent::run(persistentThread, func_2);
func_2 will be executed after func_1 in the same single "persistent" thread.
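As a rough sketch (not a tested implementation) of how the questioner's async(...) boilerplate could sit on top of this idea: the Worker class below, its open()/async() methods and the SQLite connection details are all illustrative names, not an existing API. Each Worker owns a single-thread pool, so every task it posts runs in order on the same persistent thread, which is where its QSqlDatabase connection lives.
#include <QtConcurrent>
#include <QThreadPool>
#include <QFuture>
#include <QSqlDatabase>
#include <QSqlQuery>

class Worker
{
public:
    Worker()
    {
        m_pool.setMaxThreadCount(1);   // exactly one persistent thread per worker
        m_pool.setExpiryTimeout(-1);   // never let it expire
    }

    // One-time setup task: opens the connection on the worker's own thread,
    // so the QSqlDatabase never has to cross thread boundaries.
    QFuture<bool> open()
    {
        return QtConcurrent::run(&m_pool, [this]() {
            QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", m_connectionName);
            db.setDatabaseName(":memory:");
            return db.open();
        });
    }

    // Every task posted here runs, in order, on the same single thread.
    QFuture<int> async(const QString &sql)
    {
        return QtConcurrent::run(&m_pool, [this, sql]() -> int {
            QSqlQuery query(QSqlDatabase::database(m_connectionName));
            query.exec(sql);
            return query.size();   // whatever "result" means for the task
        });
    }

private:
    QThreadPool m_pool;
    QString m_connectionName = QStringLiteral("worker_connection");
};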
I'm trying to create a clean and efficient design for passing off events to a background thread for evaluation, then return a selected result to the game thread.
This is my initial design
//Occurrence object passed from director on game thread to background thread OccurrenceQueue
//Execute BackgroundThread::EvaluateQueues()
//If OccurrenceQueue.Dequeue()
//Score Occurrence, then return via Occurrence.GetOwner()->PurposeSelected(Occurrence)
//EvaluateQueues()
This resulted in a clean loop of selecting a chain of purposes from an event. So now I want to move this to a background thread. Here is what I've learned so far:
Thread safety (in UE) requires absolutely no modification of UObject data from other threads (from what I read this is due to their custom GC)
You can lock objects and/or design so that objects in the background thread aren't touched by the game thread, but there is still a risk of unexpected behavior due to lifetimes not being extended by the background thread, and synchronization issues
You cannot simply execute a function on an object living on the game thread to move the call stack back to the game thread
Calling Occurrence.GetOwner()->PurposeSelected(Occurrence) from a background thread remains in the background thread - this is the main subject I'd like to get a better understanding of, and it applies to delegates in UE as well
TQueues in UE can be used across threads safely
From what I've learned above, my current design doesn't appear to be logically possible.
These are my alternatives thus far:
Use two queues
One to dequeue and score on the background thread
The other to dequeue the result on the game thread via tick
Use a delegate existing on the game thread, which calls OccurrenceEvaluated.Broadcast() through tick
When a result is scored, bind Occurrence.GetOwner()->PurposeSelected(Occurrence) to OccurrenceEvaluated
I've seen that C++ has something called a future (or something like that) for async tasks, and it appears UE has something similar with TFuture<> and TFuture::IsReady(), but I have yet to look deeper into that and how it returns data
Same with FAsyncTask
I'm hesitant to implement any design which utilizes tick to check if data has been updated/returned from background threads.
Can anyone suggest relevant design practices, or clarify the nature of returning execution to a main thread from a background thread? (I've had a hard time finding the right question to research or information regarding this.)
I found a perfect solution. As these events aren't particularly time-sensitive, I just use Unreal Engine's AsyncTask() to schedule a task on the game thread from my background thread. As @Pepjin Kramer pointed out, it is similar to std::async.
So simple it's basically a slap in the face.
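For reference, a minimal sketch of that call as I understand the engine's Async API; Occurrence, GetOwner() and PurposeSelected() are the question's own names, and FOccurrence being a copyable value type is an assumption:
#include "Async/Async.h"

// Runs on the background (scoring) thread.
void ScoreAndReturn(FOccurrence Occurrence)
{
    // ... score the occurrence here ...

    // Hop back to the game thread to deliver the result; the lambda body
    // executes on the game thread, so touching UObjects there is safe.
    AsyncTask(ENamedThreads::GameThread, [Occurrence]()
    {
        Occurrence.GetOwner()->PurposeSelected(Occurrence);
    });
}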
Good morning,
Threading can help you get things done faster and keep your program responsive, but it has a lot of pitfalls. The short answer: in situations like this I use constructs of this kind:
#include <future>
#include <memory>
#include <thread>
#include <vector>

int main()
{
    // make data shared
    auto data = std::make_shared<std::vector<int>>();
    data->push_back(42);

    // future will be a future<int>, deduced from the lambda's return value
    // std::async will run the lambda in another thread
    // capture data by value! That copies the shared_ptr and
    // increases its reference count.
    auto future = std::async(std::launch::async, [data]
    {
        auto value = (*data)[0];
        return value;
        // the shared_ptr copy goes out of scope here and the reference count is decreased
    });

    // synchronize with the thread and get the "calculated" value
    auto my_value = future.get();

    // now data on the main thread goes out of scope and the reference count is decreased
    // again. Only when both the thread is done AND this function exits is data deleted.
    return 0;
}
Most objects take up memory, and during assignment that memory isn't updated in one clock cycle. You can protect memory with std::mutex and std::scoped_lock; try looking into those.
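For example, a tiny sketch of that std::mutex/std::scoped_lock idea (all names here are arbitrary, not from the original post):
#include <mutex>
#include <thread>
#include <vector>

std::mutex data_mutex;
std::vector<int> shared_data;

void append(int value)
{
    std::scoped_lock lock(data_mutex);  // locks on construction, unlocks when the scope ends
    shared_data.push_back(value);
}

int main()
{
    std::thread t1(append, 1);
    std::thread t2(append, 2);
    t1.join();
    t2.join();
    return 0;  // shared_data now holds both values, in some order
}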
You often need to synchronize information/processing between threads; look at std::condition_variable (but be aware of its pitfalls: https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables).
C++ has good support classes: std::thread and std::async/std::future. Personally I like the std::future solution because you can return values and exceptions (!) from other threads when you call future.get().
Learn about lambdas and captures; they're vital for use with std::thread/std::async.
Life cycles of objects! When sharing objects between threads you must be sure that one thread doesn't delete objects in use by other threads. Don't use raw pointers and/or new/delete when using threads. I personally often use std::make_shared/std::shared_ptr for data/objects when sharing information between threads.
Another tricky thing is that sometimes you cannot be sure that work in another thread has started after creating it. E.g. when std::async returns, a thread has been created, but it isn't guaranteed to have really started (operating system scheduling etc.). If you want to be really sure it has started, then after the async call returns you will have to wait on a condition variable that you set at the start of the thread function.
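A minimal sketch of that "wait until the thread has really started" idea, using a std::promise/std::future pair instead of a bare condition variable (the names are mine, not from the original post):
#include <future>
#include <iostream>

int main()
{
    std::promise<void> started;
    auto started_future = started.get_future();

    auto result = std::async(std::launch::async, [&started]
    {
        started.set_value();   // signal: the thread function is now really running
        // ... do the real work here ...
        return 42;
    });

    started_future.wait();     // returns only once the lambda has started
    std::cout << "worker is running\n";
    std::cout << "result: " << result.get() << "\n";
    return 0;
}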
I hope these remarks can get you started.
I'm figuring out how to use futures with non-blocking, event-driven code (in a separate thread or not, both), but how can I complete the future from a slot (i.e. resolve the promise based on a signal)?
QByteArray RfidCardReader::startTask(QByteArray send)
{
    if (this->busy == false) {
        this->sendFrame(send);
        QObject::connect(this, &RfidCardReader::frameReady,
                         [=]() { /* this must be the startTask return */ return this->int_read_buffer; });
    } else {
        throw 0; // Handle a queue instead
    }
}

QFuture<QByteArray> RfidCardReader::send(QByteArray passed_send)
{
    return QtConcurrent::run(QThreadPool::globalInstance(), &RfidCardReader::startTask, this, passed_send);
}
Basically, what I want to do, using only one instance, is wrap a serial device (which is synchronous by nature) in a queue of futures, but with only non-blocking code, using signals like &QIODevice::bytesWritten, &QIODevice::readyRead, etc. If there are better approaches to the problem let me know; I would be glad to learn the right way to write readable async code in Qt without blocking in separate threads.
A serial device is asynchronous by nature, and using the serial port concurrently from multiple threads is undefined behavior. You can certainly resolve futures from any thread, but there's nothing in Qt that will give you a future on the same thread. Recall that QFuture is not a class that you can sensibly instantiate on your own; the default-constructed future is useless.
To get an idea of how to handle asynchronous serial I/O, see for example this answer.
Then you can use the undocumented <QFutureInterface> header and create your own implementation that wraps higher-level aspects of your protocol, i.e. commands/requests. You could then group such futures and use a single watcher to determine when they are all done.
Your approach is quite interesting in fact, and I might develop a complete example.
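To make the idea concrete, here is a minimal, untested sketch of resolving a QFuture from the question's frameReady signal via the undocumented QFutureInterface class; RfidCardReader, sendFrame() and int_read_buffer are the question's own names, everything else is illustrative:
#include <QFuture>
#include <QFutureInterface>
#include <QObject>
#include <QSharedPointer>

QFuture<QByteArray> RfidCardReader::send(QByteArray frame)
{
    auto iface = QSharedPointer<QFutureInterface<QByteArray>>::create();
    iface->reportStarted();

    // Resolve the future once the device reports a complete frame.
    auto conn = QSharedPointer<QMetaObject::Connection>::create();
    *conn = QObject::connect(this, &RfidCardReader::frameReady, this,
                             [this, iface, conn]() {
        iface->reportResult(this->int_read_buffer);
        iface->reportFinished();
        QObject::disconnect(*conn);
    });

    this->sendFrame(frame);    // non-blocking write
    return iface->future();
}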
I have some existing code that uses std::future/std::promise that I'd like to integrate with a Qt GUI cleanly.
Ideally, one could just:
std::future<int> future{do_something()};
connect(future, this, &MyObject::resultOfFuture);
and then implement resultOfFuture as a slot that gets one argument: the int value that came out of the std::future<int>. I've added this suggestion as a comment on QTBUG-50676. I like this best because most of my futures/promises are not concurrent anyway, so I'd like to avoid firing up a thread just to wait on them. Also, type inference could then work between the future and the slot's parameter.
But it seems to me that this shouldn't be hard to implement using a wrapper Qt object (e.g., a version of QFutureWatcher that takes a std::future<int>). The two issues with a wrapper are:
the wrapper would have to be concrete in its result type;
the watcher would have to run concurrently in a thread?
Is there a best-practice to implement this sort of connection? Is there another way that can hook into the Qt main loop and avoid thread creation?
std::future is missing continuations. The only way to turn the result of a std::future asynchronously into a function call delivering the result is to launch a thread watching it; and if you want to avoid busy-waiting, you need one such thread per std::future, as there is no way to lazily wait on multiple futures at once.
There are plans to create a future with continuations (a then operation), but they are not in standard C++ as of C++17, let alone C++11.
You could write your own system of future/promise that mimics the interface of std::future and std::promise that does support continuations, or find a library that already did that.
A busy-wait solution that regularly checked if the future was ready could avoid launching a new thread.
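For illustration, a minimal sketch of that polling approach in Qt terms: a small QObject that checks a std::future from the event loop with a QTimer and emits a signal when it becomes ready. FuturePoller and the 100 ms interval are arbitrary choices, not an existing API.
#include <QObject>
#include <QTimer>
#include <chrono>
#include <future>

class FuturePoller : public QObject
{
    Q_OBJECT
public:
    explicit FuturePoller(std::future<int> f, QObject *parent = nullptr)
        : QObject(parent), m_future(std::move(f))
    {
        connect(&m_timer, &QTimer::timeout, this, &FuturePoller::poll);
        m_timer.start(100);   // check readiness every 100 ms
    }

signals:
    void ready(int value);

private slots:
    void poll()
    {
        // wait_for(0) is a non-blocking readiness check
        if (m_future.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            m_timer.stop();
            emit ready(m_future.get());
        }
    }

private:
    QTimer m_timer;
    std::future<int> m_future;
};
Connecting ready(int) to a slot like resultOfFuture then uses ordinary connect() syntax, at the cost of the polling latency.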
In any case, std::experimental::then would make your problem trivial.
future.then([some_state](auto future) {
    try {
        auto x = future.get();
        // send message with x
    } catch (...) {
        // deal with exception
    }
});
You can write your own std::experimental::future, or find an implementation to use yourself, but this functionality cannot be provided without using an extra thread when all you have is a std::future.
I need to implement, in Cocoa, a design that relies on multiple threads.
I started at the CoreFoundation level - I created a CFMessagePort and attached it to the CFRunLoop, but it was very inconvenient as (unlike on other platforms) it needs to have a (system-wide) unique name, and CFMessagePortSendRequest does not process callbacks back to the current thread while waiting. It's possible to create my own CFRunLoopSource object, but building my own thread-safe queue seems like overkill.
I then switched from using POSIX threads to NSThreads, calling performSelector:onThread: to send messages to other threads. This is far easier to use than the CFMessagePort mechanism, but again, performSelector:onThread: does not allow the main thread to send messages back to the current thread - and there is no return value.
All I need is a simple, in-process mechanism (so I hopefully don't need to invent schemes to create 'unique' names) that lets me send a message (and wait for a reply) from thread A to thread B and, while waiting for the reply, allows thread B to send a message (and wait for a reply) to/from thread A.
A simple "A calls B, which re-entrantly calls A" situation that's so usual on a single thread, but is deadlock hell when the messages are between threads.
Use -performSelector:onThread:withObject:waitUntilDone:. The object you pass would be something that has a property or other "slot" that you can put the return value in, e.g.
SomeObject* retObject = [[SomeObject alloc] init];
[anotherObject performSelector: @selector(fillInReturnValue:)  // any selector that computes the result and stores it in retObject
                       onThread: whateverThread
                     withObject: retObject
                  waitUntilDone: YES];
id retValue = [retObject retValue];
If you want to be really sophisticated about it, instead of passing an object of a class you define, use an NSInvocation object and simply invoke it on the other thread (make sure not to invoke the same NSInvocation on two threads simultaneously), e.g.
[invocation performSelectorOnMainThread: @selector(invoke) withObject: nil waitUntilDone: YES];
Edit
If you don't want to wait for the processing on the other thread to complete and you want a return value, you cannot avoid having the other thread call back into your thread. You can still use an invocation, e.g.
[comObject setInvocation: myInvocation];
[comObject setCallingThread: [NSThread currentThread]];
[someObject performSelectorOnMainThread: @selector(runInvocation:) withObject: comObject waitUntilDone: NO];

// in someObject's implementation
-(void) runInvocation: (ComObject*) comObject
{
    [[comObject invocation] invoke];
    [self performSelector: @selector(invocationComplete:)
                 onThread: [comObject callingThread]
               withObject: [comObject invocation]
            waitUntilDone: NO];
}
If you don't want to create a new class to pass the thread and the invocation, use an NSDictionary instead, e.g.
comObject = [NSDictionary dictionaryWithObjectsAndKeys: invocation, @"invocation", [NSThread currentThread], @"thread", nil];
Be careful about object ownership. The various performSelector... methods retain both the receiver and the object until they are done but with asynchronous calls there might be a small window in which they could disappear if you are not careful.
Have you looked into Distributed Objects?
They're generally used for inter-process communication, but there's no real reason it can't be constrained to a single process with multiple threads. Better yet, if you go down this path, your design will trivially scale to multiple processes and even multiple machines.
You are also given the option of specifying behaviour by means of additional keywords like oneway, in, out, inout, bycopy and byref. An article written by David Chisnall (of GNUstep fame) explains the rationale for these.
All that said, the usual caveats apply: are you sure you need a threaded design, etc. etc? There are alternatives, such as using NSOperation (doc here) and NSOperationQueue, which allow you to explicitly state dependencies and let magic solve them for you. Perhaps have a good read of Apple's Concurrency Programming Guide to get a handle (no pun intended) on your options.
I only suggest this as you mentioned trying traditional POSIX threads, which leads me to believe that you may be trying to apply knowledge gleaned from other OSes and not taking full advantage of what OS X has to offer.
It seems that the only implementation that provides safe cross-thread signals, for both the signal class and what's being called in the slot, is Qt (maybe I'm wrong?).
But I cannot use Qt in the project I'm doing. So how can I provide safe slot calls from a different thread (using Boost::Signals2, for example)? Is a mutex inside the slot the only way? I think Signals2 protects itself, but not what's being done inside the slot.
Thanks
You can combine boost::bind and boost ASIO to create Cross-Thread Calls.
// In Thread 2
boost::asio::io_service service;
boost::asio::io_service::work work(service); // so the io_service won't stop when there is no work
service.run(); // run the work loop on thread 2

// In Thread 1
service.post(boost::bind(&YourClass::function, &yourClassInstance, parameter1, parameter2));
Thread 2 will go into a loop and execute your bound function. I think you can also post Boost::Signals2 slot invocations into this loop.
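A more complete, self-contained sketch of that pattern (Worker, handle() and the shutdown details are illustrative, not part of any library API):
#include <boost/asio.hpp>
#include <boost/bind/bind.hpp>
#include <iostream>
#include <memory>
#include <thread>

struct Worker
{
    void handle(int value)
    {
        // Runs on the io_service thread.
        std::cout << "handled " << value << "\n";
    }
};

int main()
{
    boost::asio::io_service service;
    auto work = std::make_unique<boost::asio::io_service::work>(service); // keeps run() alive

    std::thread serviceThread([&service] { service.run(); });  // "Thread 2"

    Worker worker;
    // "Thread 1": post a bound call; it executes on serviceThread.
    service.post(boost::bind(&Worker::handle, &worker, 42));

    work.reset();           // let run() return once all queued handlers are done
    serviceThread.join();
    return 0;
}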
But take care: if you do cross-thread signaling, make sure that the destination object still exists when it is called. You can guarantee that by dropping all connections in your target's destructor (not in its base class destructor; also see the Signals-Trackable Class).
I do not like Boost::Signals2 too much, because of its very long stack traces and compile times (blog post).
It's not a signals-slots implementation exactly, but there's a C++ implementation of Twisted's Deferred pattern that accomplishes a goal similar to that of a cross-thread signal-slot mechanism. If someone doesn't come along and post a better solution, it might be worth a look: http://sourceforge.net/projects/deferred/