I'm running a process asynchronously using QEvent; after the process completes I set a promise and try to retrieve the result with the future's get function.
However, it is not working asynchronously, it's working synchronously.
May I know what I am doing wrong?
code:
Win::Win(QObject *parent) :
    MyCustomWidget(parent)
{
    qDebug() << execute().get();
}

std::future<int> Win::execute()
{
    auto promise = std::make_shared<std::promise<int>>();
    auto temp = [=]
    {
        /// process
        qDebug() << "done";
        promise->set_value(100);
    };
    QApplication::postEvent(this, new MyCustomEvent(temp));
    //// QApplication::processEvents(); // without this line it's not working -- may I know the reason?
    //// But without this line and without future::get it also works.
    return promise->get_future();
}
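For comparison, here is a minimal sketch (my own illustration, not part of the original snippet) of the same promise/future pattern with the work handed to a separate thread, so the blocking get() call has something other than the calling thread to wait on:

#include <future>
#include <iostream>
#include <memory>
#include <thread>

// Hypothetical stand-in for Win::execute(): the work runs on another thread
// instead of being posted to the same thread's event loop, so get() can unblock.
std::future<int> execute()
{
    auto promise = std::make_shared<std::promise<int>>();
    std::thread([promise]
    {
        std::cout << "done\n";
        promise->set_value(100);
    }).detach();
    return promise->get_future();
}

int main()
{
    std::cout << execute().get() << '\n'; // prints "done" then 100
}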
I want to run a bunch of code asynchronously in C++. This is for a GTK GUI application. I want to read the length from an encoder into a variable while the other parts of the code keep running. This code should always be running; when I want the length, I should be able to get the current length from the variable. Can anyone help me with this?
I haven't understood exactly what you want to do, but I think you should read more about std::async.
#include <iostream>
#include <future>

void asyncFunction()
{
    std::cout << "I am inside async function\n";
}

int main()
{
    std::future<void> fn = std::async(std::launch::async, asyncFunction);
    // here some other main thread operations
    return 0;
}
A function that is run asynchronously can also return a value, which can be accessed through the future with the blocking std::future::get method.
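For example, a small sketch (not from the original answer) of retrieving a return value through the future; computeLength here is a hypothetical stand-in for reading the encoder:

#include <future>
#include <iostream>

int computeLength()
{
    return 42; // stand-in for reading the encoder
}

int main()
{
    std::future<int> result = std::async(std::launch::async, computeLength);
    // ... do other work on the main thread ...
    std::cout << result.get() << '\n'; // get() blocks until the value is ready
}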
During testing and debugging of an app, I noticed an exception that mostly happens during debug testing only, inside a for-loop that iterates over a list:
[ERROR:flutter/lib/ui/ui_dart_state.cc(177)] Unhandled Exception: Concurrent modification during iteration: Instance(length:0) of '_GrowableList'.
I have searched around and found that it mostly happens if you change the list itself during the iteration, but I cannot see where that happens in the code:
Main function:
static Future<void> save(EntryModel entry) async {
  ...
  List<TagModel> tagsList = entry.tags;
  List<int> tagIdsInserted = [];
  if (tagsList != null && tagsList.isNotEmpty) {
    for (TagModel tag in tagsList) {
      // Error happens inside this loop
      int tagIdInserted = await TagContract.save(tag); // this function does not alter the tag in any way.
      if (tagIdInserted == null || tagIdInserted <= 0) {
        throw Exception('Invalid TagID!');
      }
      tagIdsInserted.add(tagIdInserted);
    }
  }
What happens is that the first iteration runs fine, but on the second or third one the List<TagModel> tagsList suddenly becomes empty, including in the initial object (the entry passed to the function).
I also noticed that it runs mostly fine without debugging, but I am not sure whether that is just because I am not catching the error.
Thanks in advance.
Try to avoid using await inside a loop; it is just too dangerous.
You have to understand how asynchronous code executes. If an await is encountered and the Future cannot complete synchronously, the runtime will suspend the execution of this function and jump to whatever other jobs are at the top of the queue.
So when the await is encountered, the runtime starts executing some code from who-knows-where, and that code touched your tagsList.
Try to understand the following example; it directly triggers the exception.
void main() {
  List<int> ids = [1, 2, 3];
  test(ids);
  ids.add(1); // If the async function gets suspended, this becomes the top of the queue.
}

void test(List<int> ids) async {
  for (final id in ids) {
    await Future.delayed(Duration(milliseconds: 10));
  }
}
In async programming, avoid writing an await that depends on exposed shared state.
For a list of async tasks, prepare them as an Iterable<Future>, then use Future.wait to synchronize them and get the results with a single await.
For your code:
final results = await Future.wait(tagsList.map((tag) => TagContract.save(tag)));
I am working on a UWP application that loads a file from the application's local data when a button is clicked. For this, I need to get the StorageFolder object for the application's LocalFolder using the StorageFolder::GetFolderFromPathAsync() method, and then use the GetFileAsync() method to get the StorageFile object to read.
I have written the following templates to wait for the responses from async methods like GetFolderFromPathAsync() and GetFileAsync() before proceeding.
template <typename T>
T syncAsyncTask(concurrency::task<T> mainTask) {
    std::shared_ptr<std::promise<T>> done = std::make_shared<std::promise<T>>();
    auto future = done->get_future();
    asyncTaskExceptionHandler<T>(mainTask, [&done](bool didFail, T result) {
        done->set_value(didFail ? nullptr : result);
    });
    future.wait();
    return future.get();
}
template <typename T, typename CallbackLambda>
void asyncTaskExceptionHandler(concurrency::task<T> mainTask, CallbackLambda&& onResult) {
    auto t1 = mainTask.then([onResult = std::move(onResult)](concurrency::task<T> t) {
        bool didFail = true;
        T result;
        try {
            result = t.get();
            didFail = false;
        }
        catch (concurrency::task_canceled&) {
            OutputDebugStringA("Win10 call was canceled.");
        }
        catch (Platform::Exception^ e) {
            OutputDebugStringA("Error during a Win10 call:");
        }
        catch (std::exception&) {
            OutputDebugStringA("There was a C++ exception during a Win10 call.");
        }
        catch (...) {
            OutputDebugStringA("There was a generic exception during a Win10 call.");
        }
        onResult(didFail, result);
    });
}
Issue: when I call the syncAsyncTask() method with any task to get its response, it keeps waiting at future.wait(), because mainTask never completes and the promise never has its value set.
See the code below:
void testStorage::MainPage::Btn_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    Windows::Storage::StorageFolder^ localFolder = Windows::Storage::ApplicationData::Current->LocalFolder;
    auto task = concurrency::create_task(Windows::Storage::StorageFolder::GetFolderFromPathAsync(localFolder->Path));
    auto folder = syncAsyncTask<Windows::Storage::StorageFolder^>(task);
    printString(folder->Path);
}

void printString(Platform::String^ text) {
    std::wstring fooW(text->Begin());
    std::string fooA(fooW.begin(), fooW.end());
    const char* charStr = fooA.c_str();
    OutputDebugStringA(charStr);
}
Running environment:
VS2017 (tried with both C++14 and C++17; same issue)
Windows 10 RS5 Build #17763
Has anyone ever faced this issue?
Please help!! Thanks in advance.
I was able to take the above code and create a simple application that reproduced this issue. Long story short, I was able to get future.wait() to return by telling the continuation in asyncTaskExceptionHandler to run on a background thread:
template <typename T, typename CallbackLambda>
void asyncTaskExceptionHandler(concurrency::task<T> mainTask, CallbackLambda&& onResult) {
    // debug
    OutputDebugStringA(mainTask.is_apartment_aware() ? "task is apartment aware\n" : "task is not apartment aware\n");
    auto t1 = mainTask.then([onResult = std::move(onResult)](concurrency::task<T> t) {
        bool didFail = true;
        T result;
        try {
            result = t.get();
            didFail = false;
        }
        catch (concurrency::task_canceled&) {
            OutputDebugStringA("Win10 call was canceled.");
        }
        catch (Platform::Exception^ e) {
            OutputDebugStringA("Error during a Win10 call:");
        }
        catch (std::exception&) {
            OutputDebugStringA("There was a C++ exception during a Win10 call.");
        }
        catch (...) {
            OutputDebugStringA("There was a generic exception during a Win10 call.");
        }
        onResult(didFail, result);
    // It works with this continuation context:
    }, concurrency::task_continuation_context::use_arbitrary());
}
Assuming the code I used was correct, what I believe to be happening is that we created a deadlock. What we are saying with the above code is:
1. On the UI/STA thread, create/handle an async operation from GetFolderFromPathAsync.
2. Pass this task off to our syncAsyncTask, which in turn passes it off to asyncTaskExceptionHandler.
3. asyncTaskExceptionHandler adds a continuation to this task, which schedules it to run. By default, the continuation runs on the thread that scheduled it; in this case, that is the UI/STA thread!
4. Once the continuation is scheduled, we return to syncAsyncTask to finish. After our call to asyncTaskExceptionHandler we hit future.wait(), which blocks until the promise value is set.
5. This prevents our UI thread from finishing execution of syncAsyncTask, but it also prevents our continuation from running, since it is scheduled on the same thread that is blocking!
In other words, we are waiting on the UI thread for an operation to complete that cannot begin until the UI thread is free, thus causing our deadlock.
By using concurrency::task_continuation_context::use_arbitrary() we tell the task that it's okay to use a background thread if necessary (which in this case it is), and everything completes as intended.
For documentation on this, as well as some example code illustrating async behavior, see the Creating Asynchronous Operations in C++ for UWP Apps documentation.
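To see the shape of this deadlock without any WinRT machinery, here is a small standard-library sketch (my own illustration; the job queue is a toy stand-in for the UI/STA apartment):

#include <functional>
#include <future>
#include <iostream>
#include <queue>
#include <thread>

int main()
{
    std::queue<std::function<void()>> jobs; // toy stand-in for the UI thread's event queue
    std::promise<int> done;
    std::future<int> result = done.get_future();

    // Schedule the "continuation" on this same thread's queue.
    jobs.push([&] { done.set_value(42); });

    // Deadlock: blocking here would mean the queued job never runs, because
    // the only thread that could run it is the one that is waiting.
    // result.wait();

    // The equivalent of use_arbitrary(): let another thread run the job.
    std::thread worker([&] { jobs.front()(); jobs.pop(); });
    std::cout << result.get() << "\n"; // prints 42
    worker.join();
}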
I'm trying to write my own torrent program based on libtorrent-rasterbar and I'm having problems getting the alert mechanism to work correctly. Libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to work as follows:
The intention of the function is that the client wakes up its main thread, to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far looks like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });
while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }

    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);
    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert returns NULL (since there are no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul), the whole loop waits forever (because of the second sentence from the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of loop iterations per second low enough while keeping latency acceptable.
But that is really more of a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification. It should be protected by the same mutex that owns the condition variable.
bool _alert_pending = false;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });

    if (_alert_pending) {
        _alert_pending = false;
        ul.unlock();
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
I'm just getting into concurrent programming. Most probably my issue is very common, but since I can't find a good name for it, I can't google it.
I have a C++ UWP application where I try to apply the MVVM pattern, but I guess the pattern, and even the fact that it's UWP, is not really relevant.
First, I have a service interface that exposes an operation:
struct IService
{
    virtual task<int> Operation() = 0;
};
Of course, I provide a concrete implementation, but it is not relevant for this discussion. The operation is potentially long-running: it makes an HTTP request.
Then I have a class that uses the service (again, irrelevant details omitted):
class ViewModel
{
    unique_ptr<IService> service;
public:
    task<void> Refresh();
};
I use coroutines:
task<void> ViewModel::Refresh()
{
    auto result = co_await service->Operation();
    // use result to update UI
}
The Refresh function is invoked on a timer every minute, or in response to a user request. What I want is: if a Refresh operation is already in progress when a new one is started or requested, then abandon the second one and just wait for the first one to finish (or time out). In other words, I don't want to queue all the calls to Refresh; if a call is already in progress, I prefer to skip the new call until the next timer tick.
My attempt (probably very naive) was:
mutex refresh;

task<void> ViewModel::Refresh()
{
    unique_lock<mutex> lock(refresh, try_to_lock);
    if (!lock)
    {
        // lock.release(); commented out as harmless but useless => irrelevant
        co_return;
    }
    auto result = co_await service->Operation();
    // use result to update UI
}
Edit after the original post: I commented out the line in the code snippet above, as it makes no difference. The issue is still the same.
But of course an assertion fails: unlock of unowned mutex. I guess the problem is the unlock of the mutex by the unique_lock destructor, which happens in the continuation of the coroutine, on a different thread from the one on which it was originally locked.
Using Visual C++ 2017.
Use std::atomic_bool:
std::atomic_bool isRunning{ false };

// inside ViewModel::Refresh()
if (isRunning.exchange(true, std::memory_order_acq_rel) == false)
{
    try
    {
        auto result = co_await service->Operation();
        isRunning.store(false, std::memory_order_release);
        // use result
    }
    catch (...)
    {
        isRunning.store(false, std::memory_order_release);
        throw;
    }
}
Two possible improvements: wrap isRunning.store in a RAII class, and use std::shared_ptr<std::atomic_bool> if the lifetime of the atomic_bool is scoped.
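A minimal sketch of the first suggestion (my own illustration, assuming the atomic flag outlives the coroutine): a guard whose destructor resets the flag even if the awaited operation throws.

#include <atomic>

// Resets the running flag when it goes out of scope, including on exceptions.
// The guard lives in the coroutine frame, so it is destroyed when the
// coroutine finishes, regardless of which thread it resumed on.
struct RunningGuard
{
    explicit RunningGuard(std::atomic_bool& flag) : flag_(flag) {}
    ~RunningGuard() { flag_.store(false, std::memory_order_release); }
    RunningGuard(const RunningGuard&) = delete;
    RunningGuard& operator=(const RunningGuard&) = delete;

private:
    std::atomic_bool& flag_;
};

// Usage inside Refresh():
//   if (isRunning.exchange(true, std::memory_order_acq_rel) == false)
//   {
//       RunningGuard guard(isRunning);
//       auto result = co_await service->Operation();
//       // use result
//   }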