Free memory using QFuture on cancel thread - c++

I am writing a program that uses QtConcurrent to start threads. In my case, I use it to render a QGraphicsView when I use the mouse scroll.
I am starting the threads using the following code:
if (future.isRunning()) {
    future.cancel();
}
future = QtConcurrent::run(this, &AeVectorLayer::render, renderparams, pos);
watcher.setFuture(future);
When the thread is finished, I catch the finished signal with a QFutureWatcher.
This is my render function:
QList<AeGraphicsItem*> AeVectorLayer::render(Ae::renderParams renderparams, int pos)
{
    AeVectorHandler *coso = new AeVectorHandler();
    coso->openDataset(AeLayer::absoluteFilePath);
    coso->setTransformCoordinates(myEPSG);
    QList<AeGraphicsItem*> bla = coso->getItems(renderparams.sceneRect.x(),
        renderparams.sceneRect.y(), renderparams.sceneRect.width(),
        renderparams.sceneRect.height(), renderparams.zoom, color, this);
    for (int i = 0; i < bla.size(); i++)
        bla.at(i)->setZValue((qreal)pos);
    delete coso;
    return bla;
}
As you can see, I have a QList<QGraphicsItem*> in my render function. How can I destroy this list when the future is cancelled? I understand that in my code I am redefining the future variable, but I do not know how to avoid it.

Stop trying to manually manage memory and instead use a smart pointer that fits your use case. Because you use the move-unaware QFuture, you will need std::shared_ptr. Once the QFuture/QFutureWatcher go out of scope and you hold no more shared_ptr instances, the resource is deleted. So in your case your render function should return a QList<std::shared_ptr<AeGraphicsItem>>. Be careful when you transfer ownership from the shared_ptrs to e.g. a QGraphicsScene: std::shared_ptr has no release(), so you must make sure the item is not owned twice once the scene takes over.
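For example, a minimal sketch of that return-type change, reusing the names from the question (if nothing ever consumes the future's result, the last shared_ptr copies die with the QFuture and the items are freed automatically):
QList<std::shared_ptr<AeGraphicsItem>> AeVectorLayer::render(Ae::renderParams renderparams, int pos)
{
    auto coso = std::make_unique<AeVectorHandler>();  // freed automatically on return
    coso->openDataset(AeLayer::absoluteFilePath);
    coso->setTransformCoordinates(myEPSG);

    QList<std::shared_ptr<AeGraphicsItem>> items;
    const QList<AeGraphicsItem*> raw = coso->getItems(renderparams.sceneRect.x(),
        renderparams.sceneRect.y(), renderparams.sceneRect.width(),
        renderparams.sceneRect.height(), renderparams.zoom, color, this);
    for (AeGraphicsItem *item : raw) {
        item->setZValue((qreal)pos);
        items.append(std::shared_ptr<AeGraphicsItem>(item));  // the shared_ptr now owns the item
    }
    return items;
}
The future, the watcher, and any consumers then use QFuture<QList<std::shared_ptr<AeGraphicsItem>>> accordingly.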
Note that your isRunning check followed by cancel is fundamentally flawed: the future could be running when you call isRunning but be finished by the time you call cancel. If you want to cancel it, just call cancel. Also note that you cannot meaningfully cancel QFutures returned by QtConcurrent::run so what you're doing is very very wrong in and of itself.

Can't save CDocument in a worker thread -- object is destroyed from memory before thread starts

Overview
I need to save a CDocument in a background worker thread. There is a point in our MFC application which prompts the user to save before continuing. Normally, they are able to continue without saving, and there is no problem. However, occasionally, we need that document later in the process, so if the user clicks "No", we want to save a temp version of the file in the background without making the user wait for the save to continue.
Problem
When I launch AfxBeginThread(SaveDocumentThread, &threadInput), threadInput has been cleared from memory before SaveDocumentThread starts.
Code
BOOL SPackagerDoc::OnSaveDocument(IN LPCTSTR lpszPathName)
{
    ProcessDocumentThreadInput threadInput(this, lpszPathName);

    // Temp Save Mode
    if (m_bTempMode)
    {
        m_TempSaveThread = AfxBeginThread(SaveDocumentThread, &threadInput);

        // This fixes the problem, but is considered unstable
        // if (m_TempSaveThread->m_hThread)
        //     WaitForSingleObject(m_TempSaveThread->m_hThread, 500);

        return TRUE;
    }

    // Normal save mode
    SFileLoadingDialog loadingDialog(SFileLoadingDialog::SAVE, lpszPathName, SaveDocumentThread, &threadInput);
    BOOL result = (BOOL)loadingDialog.DoModal();
    return result;
}
StUInt32 SPackagerDoc::SaveDocumentThread(IN StVoid* pParam)
{
    ProcessDocumentThreadInput* input = (ProcessDocumentThreadInput*)pParam;

    ASSERT_NOT_NULL(input);
    ASSERT_NOT_NULL(input->pPackager);
    ASSERT_NOT_NULL(input->pszPathName);

    CString path_name(input->pszPathName);
    BOOL result = input->pPackager->SPackagerDocBase::OnSaveDocument(path_name);
    return result;
}
If I uncomment WaitForSingleObject(..., 500); then the thread starts, all the information is present, and there are no errors. But if I remove those lines, then in SaveDocumentThread input is NULL and all the data is zeros or garbage.
Is there a way to ensure SaveDocumentThread has started before moving on? I.e., wait for the thread to start, but not for a fixed amount of time (500 ms); 500 ms may not be a sufficient wait time on some other computers.
Is there an "official" way to do this?
This is an issue of variable scope.
The comments below mark the lifetime of the local variable threadInput.
ProcessDocumentThreadInput threadInput(this, lpszPathName);  // <=== threadInput created
if (m_bTempMode)
{
    m_TempSaveThread = AfxBeginThread(SaveDocumentThread, &threadInput);
    // This fixes the problem, but is considered unstable
    // if (m_TempSaveThread->m_hThread)
    //     WaitForSingleObject(m_TempSaveThread->m_hThread, 500);
    return TRUE;  // <=== threadInput destructed
}
Your WaitForSingleObject() workaround merely delays the destruction of threadInput, which is why it appears to work.
To outlive the local scope, you can:
Store it in a class member variable.
Store it as a (preferably smart) pointer and handle its destruction (preferably not manually).
Edit:
As #Jabberwocky stated, OnSaveDocument() might be called more than once, since it is also called by the background thread.
I'd suggest refactoring the save logic into its own function and calling it from the if and else branches separately.
As others have pointed out, the problem is the lifetime of threadInput ends before the thread begins.
You can dynamically allocate the instance of ProcessDocumentThreadInput and pass the pointer to that instance to the thread.
auto* threadInput = new ProcessDocumentThreadInput(this, lpszPathName);
...
AfxBeginThread(SaveDocumentThread, threadInput);
However, in this case, the responsibility to release the memory gets messy.
Since you put the C++11 tag in your question, you might want to make use of std::shared_ptr or std::unique_ptr and pass it to the thread, which would lead you to using std::thread instead of AfxBeginThread. (BTW, I have no experience using MFC.)
BOOL SPackagerDoc::OnSaveDocument(IN LPCTSTR lpszPathName)
{
    ...
    std::thread t(SaveDocumentThread, std::make_unique<ProcessDocumentThreadInput>(this, lpszPathName));
    ...
}
...
StUInt32 SaveDocumentThread(std::unique_ptr<ProcessDocumentThreadInput>&& threadInput)
{
    ...
}
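A slightly fuller, framework-free sketch of that ownership transfer (the type and path below are made-up stand-ins, not the question's MFC types): the unique_ptr is moved into the new thread, so the thread owns the input and frees it when it is done.
#include <memory>
#include <string>
#include <thread>

// Hypothetical stand-in for ProcessDocumentThreadInput.
struct ThreadInput
{
    std::string pathName;
};

void saveDocumentThread(std::unique_ptr<ThreadInput> input)
{
    // This thread is now the sole owner of *input; it is freed automatically
    // when the function returns.
    // ... perform the actual save using input->pathName ...
}

int main()
{
    auto input = std::make_unique<ThreadInput>();
    input->pathName = "C:/temp/document.tmp";

    // std::thread moves the unique_ptr into the new thread, so the data cannot
    // go out of scope underneath it.
    std::thread t(saveDocumentThread, std::move(input));
    t.join();  // or keep the thread object around and join it later
}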

What is the correct way to pass a member variable into a lambda function that will be executed in another thread in c++?

In my application I have an event thread and a render thread. I wrote a custom thread class that is able to execute tasks (function pointers) in a fixed interval so I can pass work between different threads if needed.
Here is the setup:
In the event thread I parse a file that will create models, but before I can create models I have to delete the old ones, so naturally I just add the task of clearing the models like this:
WindowThread::getInstance()->addTask([this]() {
    this->viewport->clearModels();
});
This function will eventually be executed by the window thread, which seems to work fine. However, when debugging with Valgrind, it reports an invalid read of size 8 at the line where clearModels() is called. After a bit of googling, the issue seems to be that the viewport pointer (or this, I don't know for sure) is outside of the memory address area of the window thread, which makes sense since the lambda function was created in the event thread.
Is there a way to fix this "error" by somehow moving the pointer/lambda into the other thread's memory area?
FThread::addTask(const std::function<void()> &task) adds the given task to a std::queue (which is locked by a mutex beforehand) from the thread it is called in. Eventually the queue will be processed by the thread the task was added to.
void FThread::addTask(const std::function<void()> &task)
{
    if (this->m_running && this->m_taskQueueMode != QUEUE_DISABLED)
    {
        this->m_taskQueueMutex.lock();
        this->m_backTaskQueue->push(task);
        this->m_taskQueueMutex.unlock();
    }
}
void FThread::processTaskQueue()
{
    this->m_taskQueueMutex.lock();
    std::queue<std::function<void()>> *tmp = this->m_frontTaskQueue;
    this->m_frontTaskQueue = this->m_backTaskQueue;
    this->m_backTaskQueue = tmp;
    this->m_taskQueueMutex.unlock();

    while (!this->m_frontTaskQueue->empty())
    {
        this->m_frontTaskQueue->front()();
        this->m_frontTaskQueue->pop();
    }
}
The task queue is set up in a double-buffered way so that processing the current tasks doesn't block adding new ones.
EDIT:
The clearModels() method is just deleting every pointer in a vector and then clearing the vector.
void Viewport::clearModels()
{
    if (!this->models.empty())
    {
        for (auto *model : this->models)
            delete model;
        this->models.clear();
    }
    this->hiddenModels.clear();
}

Thread is not working properly

I have a class Machine with some member functions. In makeProduct I start a thread that calls t_make and then return. While the thread is doing its work in the member function, I still want to use the Machine (status check, resources left, etc.).
I started like this
//machine.h
private:
    int stat;
    std::thread t;
    std::mutex m;
    bool working;
//machine.cpp
int Machine::makeProduct(){
    if (working == true) return -1;
    t = std::thread(&Machine::t_make, this);
    return 0;
}
void Machine::t_make(){
    std::lock_guard<std::mutex> guard(m);
    //do some time-consuming work, change "stat" in progress
}
void Machine::Status(int &copStat){
    copStat = stat;
}
Machine::~Machine(){ if (t.joinable()) t.join(); }
//main.cpp
...
Machine m;
m.makeProduct();
int getStat = 0;
m.Status(getStat);
if (getStat == 1) cout << "Product in making";
...
The problem is that when I call makeProduct() and right after that Status(), copStat doesn't change, indicating that no work was done.
Am I using t or t_make wrong? I tried putting a lock_guard in every method, but the threads don't intertwine. Or maybe I call t.join() at the wrong time, but let me just mention that if I place t.join() right after t = std::thread(&Machine::t_make, this); everything works out fine.
When you call Status() right after you call makeProduct(), there's a good chance that the new thread hasn't started doing anything yet. You are still in the original thread, and the new thread has to set up and start running.
Your join in the destructor is not really meaningful for this exercise. If you wanted to make sure to collect the result and do something with it as Machine goes out of scope it may make sense, but it isn't meaningful to your question about checking Status. If you want Status() to only return you the value after t_make() is finished, then moving your join() code to Status would work.
Look at the Futures in the standard threading library: http://en.cppreference.com/w/cpp/thread#Futures. These are utilities for executing asynchronous tasks and getting the result when the task is complete.
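For example (my sketch, not the question's code), std::async runs a task on another thread and the returned future's get() blocks until the result is ready:
#include <future>
#include <iostream>

int makeProduct()  // stand-in for the time-consuming t_make work
{
    // ... do the work ...
    return 1;  // e.g. a status code
}

int main()
{
    std::future<int> result = std::async(std::launch::async, makeProduct);
    // ... do other things while the task runs ...
    std::cout << "product status: " << result.get() << '\n';  // blocks until the task is done
}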
If t_make is modifying 'stat', then your Status function should acquire the lock before using 'stat' in the assignment of the copStat. The memory access is currently unsafe.
As the code stands right now, if you're expecting the t_make call to complete before calling Status, there is nothing forcing this to happen. As is, two separate threads will be autonomously completing these actions: one thread calling t_make and one thread calling Status. There is no guarantee as to what order this happens in. (This changes if you add a lock to Status.)
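A sketch of that locking, reusing the member names from the question; note that because t_make() holds the mutex for its whole run, Status() will now block until the work is done (an std::atomic<int> stat would avoid that):
void Machine::t_make()
{
    std::lock_guard<std::mutex> guard(m);
    // ... time-consuming work that updates "stat" ...
}

void Machine::Status(int &copStat)
{
    std::lock_guard<std::mutex> guard(m);  // same mutex as t_make, so the read is safe
    copStat = stat;
}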
Also, could you update your example to show how you're determining that copStat is never populated?

Is it safe to work on an object that deleteLater was called on

I was thinking about writing method like this:
QString getData() {
    QNetworkReply *reply = getReply();
    reply->deleteLater();
    return QString::fromUtf8(reply->readAll()).trimmed();
}
Is it safe?
If I'm forced to write it like this:
QString getData() {
    QNetworkReply *reply = getReply();
    QString result = QString::fromUtf8(reply->readAll()).trimmed();
    reply->deleteLater();
    return result;
}
I'm copying the QString twice (am I? Once when it's put into result and a second time when returning it by value), which I wanted to avoid.
From the deleteLater docs:
Schedules this object for deletion.
The object will be deleted when control returns to the event loop. If the event loop is not running when this function is called (e.g. deleteLater() is called on an object before QCoreApplication::exec()), the object will be deleted once the event loop is started.
So what you are doing there is safe. Obviously handing out references or pointers to that object (or its members) that might be persisted is wrong. But if you're returning copies, you're fine.
But what you're doing might or might not do what you want to do. readAll doesn't block, it returns the data currently available. Meaning that a single call to readAll might only read a partial response - unless you've ensured that all data has arrived through other means.
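If you need that guarantee and getData() runs in the reply's thread, one option (my assumption, not something the answer prescribes) is to spin a local event loop until the reply reports it has finished; per the quote below, the nested loop will not trigger the deferred deletion:
QString getData() {
    QNetworkReply *reply = getReply();
    reply->deleteLater();  // deferred, so the reply stays valid below

    if (!reply->isFinished()) {
        QEventLoop loop;
        QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
        loop.exec();  // returns once finished() has been emitted
    }
    return QString::fromUtf8(reply->readAll()).trimmed();
}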
Other things to note, from the docs:
Note that entering and leaving a new event loop (e.g., by opening a modal dialog) will not perform the deferred deletion; for the object to be deleted, the control must return to the event loop from which deleteLater() was called.
So the only thing to worry about when doing this type of thing would be calling functions that somehow re-enter the "current" event loop. But that won't happen if that is done via QCoreApplication::processEvents:
In the event that you are running a local loop which calls this function continuously, without an event loop, the DeferredDelete events will not be processed.
So that's covered too. The deferred deletion logic is pretty complex, but safe under normal circumstances. If you're digging very deep into the Qt internals (or calling code that might do something fishy there), be defensive. But for normal code flow, deleteLater is safe as long as you don't have dangling references (or pointers) that might persist.
What does deleteLater do? From its name, I would expect that it registers the object for deletion at some later point in time (end of transaction? end of session?). If so, you can safely use it as long as that later point in time has not occurred. The only issue is knowing when that point occurs, but for things like end of transaction or end of session, you're probably safe: if your function was called within a transaction or session, the transaction or session will not end until you return.
It is safe, but you'd better not use deleteLater at all, because
The object will be deleted when control returns to the event loop. If
the event loop is not running when this function is called (e.g.
deleteLater() is called on an object before QCoreApplication::exec()),
the object will be deleted once the event loop is started.
means that the object can be deleted, well... never. This pretends to work like a GC, but it is even worse:
class A: public QObject
{
    char x[10000000];
};

void process()
{
    A* a = new A();
    //delete a;
    a->deleteLater();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    for (int k = 0; k < 1000000; ++k) {
        process();
    }
    return a.exec();
}
At the very least it is not idiomatic C++, which uses RAII.
On the other side, copying a QString is a cheap operation, because QString uses the copy-on-write idiom.
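If you prefer the RAII route this answer alludes to (my sketch, not the answer's code), a scoped smart pointer deletes the reply deterministically when the function returns; this assumes getData() is not itself running inside a slot connected to that reply:
QString getData() {
    QScopedPointer<QNetworkReply> reply(getReply());  // deleted on every return path
    return QString::fromUtf8(reply->readAll()).trimmed();
}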

Thread-Safe implementation of an object that deletes itself

I have an object that is called from two different threads, and after it has been called by both, it destroys itself via "delete this".
How do I implement this in a thread-safe way? Thread-safe means that the object destroys itself exactly once, and only after the second callback.
I created some example code:
class IThreadCallBack
{
    virtual void CallBack(int) = 0;
};

class M: public IThreadCallBack
{
private:
    bool t1_finished, t2_finished;
public:
    M(): t1_finished(false), t2_finished(false)
    {
        startMyThread(this, 1);
        startMyThread(this, 2);
    }

    void CallBack(int id)
    {
        if (id == 1)
        {
            t1_finished = true;
        }
        else
        {
            t2_finished = true;
        }

        if (t1_finished && t2_finished)
        {
            delete this;
        }
    }
};

int main(int argc, char **argv) {
    M* MObj = new M();
    while (true);
}
Obviously I can't use a Mutex as a member of the object and lock around the delete, because this would also delete the Mutex. On the other hand, if I set a "toBeDeleted" flag inside the mutex-protected area where the finished flag is set, I feel unsure whether there are situations where the object isn't deleted at all.
Note that the thread-implementation makes sure that the callback method is called exactly one time per thread in any case.
Edit / Update:
What if I change Callback(..) to:
void CallBack(int id)
{
    mMutex.Obtain();
    if (id == 1)
    {
        t1_finished = true;
    }
    else
    {
        t2_finished = true;
    }
    bool both_finished = (t1_finished && t2_finished);
    mMutex.Release();

    if (both_finished)
    {
        delete this;
    }
}
Can this be considered safe? (with mMutex being a member of the M class)
I think it is, as long as I don't access any member after releasing the mutex?!
Use Boost's Smart Pointer. It handles this automatically; your object won't have to delete itself, and it is thread safe.
Edit:
From the code you've posted above, I can't really say; I'd need more info. But you could do it like this: each thread has a shared_ptr object and when the callback is called, you call shared_ptr::reset(). The last reset will delete M. Each shared_ptr could be stored with thread-local storage in each thread. So in essence, each thread is responsible for its own shared_ptr.
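A sketch of that idea with std::shared_ptr (which behaves the same as Boost's); it assumes M is changed so that it no longer starts its own threads or calls delete this:
#include <memory>
#include <thread>

void work(std::shared_ptr<M> m, int id)
{
    // ... this thread's actual work with *m ...
}  // this thread's shared_ptr copy is released here

int main()
{
    std::thread t1, t2;
    {
        auto m = std::make_shared<M>();
        t1 = std::thread(work, m, 1);  // each thread gets its own shared_ptr copy
        t2 = std::thread(work, m, 2);
    }  // main's copy is released here
    t1.join();
    t2.join();  // whichever copy is released last deletes M
}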
Instead of using two separate flags, you could consider setting a counter to the number of threads that you're waiting on and then using interlocked decrement.
Then you can be 100% sure that when the thread counter reaches 0, you're done and should clean up.
For more info on interlocked decrement on Windows, on Linux, and on Mac.
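The same idea sketched with std::atomic, the portable equivalent of those interlocked calls (startMyThread and IThreadCallBack are the names from the question; the two flags are replaced by a counter):
#include <atomic>

class M : public IThreadCallBack
{
    std::atomic<int> m_pending;
public:
    explicit M(int threadCount) : m_pending(threadCount)
    {
        for (int id = 1; id <= threadCount; ++id)
            startMyThread(this, id);
    }

    void CallBack(int /*id*/)
    {
        // fetch_sub returns the value *before* the decrement, so exactly one
        // thread (the one that sees 1) performs the delete.
        if (m_pending.fetch_sub(1) == 1)
            delete this;
    }
};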
I once implemented something like this that avoided the ickiness and confusion of delete this entirely, by operating in the following way:
Start a thread that is responsible for deleting these sorts of shared objects, which waits on a condition
When the shared object is no longer being used, instead of deleting itself, have it insert itself into a thread-safe queue and signal the condition that the deleter thread is waiting on
When the deleter thread wakes up, it deletes everything in the queue
If your program has an event loop, you can avoid the creation of a separate thread for this by creating an event type that means "delete unused shared objects" and have some persistent object respond to this event in the same way that the deleter thread would in the above example.
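A rough sketch of that scheme (all names here are made up; it also assumes deleting through IThreadCallBack* is valid, i.e. the base class has a virtual destructor):
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

class DeleterThread
{
public:
    DeleterThread() : m_worker(&DeleterThread::run, this) {}

    ~DeleterThread()
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_stop = true;
        }
        m_cv.notify_one();
        m_worker.join();
    }

    // Called by the shared objects instead of "delete this".
    void scheduleDelete(IThreadCallBack *obj)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_garbage.push(obj);
        }
        m_cv.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        for (;;) {
            m_cv.wait(lock, [this] { return m_stop || !m_garbage.empty(); });
            while (!m_garbage.empty()) {
                IThreadCallBack *obj = m_garbage.front();
                m_garbage.pop();
                lock.unlock();
                delete obj;  // deletion happens outside the lock
                lock.lock();
            }
            if (m_stop)
                return;
        }
    }

    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<IThreadCallBack*> m_garbage;
    bool m_stop = false;
    std::thread m_worker;  // declared last so it starts after the other members exist
};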
I can't imagine that this is possible, especially within the class itself. The problem is twofold:
1) There's no way to notify the outside world not to call the object so the outside world has to be responsible for setting the pointer to 0 after calling "CallBack" iff the pointer was deleted.
2) Once two threads enter this function you are, and forgive my french, absolutely fucked. Calling a function on a deleted object is UB, just imagine what deleting an object while someone is in it results in.
I've never seen "delete this" as anything but an abomination. Doesn't mean it isn't sometimes, on VERY rare conditions, necessary. Problem is that people do it way too much and don't think about the consequences of such a design.
I don't think "to be deleted" is going to work well. It might work for two threads, but what about three? You can't protect the part of the code that calls delete, because you're deleting the protection (as you state) and because of the UB you'll inevitably cause. So the first one goes through, sets the flag and aborts... which of the rest is going to call delete on the way out?
The more robust implementation would be to implement reference counting. For each thread you start, increase a counter; for each callback call decrease the counter and if the counter has reached zero, delete the object. You can lock the counter access, or you could use the Interlocked class to protect the counter access, though in that case you need to be careful with potential race between the first thread finishing and the second starting.
Update: And of course, I completely ignored the fact that this is C++. :-) You should use InterlockedDecrement to update the counter instead of the C# Interlocked class.