I have
struct MyWidget : QWidget {
    // non-GUI related stuff:
    int data;
    int doSth();
};
I need to access a MyWidget instance from another thread (i.e. not the main thread). Is there any way to do that safely? I understand that I cannot access GUI-related functions because some backends (e.g. MacOSX/Cocoa) don't support that. However, I only need to access data or doSth() in this example. But from what I understand, there is simply no way to guarantee the lifetime of the object - i.e. if the parent window with that widget closes, the MyWidget instance gets deleted.
Or is there a way to guarantee the lifetime? I guess QSharedPointer doesn't work because the QWidget does its lifetime handling internally, depending on the parent widget. QPointer of course also doesn't help because it is only weak and there is no locking mechanism.
My current workaround is basically:
int widget_doSth(QPointer<MyWidget> w) {
    int ret = -1;
    execInMainThread_sync([&]() {
        if(w)
            ret = w->doSth();
    });
    return ret;
}
(execInMainThread_sync works by using QMetaMethod::invoke to call a method in the main thread.)
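For reference, here is a minimal sketch of how such a helper could be implemented. It is an assumption on my part (the question only says it uses QMetaMethod::invoke); the sketch instead uses the functor overload of QMetaObject::invokeMethod, which requires Qt 5.10 or newer, with a blocking queued connection so the call returns only after the functor has run in the main thread:
#include <QCoreApplication>
#include <QMetaObject>
#include <QThread>
#include <functional>

// Hypothetical helper, not the OP's actual implementation.
inline void execInMainThread_sync(const std::function<void()> &fn) {
    if (QThread::currentThread() == qApp->thread()) {
        fn(); // already in the main thread, run directly
        return;
    }
    // Runs fn in the context of qApp (main thread) and blocks until it is done.
    QMetaObject::invokeMethod(qApp, fn, Qt::BlockingQueuedConnection);
}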
However, that workaround doesn't work anymore for a specific reason (I will explain why later, but that doesn't matter here). Basically, I am not able to execute something in the main thread at that point (for some complicated deadlock reasons).
Another workaround I'm currently thinking about is to add a global mutex which will guard the MyWidget destructor, and in the destructor, I'm cleaning up other weak references to the MyWidget. Then, elsewhere, when I need to ensure the lifetime, I just lock that mutex.
The reason why my current workaround doesn't work anymore (and that is still a simplified version of the real situation):
In MyWidget, the data is actually a PyObject*.
In the main thread, some Python code gets called. (It's not really possible to avoid any Python code calls at all in the main thread in my app.) That Python code ends up doing some import, which is guarded by some Python-import-mutex (Python doesn't allow parallel imports.)
In some other Python thread, some other import is called. That import now locks the Python-import-mutex. And while it's doing its thing, it does some GC cleanup at some point. That GC cleanup calls the traverse function of some object which holds that MyWidget. Thus, it must access the MyWidget. However, execInMainThread_sync (or equivalently working solutions) will deadlock because the main thread currently waits for the Python-import-lock.
Note: The Python global interpreter lock is not really the problem. Of course it gets unlocked before any execInMainThread_sync call. However, I cannot really check for any other potential Python/whatever locks. In particular, I am not allowed to just unlock the Python-import-lock -- it's there for a reason.
One solution you might think of is to really just avoid any Python code at all in the main thread. But that has a lot of drawbacks, e.g. it would be slow, complicated and ugly (the GUI basically only shows data from Python, so there would need to be a huge proxy/wrapper around it all). And I think I would still need to wait at some points for the Python data, so I would just introduce the possible deadlock situation at some other point.
Also, all the problems would just go away if I could access MyWidget safely from another thread. Introducing a global mutex is the much cleaner and shorter solution, compared to above.
You can use the signal/slot mechanism, but it can be tedious if the number of GUI controls is large. I'd recommend a single signal and slot to control the GUI. Send over a struct with all the info needed for updating the GUI.
void SomeWidget::updateGUISlot(struct Info const& info)
{
    firstControl->setText(info.text);
    secondControl->setValue(info.value);
}
You don't need to worry about emitting signals if the recipient is deleted; this detail is handled by Qt. Alternatively, you can wait for your threads to exit after exiting the GUI thread's event loop. You'll need to register the struct with Qt.
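For example, a minimal sketch of that registration (assuming the struct is called Info, as in the slot above; the Worker names are only illustrative): the type must be known to Qt's meta-type system before it can travel through a queued, i.e. cross-thread, connection.
#include <QMetaType>
#include <QString>

struct Info {
    QString text;
    int value;
};
Q_DECLARE_METATYPE(Info)

// Somewhere during startup, before the first cross-thread emit:
//   qRegisterMetaType<Info>();
// The connection itself is queued automatically across threads:
//   QObject::connect(worker, &Worker::infoReady,
//                    someWidget, &SomeWidget::updateGUISlot);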
EDIT:
From what I've read from your extended question, your problems are related to communication between threads. Try pipes, (POSIX) message queues, sockets or POSIX signals instead of Qt signals for inter-thread communication.
Personally I don't like designs where GUI stuff (i.e. a widget) has non-GUI related stuff... I think you should separate these two from each other. Qt needs to keep the GUI objects always on the main thread, but anything else (QObject derived) can be moved to a thread (QObject::moveToThread).
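A minimal sketch of that separation (the names are illustrative, not from the question): the widget stays in the main thread, while the non-GUI state and work live in a plain QObject that is moved to a worker thread.
#include <QObject>
#include <QThread>

class Worker : public QObject {
    Q_OBJECT
public:
    int data = 0;
    Q_SLOT void doSth() { /* non-GUI work, runs in the worker thread */ }
};

// During setup, e.g. in the widget's constructor:
//   auto *thread = new QThread(this);
//   auto *worker = new Worker;            // no parent: parented objects cannot be moved
//   worker->moveToThread(thread);
//   QObject::connect(thread, &QThread::finished, worker, &QObject::deleteLater);
//   thread->start();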
It seems that what you're explaining has nothing at all to do with widgets, Qt, or anything like that. It's a problem inherent to Python and its threading and the lock structure that doesn't make sense if you're multithreading. Python basically presumes that any object can be accessed from any thread. You'd have the same problem using any other toolkit. There may be a way of telling Python not to do that - I don't know enough about the cpython implementation's details, but that's where you'd need to look.
That GC cleanup calls the traverse function of some object which holds that MyWidget
That's your problem. You must ensure that such cross-thread GC cleanup can't happen. I have no idea how you'd go about it :(
My worry is that you've quietly and subtly shot yourself in the foot by using Python, in spite of everyone claiming that only C/C++ lets you do it at such a grand scale.
My solution:
struct MyWidget : QWidget {
    // some non-GUI related stuff:
    int someData;
    virtual void doSth();

    // We reset that in the destructor. When you hold its mutex-lock,
    // the ref is either NULL or a valid pointer to this MyWidget.
    struct LockedRef {
        boost::mutex mutex;
        MyWidget* ptr;
        LockedRef(MyWidget& w) : ptr(&w) {}
        void reset() {
            boost::mutex::scoped_lock lock(mutex);
            ptr = NULL;
        }
    };
    boost::shared_ptr<LockedRef> selfRef;

    struct WeakRef;
    struct ScopedRef {
        boost::shared_ptr<LockedRef> _ref;
        MyWidget* ptr;
        bool lock;
        ScopedRef(WeakRef& ref);
        ~ScopedRef();
        operator bool() { return ptr; }
        MyWidget* operator->() { return ptr; }
    };

    struct WeakRef {
        typedef boost::weak_ptr<LockedRef> Ref;
        Ref ref;
        WeakRef() {}
        WeakRef(MyWidget& w) { ref = w.selfRef; }
        ScopedRef scoped() { return ScopedRef(*this); }
    };

    MyWidget();
    ~MyWidget();
};
MyWidget::ScopedRef::ScopedRef(WeakRef& ref) : ptr(NULL), lock(true) {
    _ref = ref.ref.lock();
    if(_ref) {
        // Only lock when we are *not* in the main thread: the destructor runs
        // in the main thread, so a main-thread caller cannot race with it.
        lock = (QThread::currentThread() != qApp->thread());
        if(lock) _ref->mutex.lock();
        ptr = _ref->ptr;
    }
}

MyWidget::ScopedRef::~ScopedRef() {
    if(_ref && lock)
        _ref->mutex.unlock();
}

MyWidget::~MyWidget() {
    selfRef->reset();
    selfRef.reset();
}

MyWidget::MyWidget() {
    selfRef = boost::shared_ptr<LockedRef>(new LockedRef(*this));
}
Now, everywhere I need to pass around a MyWidget pointer, I'm using:
MyWidget::WeakRef widget;
And I can use it from another thread like this:
MyWidget::ScopedRef widgetRef(widget);
if(widgetRef)
widgetRef->doSth();
This is safe. As long as ScopedRef exists, MyWidget cannot be deleted. It will block in its destructor. Or it is already deleted and ScopedRef::ptr == NULL.
Related
I have a scenario where an anonymous QObject starts an asynchronous operation by emitting a signal. The receiving slot stores the QObject pointer and sets a property of this object later. The object could be gone in the meantime.
So, is there a safe way to check whether this pointer is still valid?
P.S.:
I'm aware of the QObject::destroyed signal, which I could connect to the object that is supposed to call setProperty on that pointer. But I wonder if there is an easier way.
This is a great question, but it is the wrong question.
Is there a way to check if the pointer is valid? Yes. QPointer is designed specifically to do that.
But the answer to this question is useless if the object lives in another thread! You only know whether it's valid at a single point in time - the answer is not valid immediately afterwards.
Absent other mechanisms, it is useless to hold a QPointer to an object in a different thread - it won't help you. Why? Look at this scenario:
Thread A: 1. QPointer returns a non-zero pointer
Thread B: 2. deletes the object
Thread A: 3. uses the now-dangling pointer
I'm aware of the QObject::destroyed signal, which I could connect to the object that is supposed to call setProperty on that pointer. But I wonder if there is an easier way.
The destroyed signals are useless when sent using queued connections - whether within a thread, or across thread boundaries. They are meant to be used within one thread, using direct connections.
By the time the target thread's event loop picks up the slot call, the originating object is long gone. Worse - this is always the case in a single-threaded application. The reason for the problem is the same as with the QPointer: the destroyed signal indicates that the object is no longer valid, but it doesn't mean that it was valid before you received the signal unless you're using a direct connection (and are in the same thread) or you're using a blocking queued connection.
Using the blocking queued connection, the requesting object's thread will block until the async thread finishes reacting to object's deletion. While this certainly "works", it forces the two threads to synchronize on a resource with sparse availability - the front spot in the async thread's event loop. Yes, this is literally what you compete for - a single spot in a queue that can be arbitrarily long. While this might be OK for debugging, it has no place in production code unless it's OK to block either thread to synchronize.
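For reference, such a blocking queued connection would look like this (the object names are purely illustrative; 'watcher' lives in a different thread than 'obj', and the thread deleting obj blocks at the destroyed emission until watcher's slot has run):
QObject::connect(obj, &QObject::destroyed,
                 watcher, &Watcher::onGone,
                 Qt::BlockingQueuedConnection);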
You are trying to work very hard around the fact that you're passing a QObject pointer between threads, and the object's lifetime, from the point of view of the receiving thread, is uncontrolled. That's your problem. You'd solve everything by not passing a raw object pointer. Instead, you could pass a shared smart pointer, or use signal-slot connections: those vanish whenever either end of the connection is destructed. That's what you'd want.
In fact, Qt's own design patterns hint at this. QNetworkReply is a QObject not only because it is a QIODevice, but because it must be to support direct indications of finished requests across thread boundaries. In light of a multitude of requests being processed, connecting to QNetworkAccessManager::finished(QNetworkReply*) can be a premature pessimization. Your object gets notified of a possibly very large number of replies, but it really is only interested in one or very few of them. Thus there must be a way to notify the requester directly that its one and only request is done - and that's what QNetworkReply::finished is for.
So, a simple way to proceed is to make the Request be a QObject with a done signal. When you ready the request, connect the requesting object to that signal. You can also connect a functor, but make sure that the functor executes in the requesting object's context:
// CORRECT
connect(request, &Request::done, requester, [...](...){...});
// WRONG
connect(request, &Request::done, [...](...){...});
The below demonstrates how it could be put together. The requests' lifetimes are managed through the use of a shared (reference-counting) smart pointer. This makes life rather easy. We check that no requests exist at the time main returns.
#include <QtCore>
class Request;
typedef QSharedPointer<Request> RequestPtr;
class Request : public QObject {
Q_OBJECT
public:
static QAtomicInt m_count;
Request() { m_count.ref(); }
~Request() { m_count.deref(); }
int taxIncrease;
Q_SIGNAL void done(RequestPtr);
};
Q_DECLARE_METATYPE(RequestPtr)
QAtomicInt Request::m_count(0);
class Requester : public QObject {
Q_OBJECT
Q_PROPERTY (int catTax READ catTax WRITE setCatTax NOTIFY catTaxChanged)
int m_catTax;
public:
Requester(QObject * parent = 0) : QObject(parent), m_catTax(0) {}
Q_SLOT int catTax() const { return m_catTax; }
Q_SLOT void setCatTax(int t) {
if (t != m_catTax) {
m_catTax = t;
emit catTaxChanged(t);
}
}
Q_SIGNAL void catTaxChanged(int);
Q_SIGNAL void hasRequest(RequestPtr);
void sendNewRequest() {
RequestPtr req(new Request);
req->taxIncrease = 5;
connect(req.data(), &Request::done, this, [this, req]{
setCatTax(catTax() + req->taxIncrease);
qDebug() << objectName() << "has cat tax" << catTax();
QCoreApplication::quit();
});
emit hasRequest(req);
}
};
class Processor : public QObject {
Q_OBJECT
public:
Q_SLOT void process(RequestPtr req) {
QThread::msleep(50); // Pretend to do some work.
req->taxIncrease --; // Figure we don't need so many cats after all...
emit req->done(req);
emit done(req);
}
Q_SIGNAL void done(RequestPtr);
};
struct Thread : public QThread { ~Thread() { quit(); wait(); } };
int main(int argc, char ** argv) {
struct C { ~C() { Q_ASSERT(Request::m_count == 0); } } check;
QCoreApplication app(argc, argv);
qRegisterMetaType<RequestPtr>();
Processor processor;
Thread thread;
processor.moveToThread(&thread);
thread.start();
Requester requester1;
requester1.setObjectName("requester1");
QObject::connect(&requester1, &Requester::hasRequest, &processor, &Processor::process);
requester1.sendNewRequest();
{
Requester requester2;
requester2.setObjectName("requester2");
QObject::connect(&requester2, &Requester::hasRequest, &processor, &Processor::process);
requester2.sendNewRequest();
} // requester2 is destructed here
return app.exec();
}
#include "main.moc"
It is impossible to check whether that pointer is still valid. So the only safe way here is to inform the receiving part about the deletion of that QObject (and in the multithreaded case: before accessing the object you need to check and lock it, to be sure that the object will not be deleted in another thread right after the check). The reason is simple:
Theoretically, it is possible that after the initial object is deleted, the system puts another object at that memory address (so the pointer will look valid).
Or it is possible that the object is deleted but its memory is not overwritten by something else, so it will still look valid (but in fact it is invalid).
So there is no way to detect whether that pointer is valid if all you have is the pointer. You need something more.
Also, it is not safe to just send a signal about the deletion of the object in the multithreaded case (or to use QObject::destroyed as you suggested). Why? Because it is possible that things happen in this order:
the QObject sends the message "I am going to be deleted",
the QObject is deleted,
your receiving code uses that pointer (and this is wrong and dangerous),
your receiving code receives the message "I am going to be deleted" (too late).
So, in the case of only one thread you need QPointer. Otherwise you need something like QSharedPointer or QWeakPointer (both of them are thread-safe) - see the answer of Kuba Ober.
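A minimal sketch of that QSharedPointer/QWeakPointer approach (MyObject is a hypothetical QObject subclass): the receiving thread promotes the weak reference to a strong one before using it, which keeps the object alive for the duration of the access. Note that this protects the object's lifetime, not the thread affinity of its QObject machinery.
#include <QSharedPointer>
#include <QWeakPointer>

QSharedPointer<MyObject> shared(new MyObject); // owned by the sending side
QWeakPointer<MyObject> weak = shared;          // handed to the receiver

// In the receiving thread:
if (QSharedPointer<MyObject> strong = weak.toStrongRef()) {
    strong->setProperty("answer", 42);         // the object is kept alive here
}
// If the object was already gone, toStrongRef() returns a null pointer.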
I'm implementing a Signal/Slot framework, and got to the point that I want it to be thread-safe. I already had a lot of support from the Boost mailing-list, but since this is not really boost-related, I'll ask my pending question here.
When is a signal/slot implementation (or any framework that calls functions outside itself, specified in some way by the user) considered thread-safe? Should it be safe w.r.t. its own data, i.e. the data associated with its implementation details? Or should it also take into account the user's data, which might or might not be modified by whatever functions are passed to the framework?
This is an example given on the mailing-list (Edit: this is an example use-case, i.e. user code; my code is behind the calls to the Emitter object):
int * somePtr = nullptr;
Emitter<Event> em; // just an object that can emit the 'Event' signal
void mainThread()
{
em.connect<Event>(someFunction);
// now, somehow, 2 threads are created which, at some point
// execute the thread1() and thread2() functions below
}
void someFunction()
{
// can somePtr change after the check but before the set?
if (somePtr)
*somePtr = 17;
}
void cleanupPtr()
{
// this looks safe, but compilers and CPUs can reorder this code:
int *tmp = somePtr;
somePtr = nullptr;
delete tmp;
}
void thread1()
{
em.emit<Event>();
}
void thread2()
{
em.disconnect<Event>(someFunction);
// now safe to cleanup (?)
cleanupPtr();
}
In the above code, it might happen that Event is emitted, causing someFunction to be executed. If somePtr is non-null, but becomes null just after the if, but before the assignment, we're in trouble. From the point of view of thread2, this is not obvious because it is disconnecting someFunction before calling cleanupPtr.
I can see why this could potentially lead to trouble, but whose responsibility is this? Should my library protect the user from using it in every irresponsible but imaginable way?
I suspect there is no clearly good answer, but clarity will come from documenting the guarantees you wish to make about concurrent access to an Emitter object.
One level of guarantee, which to me is what is implied by a promise of thread safety, is that:
Concurrent operations on the object are guaranteed to leave the object in a consistent state (at least, from the point of view of the accessing threads.)
Non-commutative operations will be performed as if they were scheduled serially in some (unknown) order.
Then the question is, what does the emit method promise semantically: passing control to the connected routine, or evaluation of the function? If the former, then your work sounds like it is already done; if the latter, then the 'as-if ordered' requirement would mean that you need to enforce some level of synchronisation.
Users of the library can work with either, provided it is clear what is being promised.
First, the simplest possibility: if you don't claim your library to be thread-safe, you don't have to bother with this.
(But even) if you do:
In your example the user would have to take care of thread-safety, since both functions could be dangerous even without using your event system (IMHO, this is a pretty good way to determine who should take care of this kind of problem). A possible way to do this in C++11 could be:
#include <mutex>
// A mutex is used to control thread-acess to a shared resource
std::mutex _somePtr_mutex;
int* somePtr = nullptr;
void someFunction()
{
/*
Create a 'lock_guard' to manage your mutex.
Is the mutex '_somePtr_mutex' already locked?
Yes: Wait until it's unlocked.
No: Lock it and continue execution.
*/
std::lock_guard<std::mutex> lock(_somePtr_mutex);
if(somePtr)
*somePtr = 17;
// End of scope: 'lock' gets destroyed and hence unlocks '_somePtr_mutex'
}
void cleanupPtr()
{
/*
Create a 'lock_guard' to manage your mutex.
Is the mutex '_somePtr_mutex' already locked?
Yes: Wait until it's unlocked.
No: Lock it and continue execution.
*/
std::lock_guard<std::mutex> lock(_somePtr_mutex);
int *tmp = somePtr;
somePtr = nullptr;
delete tmp;
// End of scope: 'lock' gets destroyed and hence unlocks '_somePtr_mutex'
}
The last question is easy. If you say your library is thread-safe, it should be thread-safe. It makes no sense to say it is partly thread-safe, or that it is only thread-safe if you do not abuse it. In that case you have to explain what exactly is not thread-safe.
Now to your first question regarding someFunction:
The operation is non-atomic, which means the CPU can interrupt between the if and the assignment. And that will happen, I know that :-) The other thread can erase the pointer at any time, even between two short and fast-looking statements.
Now to cleanupPtr:
I am not a compiler expert, but if you want to be sure that your assignment takes place at the point where you wrote it in the code, you should put the keyword volatile in front of the declaration of somePtr. The compiler will then know that you use that attribute in a multithreaded situation and will not buffer the value in a CPU register.
If you have a thread situation with a reader thread and a writer thread, the keyword volatile can (IMHO) be enough to sync them, as long as the attributes you use to exchange information between threads are generic.
For other situations you can use mutexes or atomics. I will give you an example with a mutex. I use C++11 for that, but it works similarly with previous versions of C++ using Boost.
Using a mutex:
int * somePtr = nullptr;
Emitter<Event> em; // just an object that can emit the 'Event' signal
std::recursive_mutex g_mutex;
void mainThread()
{
em.connect<Event>(someFunction);
// now, somehow, 2 threads are created which, at some point
// execute the thread1() and thread2() functions below
}
void someFunction()
{
std::lock_guard<std::recursive_mutex> lock(g_mutex);
// can somePtr change after the check but before the set?
if (somePtr)
*somePtr = 17;
}
void cleanupPtr()
{
std::lock_guard<std::recursive_mutex> lock(g_mutex);
// this looks safe, but compilers and CPUs can reorder this code:
int *tmp = somePtr;
somePtr = nullptr;
delete tmp;
}
void thread1()
{
em.emit<Event>();
}
void thread2()
{
em.disconnect<Event>(someFunction);
// now safe to cleanup (?)
cleanupPtr();
}
I only added a recursive mutex here without changing any other code of the sample, even though some of it is now cargo-cult code.
There are two kinds of mutex in the std: an utterly useless std::mutex, and std::recursive_mutex, which works like you would expect a mutex to work. std::mutex excludes access on any further call, even from the same thread, which can happen if a method that needs mutex protection calls a public method that uses the same mutex. std::recursive_mutex is reentrant for the same thread.
Atomics (or interlocks in Win32) are another way, but only to exchange values between threads or access them concurrently. Your example is missing such values, but in your case I would look a little deeper into them (std::atomic).
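To illustrate the direction (my own sketch, not part of the original answer): making the pointer itself a std::atomic removes the data race on the pointer value, though it does not by itself extend the lifetime of what the pointer points to.
#include <atomic>

std::atomic<int*> somePtr{nullptr};

void someFunction() {
    if (int *p = somePtr.load(std::memory_order_acquire))
        *p = 17;  // NOTE: still unsafe against concurrent deletion of *p
}

void cleanupPtr() {
    int *tmp = somePtr.exchange(nullptr, std::memory_order_acq_rel);
    delete tmp;   // safe only if no reader can still be inside someFunction()
}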
UPDATE
If you are the user of a library which is not explicitly declared thread-safe by its developer, treat it as not thread-safe and shield every call to it with a mutex lock.
To stick with the example: if you cannot change someFunction, then you have to wrap the function like:
void threadsafeSomeFunction()
{
    std::lock_guard<std::recursive_mutex> lock(g_mutex);
    someFunction();
}
I need your help with wxWidgets. I have 2 threads (1 wxTimer and 1 wxThread), and I need to communicate between these 2 threads. I have a class that contains methods to read/write variables in this class (shared memory via this object).
My problem is: I instantiate this class with "new" in one thread, but I don't know whether that is also necessary in the second thread. If I instantiate it there too, the addresses of the variables are different, and since the threads need to communicate, they need to see the same values. :/
I know I need a wxSemaphore to prevent errors when both threads access the data at the same time.
Thank you for your help!
EDIT: My code
So, I need to tie this in with my code. Thanks for everything ;)
Here is the declaration of my wxTimer in my class EvtFramePrincipal (IHM):
In .h
EvtFramePrincipal( wxWindow* parent );
#include <wx/timer.h>
wxTimer m_timer;
In .cpp (EvtFramePrincipal constructor):
EvtFramePrincipal::EvtFramePrincipal( wxWindow* parent )
    :
    FramePrincipal( parent ), m_timer(this)
{
    Connect(wxID_ANY, wxEVT_TIMER, wxTimerEventHandler(EvtFramePrincipal::OnTimer), NULL, this);
    m_timer.Start(250);
}
So the OnTimer method gets called every 250 ms by this line.
My second thread is started from EvtFramePrincipal (IHM):
in .h EvtFramePrincipal
#include "../Client.h"
Client *ClientIdle;
in .cpp EvtFramePrincipal
ClientIdle= new Client();
ClientIdle->Run();
In .h Client (Thread)
class Client : public wxThread
{
public:
    Client();
    virtual void *Entry();
    virtual void OnExit();
};
In .cpp Client (Thread)
Client::Client() : wxThread()
{
}
So far, no problem, the threads are OK?
Now I need this class to act as a messenger between my 2 threads.
#ifndef PARTAGE_H
#define PARTAGE_H
#include "wx/string.h"
#include <iostream>
using std::cout;
using std::endl;
class Partage
{
public:
Partage();
virtual ~Partage();
bool Return_Capteur_Aval()
{ return Etat_Capteur_Aval; }
bool Return_Capteur_Amont()
{ return Etat_Capteur_Amont; }
bool Return_Etat_Barriere()
{ return Etat_Barriere; }
bool Return_Ouverture()
{ return Demande_Ouverture; }
bool Return_Fermeture()
{ return Demande_Fermeture; }
bool Return_Appel()
{ return Appel_Gardien; }
void Set_Ouverture(bool Etat)
{ Demande_Ouverture=Etat; }
void Set_Fermeture(bool Etat)
{ Demande_Fermeture=Etat; }
void Set_Capteur_Aval(bool Etat)
{ Etat_Capteur_Aval=Etat; }
void Set_Capteur_Amont(bool Etat)
{ Etat_Capteur_Amont=Etat; }
void Set_Barriere(bool Etat)
{ Etat_Barriere=Etat; }
void Set_Appel(bool Etat)
{ Appel_Gardien=Etat; }
void Set_Code(wxString valeur_code)
{ Code=valeur_code; }
void Set_Badge(wxString numero_badge)
{ Badge=numero_badge; }
void Set_Message(wxString message)
{
Message_Affiche=wxT("");
Message_Affiche=message;
}
wxString Get_Message()
{
return Message_Affiche;
}
wxString Get_Code()
{ return Code; }
wxString Get_Badge()
{ return Badge; }
protected:
private:
bool Etat_Capteur_Aval;
bool Etat_Capteur_Amont;
bool Etat_Barriere;
bool Demande_Ouverture;
bool Demande_Fermeture;
bool Appel_Gardien;
wxString Code;
wxString Badge;
wxString Message_Affiche;
};
#endif // PARTAGE_H
So in my EvtFramePrincipal (wxTimer), I create this class with new. But in the other thread (wxThread), what do I need to do to communicate?
Sorry if this is difficult to understand :/
The main thread should create the shared variable first. After that, you can create both threads and pass each of them a pointer to the shared variable.
That way, both of them know how to interact with the shared variable. You need to implement a mutex or wxSemaphore in the methods of the shared variable.
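A minimal sketch of what that could look like for the Partage class from the question (only one getter/setter pair shown; wxMutexLocker is wxWidgets' scoped lock, and the member names are taken from the posted class):
#include <wx/thread.h>

class Partage
{
public:
    void Set_Ouverture(bool Etat)
    {
        wxMutexLocker lock(m_mutex);  // locks here, unlocks at end of scope
        Demande_Ouverture = Etat;
    }
    bool Return_Ouverture()
    {
        wxMutexLocker lock(m_mutex);
        return Demande_Ouverture;
    }
private:
    wxMutex m_mutex;
    bool Demande_Ouverture = false;
};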
You can use a singleton to get access to a central object.
Alternatively, create the central object before creating the threads and pass the reference to the central object to threads.
Use a mutex in the central object to prevent simultaneous access.
Creating one central object on each thread is not an option.
EDIT 1: Adding more details and examples
Let's start with some assumptions. The OP indicated that
I have 2 threads (1 wxTimer and 1 wxThread)
To tell the truth, I know very little of the wxWidgets framework, but there's always the documentation. So I can see that:
wxTimer provides a Timer that will execute the wxTimer::Notify() method when the timer expires. The documentation doesn't say anything about thread execution (although there's a note "A timer can only be used from the main thread", which I'm not sure how to understand). I can guess that we should expect the Notify method to be executed in some event-loop or timer-loop thread or threads.
wxThread provides a model for Thread execution, that runs the wxThread::Entry() method. Running a wxThread object will actually create a thread that runs the Entry method.
So your problem is that you need same object to be accessible in both wxTimer::Notify() and wxThread::Entry() methods.
This object:
It's not one variable but a lot of them stored in one class
e.g.
struct SharedData {
// NOTE: This is very simplistic.
// since the information here will be modified/read by
// multiple threads, it should be protected by one or more
// mutexes
// so probably a class with getter/setters will be better suited
// so that access with mutexes can be enforced within the class.
SharedData():var2(0) { }
std::string var1;
int var2;
};
of which you have somewhere an instance of that:
std::shared_ptr<SharedData> myData=std::make_shared<SharedData>();
or perhaps in pointer form or perhaps as a local variable or object attribute
Option 1: a shared reference
You're not really using wxTimer or wxThread, but classes that inherit from them (at least wxThread::Entry() is pure virtual). In the case of wxTimer you could change the owner to a different wxEvtHandler that will receive the event, but you still need to provide an implementation.
So you can have
class MyTimer : public wxTimer {
public:
    void Notify() {
        // Your code goes here,
        // but it can access data through the local reference
    }
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyTimer timer;
timer.setData(myData);
timer.StartOnce(10000); // wake me up in 10 secs.
Similarly for the Thread
class MyThread : public wxThread {
public:
    void *Entry() {
        // Your code goes here,
        // but it can access data through the local reference
        return NULL;
    }
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyThread *thread=new MyThread();
thread->setData(myData);
thread->Run(); // thread starts running.
Option 2: Using a singleton.
Sometimes you cannot modify MyThread or MyTimer... or it is too difficult to route the reference to myData to the thread or timer instances... or you're just too lazy or too busy to bother (beware of your technical debt!!!)
We can tweak the SharedData into:
struct SharedData {
    std::string var1;
    int var2;

    static SharedData *instance() {
        // NOTE that some mutexes are needed here
        // to prevent the case where the first initialization
        // is executed simultaneously from different threads,
        // allocating two objects, one of them leaked.
        if(!sInstance) {
            sInstance = new SharedData();
        }
        return sInstance;
    }
private:
    SharedData() : var2(0) { } // Note we've made the constructor private
    static SharedData *sInstance; // needs a definition in a .cpp file: SharedData *SharedData::sInstance = 0;
};
This object (because it only allows the creation of a single object) can be accessed from
either MyTimer::Notify() or MyThread::Entry() with
SharedData::instance()->var1;
Interlude: why Singletons are evil
(or why the easy solution might bite you in the future).
What is so bad about singletons?
Why Singletons are Evil
Singletons Are Evil
My main reasons are:
There's one and only one instance... and you might think that you only need one now, but who knows what the future will hold; you've taken an easy solution for a coding problem that has far-reaching architectural consequences and that might be difficult to revert.
It will not allow doing dependency injection (because the actual class is used in accessing the object).
Still, I don't think it is something to avoid completely. It has its uses; it can solve your problem, and it might save your day.
Option 3. Some middle ground.
You could still organize your data around a central repository with methods to access different instances (or different implementations) of the data.
This central repository can be a singleton (if it really is central, common and unique), but it is not the shared data itself; it is what is used to retrieve the shared data, e.g. identified by some ID (an ID that might be easier to share between the threads using option 1).
Something like:
CentralRepository::instance()->getDataById(sharedId)->var1;
EDIT 2: Comments after OP posted (more) code ;)
It seems that your object EvtFramePrincipal will execute the timer callback and will also contain the ClientIdle pointer to a Client object (the thread)... I'd do the following (a rough sketch follows after this list):
Make the Client class contain a Partage attribute (a pointer or a smart pointer).
Make the EvtFramePrincipal contain a Partage attribute (a pointer or smart pointer). I guess this will have the lifecycle of the whole application, so the Partage object can share that lifecycle too.
Add mutex locking to all methods setting and getting in the Partage attribute, since it can be accessed from multiple threads.
After the Client object is instantiated, set the reference to the Partage object that the EvtFramePrincipal contains.
Client can access Partage because we've set its reference when it was created. When the Entry method runs in its thread, it will be able to access it.
EvtFramePrincipal can access the Partage (because it is one of its attributes), so the event handler for the timer event will be able to access it.
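A rough sketch of that wiring (simplified; the member name m_partage and the constructor parameter are my own, not from the posted code): EvtFramePrincipal owns the Partage object and hands a pointer to the Client thread before starting it.
class Client : public wxThread
{
public:
    explicit Client(Partage *partage) : wxThread(), m_partage(partage) {}
    virtual void *Entry()
    {
        // Runs in the worker thread; every Partage method must lock internally.
        m_partage->Set_Capteur_Aval(true);
        return NULL;
    }
private:
    Partage *m_partage;
};

// In EvtFramePrincipal's constructor:
//   m_partage  = new Partage();          // lives as long as the frame
//   ClientIdle = new Client(m_partage);  // both threads see the same object
//   ClientIdle->Run();
// OnTimer() (main thread) then reads/writes through the same m_partage pointer.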
I have an object that is called from two different threads and after it was called by both it destroys itself by "delete this".
How do I implement this in a thread-safe way? Thread-safe here means that the object destroys itself exactly once (it must destroy itself after the second callback).
I created some example code:
class IThreadCallBack
{
virtual void CallBack(int) = 0;
};
class M: public IThreadCallBack
{
private:
bool t1_finished, t2_finished;
public:
M(): t1_finished(false), t2_finished(false)
{
startMyThread(this, 1);
startMyThread(this, 2);
}
void CallBack(int id)
{
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
if (t1_finished && t2_finished)
{
delete this;
}
}
};
int main(int argc, char **argv) {
M* MObj = new M();
while(true);
}
Obviously I can't use a mutex as a member of the object and lock around the delete, because this would also delete the mutex. On the other hand, if I set a "toBeDeleted" flag inside a mutex-protected area where the finished-flag is set, I feel unsure whether there are situations in which the object isn't deleted at all.
Note that the thread-implementation makes sure that the callback method is called exactly one time per thread in any case.
Edit / Update:
What if I change Callback(..) to:
void CallBack(int id)
{
mMutex.Obtain()
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
bool both_finished = (t1_finished && t2_finished);
mMutex.Release();
if (both_finished)
{
delete this;
}
}
Can this be considered safe? (with mMutex being a member of the M class?)
I think it is, as long as I don't access any members after releasing the mutex?!
Use Boost's Smart Pointer. It handles this automatically; your object won't have to delete itself, and it is thread safe.
Edit:
From the code you've posted above, I can't really say; I'd need more info. But you could do it like this: each thread has a shared_ptr object, and when the callback is called, you call shared_ptr::reset(). The last reset will delete M. Each shared_ptr could be stored with thread-local storage in each thread. So in essence, each thread is responsible for its own shared_ptr.
Instead of using two separate flags, you could consider setting a counter to the number of threads that you're waiting on and then using interlocked decrement.
Then you can be 100% sure that when the thread counter reaches 0, you're done and should clean up.
For more info on interlocked decrement on Windows, on Linux, and on Mac.
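A minimal stand-alone sketch of that counter idea, using std::atomic as a portable stand-in for the platform interlocked functions (the class name is illustrative and only the counting part is shown):
#include <atomic>

class SelfDeleting
{
    std::atomic<int> pending{2};  // number of callbacks still expected
public:
    void CallBack(int /*id*/)
    {
        // fetch_sub returns the previous value; the thread that sees 1
        // performed the last expected callback and may delete the object.
        if (pending.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }
};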
I once implemented something like this that avoided the ickiness and confusion of delete this entirely, by operating in the following way:
Start a thread that is responsible for deleting these sorts of shared objects, which waits on a condition
When the shared object is no longer being used, instead of deleting itself, have it insert itself into a thread-safe queue and signal the condition that the deleter thread is waiting on
When the deleter thread wakes up, it deletes everything in the queue
If your program has an event loop, you can avoid the creation of a separate thread for this by creating an event type that means "delete unused shared objects" and have some persistent object respond to this event in the same way that the deleter thread would in the above example.
I can't imagine that this is possible, especially within the class itself. The problem is two fold:
1) There's no way to notify the outside world not to call the object so the outside world has to be responsible for setting the pointer to 0 after calling "CallBack" iff the pointer was deleted.
2) Once two threads enter this function you are, and forgive my french, absolutely fucked. Calling a function on a deleted object is UB, just imagine what deleting an object while someone is in it results in.
I've never seen "delete this" as anything but an abomination. Doesn't mean it isn't sometimes, on VERY rare conditions, necessary. Problem is that people do it way too much and don't think about the consequences of such a design.
I don't think "to be deleted" is going to work well. It might work for two threads, but what about three? You can't protect the part of code that calls delete because you're deleting the protection (as you state) and because of the UB you'll inevitably cause. So the first goes through, sets the flag and aborts....which of the rest is going to call delete on the way out?
The more robust implementation would be to implement reference counting. For each thread you start, increase a counter; for each callback call, decrease the counter, and if the counter has reached zero, delete the object. You can lock the counter access, or you could use the Interlocked class to protect the counter access, though in that case you need to be careful with a potential race between the first thread finishing and the second starting.
Update: And of course, I completely ignored the fact that this is C++. :-) You should use InterlockedDecrement to update the counter instead of the C# Interlocked class.
I have an object for which I'd like to track the number of threads that reference it. In general, when any method on the object is called I can check a thread-local boolean value to determine whether the count has been updated for the current thread. But this doesn't help me if the user, say, uses boost::bind to bind my object to a boost::function and uses that to start a boost::thread. The new thread will have a reference to my object, and may hold on to it for an indefinite period of time before calling any of its methods, thus leading to a stale count. I could write my own wrapper around boost::thread to handle this, but that doesn't help if the user boost::bind's an object that contains my object (I can't specialize based on the presence of a member type -- at least I don't know of any way to do that) and uses that to start a boost::thread.
Is there any way to do this? The only means I can think of requires too much work from users -- I provide a wrapper around boost::thread that calls a special hook method on the object being passed in provided it exists, and users add the special hook method to any class that contains my object.
Edit: For the sake of this question we can assume I control the means to make new threads. So I can wrap boost::thread for example and expect that users will use my wrapped version, and not have to worry about users simultaneously using pthreads, etc.
Edit2: One can also assume that I have some means of thread local storage available, through __thread or boost::thread_specific_ptr. It's not in the current standard, but hopefully will be soon.
In general, this is hard. The question of "who has a reference to me?" is not generally solvable in C++. It may be worth looking at the bigger picture of the specific problem(s) you are trying to solve, and seeing if there is a better way.
There are a few things I can come up with that can get you partway there, but none of them are quite what you want.
You can establish the concept of "the owning thread" for an object, and REJECT operations from any other thread, a la Qt GUI elements. (Note that trying to do things thread-safely from threads other than the owner won't actually give you thread-safety, since if the owner isn't checked it can collide with other threads.) This at least gives your users fail-fast behavior.
You can encourage reference counting by having the user-visible objects being lightweight references to the implementation object itself [and by documenting this!]. But determined users can work around this.
And you can combine these two-- i.e. you can have the notion of thread ownership for each reference, and then have the object become aware of who owns the references. This could be very powerful, but not really idiot-proof.
You can start restricting what users can and cannot do with the object, but I don't think covering more than the obvious sources of unintentional error is worthwhile. Should you be declaring operator& private, so people can't take pointers to your objects? Should you be preventing people from dynamically allocating your object? It depends on your users to some degree, but keep in mind you can't prevent references to objects, so eventually playing whack-a-mole will drive you insane.
So, back to my original suggestion: re-analyze the big picture if possible.
Short of a pimpl-style implementation that does a thread-id check before every dereference, I don't see how you could do this:
class MyClass;

class MyClassImpl {
    friend class MyClass;
    threadid_t owning_thread;
public:
    void doSomethingThreadSafe();
    void doSomethingNoSafetyCheck();
};

class MyClass {
    MyClassImpl* impl;
public:
    void doSomething() {
        if (__threadid() != impl->owning_thread) {
            impl->doSomethingThreadSafe();
        } else {
            impl->doSomethingNoSafetyCheck();
        }
    }
};
Note: I know the OP wants to list threads with active pointers; I don't think that's feasible. The above implementation at least lets the object know when there might be contention. When to change the owning_thread depends heavily on what doSomething does.
Usually you cannot do this programmatically.
Unfortunately, the way to go is to design your program in such a way that you can prove (i.e. convince yourself) that certain objects are shared, and others are thread-private.
The current C++ standard does not even have the notion of a thread, so there is no standard portable notion of thread local storage, in particular.
If I understood your problem correctly I believe this could be done in Windows using Win32 function GetCurrentThreadId().
Below is a quick and dirty example of how it could be used. Thread synchronisation should rather be done with a lock object.
If you create an object of CMyThreadTracker at the top of every member function of your object to be tracked for threads, the _handle_vector should contain the thread ids that use your object.
#include <process.h>
#include <windows.h>
#include <vector>
#include <algorithm>
#include <functional>
using namespace std;
class CMyThreadTracker
{
vector<DWORD> & _handle_vector;
DWORD _h;
CRITICAL_SECTION &_CriticalSection;
public:
CMyThreadTracker(vector<DWORD> & handle_vector,CRITICAL_SECTION &crit):_handle_vector(handle_vector),_CriticalSection(crit)
{
EnterCriticalSection(&_CriticalSection);
_h = GetCurrentThreadId();
_handle_vector.push_back(_h);
printf("thread id %08x\n",_h);
LeaveCriticalSection(&_CriticalSection);
}
~CMyThreadTracker()
{
EnterCriticalSection(&_CriticalSection);
vector<DWORD>::iterator ee = remove_if(_handle_vector.begin(),_handle_vector.end(),bind2nd(equal_to<DWORD>(), _h));
_handle_vector.erase(ee,_handle_vector.end());
LeaveCriticalSection(&_CriticalSection);
}
};
class CMyObject
{
vector<DWORD> _handle_vector;
public:
void method1(CRITICAL_SECTION & CriticalSection)
{
CMyThreadTracker tt(_handle_vector,CriticalSection);
printf("method 1\n");
EnterCriticalSection(&CriticalSection);
for(int i=0;i<_handle_vector.size();++i)
{
printf(" this object is currently used by thread %08x\n",_handle_vector[i]);
}
LeaveCriticalSection(&CriticalSection);
}
};
CMyObject mo;
CRITICAL_SECTION CriticalSection;
unsigned __stdcall ThreadFunc( void* arg )
{
unsigned int sleep_time = *(unsigned int*)arg;
while ( true)
{
Sleep(sleep_time);
mo.method1(CriticalSection);
}
_endthreadex( 0 );
return 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
HANDLE hThread;
unsigned int threadID;
if (!InitializeCriticalSectionAndSpinCount(&CriticalSection, 0x80000400) )
return -1;
for(int i=0;i<5;++i)
{
unsigned int sleep_time = 1000 *(i+1);
hThread = (HANDLE)_beginthreadex( NULL, 0, &ThreadFunc, &sleep_time, 0, &threadID );
printf("creating thread %08x\n",threadID);
}
WaitForSingleObject( hThread, INFINITE );
return 0;
}
EDIT1:
As mentioned in the comment, reference dispensing could be implemented as below. A vector could hold the unique thread ids referring to your object. You may also need to implement a custom assignment operator to deal with the object references being copied by a different thread.
class MyClass
{
public:
static MyClass & Create()
{
static MyClass * p = new MyClass();
return *p;
}
static void Destroy(MyClass * p)
{
delete p;
}
private:
MyClass(){}
~MyClass(){};
};
class MyCreatorClass
{
MyClass & _my_obj;
public:
MyCreatorClass():_my_obj(MyClass::Create())
{
}
MyClass & GetObject()
{
//TODO:
// use GetCurrentThreadId to get thread id
// check if the id is already in the vector
// add this to a vector
return _my_obj;
}
~MyCreatorClass()
{
MyClass::Destroy(&_my_obj);
}
};
int _tmain(int argc, _TCHAR* argv[])
{
MyCreatorClass mcc;
MyClass &o1 = mcc.GetObject();
MyClass &o2 = mcc.GetObject();
return 0;
}
The solution I'm familiar with is to state "if you don't use the correct API to interact with this object, then all bets are off."
You may be able to turn your requirements around and make it possible for any threads that reference the object subscribe to signals from the object. This won't help with race conditions, but allows threads to know when the object has unloaded itself (for instance).
To solve the problem "I have an object and want to know how many threads access it", assuming you can also enumerate your threads, you can use thread-local storage.
Allocate a TLS index for your object. Make a private method called "registerThread" which simply sets the thread TLS to point to your object.
The key extension to the poster's original idea is that during every method call, call this registerThread(). Then you don't need to detect when or who created the thread, it's just set (often redundantly) during every actual access.
To see which threads have accessed the object, just examine their TLS values.
Upside: simple and pretty efficient.
Downside: solves the posted question but doesn't extend smoothly to multiple objects or dynamic threads that aren't enumerable.
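A minimal sketch of that idea in portable C++11 (my own adaptation: instead of examining each thread's TLS slot, which requires being able to enumerate the threads, registerThread records the thread id inside the object, and the thread_local flag merely avoids taking the lock redundantly on repeated accesses from the same thread):
#include <mutex>
#include <set>
#include <thread>

class Tracked {
public:
    void someMethod() {
        registerThread();  // called at the top of every public method
        // ... actual work ...
    }
    std::set<std::thread::id> accessingThreads() {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_threads;
    }
private:
    void registerThread() {
        thread_local const Tracked *lastSeen = nullptr;  // per-thread flag
        if (lastSeen != this) {
            lastSeen = this;
            std::lock_guard<std::mutex> lock(m_mutex);
            m_threads.insert(std::this_thread::get_id());
        }
    }
    std::mutex m_mutex;
    std::set<std::thread::id> m_threads;
};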