I am currently developing a program that needs to download some images from a socket server, and the download takes a long time, so I create a new std::thread to do it.
Once the download finishes, the std::thread calls a member function of the current class, but by then the object may already have been destroyed, so I get an exception.
How can I solve this problem?
void xxx::fun1()
{
...
}
void xxx::downloadImg()
{
...a long time
if(downloadComplete)
{
this->fun1();
}
}
void xxx::mainProcess()
{
std::thread* th = new std::thread(std::mem_fn(&xxx::downloadImg), this);
th->detach();
//if I use th->join(), the UI will be blocked
}
Don't detach the thread. Instead, keep a data member that holds a pointer to the thread, and join the thread in the destructor.
class YourClass {
public:
~YourClass() {
if (_thread != nullptr) {
_thread->join();
delete _thread;
}
}
void mainProcess() {
_thread = new std::thread(&YourClass::downloadImg, this);
}
private:
std::thread *_thread = nullptr;
};
UPDATE
Just as @milleniumbug pointed out, you don't need dynamic allocation for the thread object, since it is movable. So the other solution is as follows.
class YourClass {
public:
~YourClass() {
if (_thread.joinable())
_thread.join();
}
void mainProcess() {
_thread = std::thread(&YourClass::downloadImg, this);
}
private:
std::thread _thread;
};
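For illustration, a minimal usage sketch of this second version (assuming downloadImg() is defined as in the question; main here is just a placeholder, not part of the original code):

int main() {
    YourClass obj;
    obj.mainProcess();   // starts downloadImg() on a worker thread; main is not blocked
    // ... run the UI / event loop here ...
}   // ~YourClass() joins, so downloadImg() is guaranteed to finish before obj is destroyed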
I have run into a problem with C++ memory management and smart pointers.
Here is some code that demonstrates the problem:
#include <memory>
class Closeable
{
public:
virtual void Close() = 0;
};
class DisconnectionHandler
{
public:
virtual void HandleDisconnection() = 0;
};
class EventHandler
{
public:
virtual void HandleEvent() = 0;
};
class Notifier
{
public:
virtual void OnDisconnection() = 0;
};
class RemoteSystem : public Closeable
{
public:
void SetReceiveDataEventHandler(const std::shared_ptr<EventHandler>& receive_data_event_handler) {
this->receive_data_event_handler_ = receive_data_event_handler;
}
void Close() override { this->receive_data_event_handler_ = nullptr; }
// In this example to simplify the code I just call this method from the main function.
void OnDataReceived() { this->receive_data_event_handler_->HandleEvent(); }
private:
std::shared_ptr<EventHandler> receive_data_event_handler_;
};
class ReceiveDataEventHandler : public EventHandler
{
public:
explicit ReceiveDataEventHandler(const std::shared_ptr<DisconnectionHandler>& disconnection_handler)
: disconnection_handler_(disconnection_handler) {}
void HandleEvent() override {
// Some code of receiving data.
// But we can find out that connection was closed and we must call the disconnection handler.
if (this->IsConnectionClosed()) {
this->disconnection_handler_->HandleDisconnection();
return;
}
// Some other stuff..
}
private:
[[nodiscard]] bool IsConnectionClosed() const {
// In the example code I just return true.
return true;
}
private:
const std::shared_ptr<DisconnectionHandler> disconnection_handler_;
};
class RemoteSystemDisconnectionHandler : public DisconnectionHandler
{
public:
explicit RemoteSystemDisconnectionHandler(const std::shared_ptr<Closeable>& closeable_remote_system,
Notifier* notifier)
: closeable_remote_system_(closeable_remote_system), notifier_(notifier) {}
~RemoteSystemDisconnectionHandler() { printf("Destructed.\n"); }
void HandleDisconnection() override {
this->closeable_remote_system_->Close();
printf("Closed.\n");
this->notifier_->OnDisconnection();
printf("Notified.\n");
}
private:
const std::shared_ptr<Closeable> closeable_remote_system_;
Notifier* const notifier_;
};
class ClientNotifier : public Notifier
{
public:
void OnDisconnection() override { printf("Disconnected.\n"); }
};
int main() {
ClientNotifier notifier;
auto remote_system = std::make_shared<RemoteSystem>();
{
// Scope for losing references in the main function after SetReceiveDataEventHandler.
auto disconnection_handler = std::make_shared<RemoteSystemDisconnectionHandler>(remote_system, &notifier);
auto receive_data_event_handler = std::make_shared<ReceiveDataEventHandler>(disconnection_handler);
remote_system->SetReceiveDataEventHandler(receive_data_event_handler);
}
// Only in the example.
remote_system->OnDataReceived();
return 0;
}
You can also run this code. In this example the program crashes on the line this->notifier_->OnDisconnection(). The output of the program is:
Destructed.
Closed.
*crash*
This occurs because the last reference to the ReceiveDataEventHandler is dropped when RemoteSystem::Close is called from RemoteSystemDisconnectionHandler::HandleDisconnection; that in turn drops the last reference to the RemoteSystemDisconnectionHandler and destroys it. After Close returns and both the RemoteSystemDisconnectionHandler and ReceiveDataEventHandler objects have been deleted, execution continues inside RemoteSystemDisconnectionHandler::HandleDisconnection and prints 'Closed.', but the object has already been destroyed: this is now dangling, and the next line accesses freed memory.
I also rewrote this code in Java and it works fine there, unlike in C++.
So I want to ask you guys whether there is a solution to this problem in the C++ community?
I thought C++ had no problems with memory management since smart pointers exist, but apparently I was wrong.
Hoping for your help!
Thanks in advance!
A simple solution is to make a copy of the shared_ptr before invoking the method on it:
void OnDataReceived()
{
auto temp = this->receive_data_event_handler_;
if (temp)
{
temp->HandleEvent();
}
}
temp will keep the pointer alive until after the method invocation has completed: the local copy holds an extra reference, so when Close() resets receive_data_event_handler_ the handler (and the objects it owns) stays alive until the call returns.
However, note that if you are using multiple threads in your real code, concurrent reads and writes of the same std::shared_ptr object are not thread safe, so you need to introduce a mutex to protect access to receive_data_event_handler_:
class RemoteSystem : public Closeable
{
public:
void SetReceiveDataEventHandler(const std::shared_ptr<EventHandler>& receive_data_event_handler) {
this->receive_data_event_handler_ = receive_data_event_handler;
}
void Close() override
{
std::unique_lock lock(mutex);
this->receive_data_event_handler_ = nullptr;
}
// In this example to simplify the code I just call this method from the main function.
void OnDataReceived()
{
std::shared_ptr<EventHandler> temp;
{
std::unique_lock lock(mutex);
temp = this->receive_data_event_handler_;
}
if (temp)
{
temp->HandleEvent();
}
}
private:
std::shared_ptr<EventHandler> receive_data_event_handler_;
std::mutex mutex;
};
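If C++20 is available, another option is std::atomic<std::shared_ptr>, which makes the load and store of the handler itself atomic and removes the explicit mutex. A sketch (not part of the original code):

class RemoteSystem : public Closeable
{
public:
    void SetReceiveDataEventHandler(std::shared_ptr<EventHandler> handler) {
        receive_data_event_handler_.store(std::move(handler));
    }
    void Close() override {
        receive_data_event_handler_.store(nullptr);
    }
    void OnDataReceived() {
        // load() returns a copy that keeps the handler alive for the duration of the call
        if (auto handler = receive_data_event_handler_.load()) {
            handler->HandleEvent();
        }
    }
private:
    std::atomic<std::shared_ptr<EventHandler>> receive_data_event_handler_;
};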
I have some code that is to run on Linux (x86_64) and Raspberry Pi. The code runs fine on Linux, but crashes with a SIGSEGV on the RPi. A derived method appears not to have its vtable populated, and I wonder whether this is the result of a race condition.
Here are the classes that implement the threading:
class Runnable {
public:
Runnable() {}
virtual ~Runnable() {}
// Starts the thread & execute the doWork() method.
void start() {
thread = std::thread(&Runnable::doWork, this);
}
// Stop the thread if running
virtual void stop() = 0;
// Joins the thread, blocks until the thread has finished.
void join() {
if (thread.joinable()) { thread.join(); }
}
protected:
std::thread thread; ///< the thread used by this class
/**
* @brief The method that does work. This will be called when the thread is started.
* The implementation should not return until stop() is called (or is finished with work).
* This is in protected scope as it should not be called except from the start() method.
*/
virtual void doWork() = 0;
};
template <typename VIRTUAL_CLASS_TYPE>
class SomeRunner : public Runnable {
public:
SomeRunner() {}
virtual ~SomeRunner() {
// do not delete classPtr, as a framework we are using deletes it for us
}
virtual void doWork() {
// do some work
classPtr = new VIRTUAL_CLASS_TYPE();
// notify main thread that we have created classPtr
// this is done via a condition variable & mutex
notifyIsReady();
// do some more work
}
virtual void stop() {
// tell thread to stop work
}
VIRTUAL_CLASS_TYPE *getClassPtr() {
return classPtr;
}
protected:
VIRTUAL_CLASS_TYPE *classPtr = nullptr;
};
// Within InterfaceLibrary.a
class BaseClass {
public:
BaseClass() {}
virtual ~BaseClass() {}
virtual void someMethod() = 0;
};
//Within ImplementationLibrary.a:
class ImplClass : public BaseClass {
public:
ImplClass() : BaseClass() {}
virtual ~ImplClass() {}
virtual void someMethod() {
// do something here
}
};
// within main application project
class ClassThatCausesDump {
// ...
void someMethod(BaseClass *bc) {
bc->someMethod();
}
};
bool isReady = false;
std::mutex mutexIsReady;
std::condition_variable cvIsReady;
void notifyIsReady() {
std::unique_lock<std::mutex> lock(mutexIsReady);
isReady = true;
lock.unlock(); // release the mutex before notifying
cvIsReady.notify_one();
}
int main(int argc, char **argv) {
// ...
SomeRunner<ImplClass> runner;
runner.start();
// main blocks until notified that the ImplClass has been created by runner
std::unique_lock<std::mutex> isReadyLock(mutexIsReady);
cvIsReady.wait(isReadyLock, [] {return isReady;});
ClassThatCausesDump dump;
dump.someMethod(runner.getClassPtr()); // this triggers the core dump on RPi
// ...
}
The variable classPtr, although passed as a BaseClass*, should dispatch to the derived someMethod(). However, when I run it under gdb, I see that the program crashes at the call to someMethod() within ClassThatCausesDump: it tries to execute an instruction at 0x000000.
The program works on a Linux VM on a powerful multicore OS X machine, but not on the RPi. I cleaned everything and recompiled, and it suddenly worked on the RPi; then I cleaned again to make sure it wasn't a fluke, and it went back to crashing (it never crashes on Linux). That makes me think there is something funny about the way the class is created in a separate thread.
Any ideas, please?
EDIT:
I don't think this is a threading issue, because a call to the offending virtual function works fine when made from within the doWork() method:
virtual void doWork() {
// do some work
classPtr = new VIRTUAL_CLASS_TYPE();
VIRTUAL_CLASS_TYPE *ptr = getClassPtr();
ptr->someMethod(); // this is ok and does not crash
// This next line is only here for testing because we know that VIRTUAL_CLASS_TYPE is of type ImplClass!
BaseClass *bc = (BaseClass*)ptr;
bc->someMethod(); // this crashes
// notify main thread that we have created classPtr
// this is done via a condition variable & mutex
notifyIsReady();
// do some more work
}
To implement the logic where a constructed object starts a background thread for the real work, I'm using a pattern like this (simplified):
class A {
std::thread t{&A::run, this};
std::atomic_bool done;
// variables are the question about
std::vector<std::thread> array_for_thread_management;
// ... and other members
protected:
void run() {
...
array_for_thread_management.emplace_back([](){...});
...
}
public:
A() = default;
// copy and move constructors are implicitly deleted because of
// members like std::atomic_bool done;
~A() {
done = true;
bi::named_condition cnd{bi::open_only, "cnd"};
cnd.notify_one();
if (t.joinable())
t.join();
for(std::thread& worker : array_for_thread_management) {
if (worker.joinable()) worker.join();
}
}
};
If I push child threads into the vector from the primary background thread in run(), the object hangs in the destructor.
This happens even when there are no real threads in the vector: I just construct the object, with no connections from outside, and try to stop it via the destructor.
Of course, once you have the this pointer in your run method, you can access class members through it. I guess the problem with your code is that the thread is spawned before any other members are initialized, since it is the first member in your class definition (non-static data members are initialized in declaration order, so run() can start touching done and the vector before they exist). I suspect that with the following definition of class A you'll have no problems accessing member variables:
class A {
std::atomic_bool done;
// variables are the question about
int i;
std::string s;
std::vector<std::string> v;
// and only after everything above is initialized:
std::thread t{&A::run, this}; // spawn a thread
// ...
};
However, personally I would prefer a separate start() method that spawns the thread, rather than spawning it implicitly inside the class constructor. It may look like this:
class A
{
std::unique_ptr<std::thread> t;
std::atomic<bool> some_flag;
public:
void start()
{
t.reset(new std::thread(&A::run, this));
}
private:
void run()
{
some_flag.store(true);
}
};
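To complete that pattern, the caller also needs a way to stop and join the worker before A is destroyed. A minimal sketch (the stop flag, stop() and the destructor here are my additions for illustration, not part of the code above):

#include <atomic>
#include <memory>
#include <thread>

class A
{
    std::unique_ptr<std::thread> t;
    std::atomic<bool> stop_requested{false};
public:
    void start()
    {
        t = std::make_unique<std::thread>(&A::run, this);
    }
    void stop()
    {
        stop_requested.store(true);   // ask run() to finish
        if (t && t->joinable())
            t->join();                // wait for the worker to exit
    }
    ~A() { stop(); }                  // never destroy A while the thread is still running
private:
    void run()
    {
        while (!stop_requested.load()) {
            // do background work
        }
    }
};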
So I have a class that spawns a thread with the class object as a parameter, and in the thread I call a member function. I use critical sections for synchronization.
Would that implementation be thread safe? Only the member access is protected, not the class object as a whole.
class TestThread : public CThread
{
public:
virtual DWORD Work(void* pData) // Thread function
{
while (true)
{
if (Closing())
{
printf("Closing thread");
return 0;
}
Lock(); //EnterCritical
threadSafeVar++;
UnLock(); //LeaveCritical
}
}
int GetCounter()
{
int tmp;
Lock(); //EnterCritical
tmp = threadSafeVar;
UnLock(); //LeaveCritical
return tmp;
}
private:
int threadSafeVar;
};
.
.
.
TestThread thr;
thr.Run();
while (true)
{
printf("%d\n",thr.GetCounter());
}
If the member is what your critical section protects, you only need to lock the accesses to it.
By the way, you can implement a Locker like this:
class Locker
{
mutex &m_;
public:
Locker(mutex &m) : m_(m)
{
m.acquire();
}
~Locker()
{
m_.release();
}
};
And your code would look like:
mutex myVarMutex;
...
{
Locker lock(myVarMutex);
threadSafeVar++;
}
...
int GetCounter()
{
Locker lock(myVarMutex);
return threadSafeVar;
}
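Note that the standard library already provides this RAII idiom as std::lock_guard. A rough equivalent of the above, assuming C++11 is available (Increment is a hypothetical stand-in for the thread function's update):

#include <mutex>

std::mutex myVarMutex;
int threadSafeVar = 0;

void Increment()
{
    std::lock_guard<std::mutex> lock(myVarMutex); // locks here, unlocks at end of scope
    ++threadSafeVar;
}

int GetCounter()
{
    std::lock_guard<std::mutex> lock(myVarMutex);
    return threadSafeVar;
}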
Your implementation is thread safe because you have protected access to the attribute with a mutex.
Here, your class is a thread, so your object is a thread. It's what you do inside that thread that determines whether it is thread safe.
You read the value under a lock/unlock pair and write it under the same pair, so your function is thread safe.
This is a question about coding design, so please forgive the long code listings: I could not summarize these ideas and the potential pitfalls without showing the actual code.
I am writing a ConcurrentReferenceCounted class and would appreciate some feedback on my implementation. Subclasses of this class will receive "release" instead of a direct delete.
Here is the class:
class ConcurrentReferenceCounted : private NonCopyable {
public:
ConcurrentReferenceCounted() : ref_count_(1) {}
virtual ~ConcurrentReferenceCounted() {}
void retain() {
ScopedLock lock(mutex_);
++ref_count_;
}
void release() {
bool should_die = false;
{
ScopedLock lock(mutex_);
should_die = --ref_count_ == 0;
}
if (should_die) delete this;
}
private:
size_t ref_count_;
Mutex mutex_;
};
And here is a scoped retain:
class ScopedRetain {
public:
ScopedRetain(ConcurrentReferenceCounted *object) : object_(object) {
retain();
}
ScopedRetain() : object_(NULL) {}
~ScopedRetain() {
release();
}
void hold(ConcurrentReferenceCounted *object) {
assert(!object_); // cannot hold more than 1 object
object_ = object;
retain();
}
private:
ConcurrentReferenceCounted *object_;
void release() {
if (object_) object_->release();
}
void retain() {
object_->retain();
}
};
And finally this is a use case:
Object *target;
ScopedRetain sr;
if (objects_.get(key, &target))
sr.hold(target);
else
return;
// use target
// no need to 'release'
Your ConcurrentReferenceCounted uses a full mutex, which is not necessary and not very fast. Reference counting can be implemented atomically using architecture-dependent interlocked instructions. Under Windows, the InterlockedXXX family of functions simply wraps these instructions.
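In portable C++11, std::atomic gives you the same thing. A minimal sketch of the class rewritten on top of std::atomic (the memory orderings follow the usual reference-counting pattern; this is an illustration, not a drop-in tested replacement, and the NonCopyable base is omitted for brevity):

#include <atomic>
#include <cstddef>

class ConcurrentReferenceCounted {
public:
    ConcurrentReferenceCounted() : ref_count_(1) {}
    virtual ~ConcurrentReferenceCounted() {}

    void retain() {
        // A plain increment needs no ordering with other operations.
        ref_count_.fetch_add(1, std::memory_order_relaxed);
    }

    void release() {
        // acq_rel makes every write done before release() visible to the
        // thread that performs the final decrement and runs the destructor.
        if (ref_count_.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            delete this;
        }
    }

private:
    std::atomic<std::size_t> ref_count_;
};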