I have a tbb::task that can be summed up like this:
class UpdateTask
    : public tbb::task
{
public:
    tbb::task* execute()
    {
        // some work...
        if( can_continue() )
        {
            // make sure this is re-executed
            this->recycle_to_reexecute(); //looks like it is deprecated but it's more clear to me
            return nullptr; // ? or this? or tbb::empty_task?
        }
        return nullptr;
    }
};
I want this task to be rescheduled as soon as it is finished, until a condition is met.
I'm allocating this task this way:
UpdateTask& updater = *new(tbb::task::allocate_root()) UpdateTask();
Not sure it's related to the problem though.
The problem: when I run the code, I get assertions (in Debug) from TBB (latest revision, 4.1.5) and I can't find a way to make it work correctly.
I've re-read the documentation but I can't find a simple example of this and it is not clear to me what I'm doing wrong.
I made some experiments:
With the code I just showed, the assertion says that I should return a task:
Assertion t_next failed on line 485 of file ...\itbb\src\tbb\custom_scheduler.h
Detailed description: reexecution requires that method execute() return another task
Then, if I return this, the assertion says that the task should be marked as allocated:
Assertion t_next->state()==task::allocated failed on line 452 of file ...\itbb\src\tbb\custom_scheduler.h
Detailed description: if task::execute() returns task, it must be marked as allocated
Being in doubt, I tried to return a tbb::empty_task created on the fly (just to check), allocated as new(tbb::task::allocate_child()) tbb::empty_task(). I got an assertion with this message:
Assertion p.ref_count==0 failed on line 107 of file ...\itbb\src\tbb\custom_scheduler.h
Detailed description: completion of task caused predecessor's reference count to underflow.
So, how is this done? I assume it is simple, but I can't find the way to do it.
Is it related to task reference counting?
Update: here is full code that is a good approximation of what I have:
#include <iostream>
#include <atomic>
#include <tbb/tbb.h>
namespace test
{
    class UpdateTask : public tbb::task
    {
    public:
        UpdateTask()  { std::cout << "[UpdateTask]" << std::endl; }
        ~UpdateTask() { std::cout << "\n[/UpdateTask]" << std::endl; }
        tbb::task* execute()
        {
            std::cout << "Tick " << m_count << std::endl;
            ++m_count;
            // make sure this is re-executed
            this->recycle_to_reexecute();
            //return new(tbb::task::allocate_continuation()) tbb::empty_task();
            return nullptr;
        }
    private:
        std::atomic<int> m_count{ 0 }; // explicitly initialized: a default-constructed std::atomic<int> starts uninitialized
    };
    tbb::task_list m_task_list;
    void register_update_task()
    {
        UpdateTask& updater = *new(tbb::task::allocate_root()) UpdateTask();
        m_task_list.push_back( updater );
    }
    void run_and_wait() { tbb::task::spawn_root_and_wait( m_task_list ); }
    void tbb_test()     { register_update_task(); run_and_wait(); }
}
int main()
{
test::tbb_test();
std::cout << "\nEND";
std::cin.ignore();
return 0;
}
So, here I get the first assertion saying that I should return a task. In the execute() function, if I replace the return with the commented-out one, it appears to work, but there are two problems with this:
I have to create an empty_task EVERY TIME a call to execute() is made???
After the first call to execute(), the main thread is resumed. This is NOT what spawn_root_and_wait() is supposed to do; after all, the task is not finished and is correctly rescheduled.
I must conclude it's not the right way to do this either.
this->recycle_to_reexecute();
is deprecated.
Replace by:
this->increment_ref_count();
this->recycle_as_safe_continuation();
Enjoy
P.S.: and of course (in your case) return NULL from execute().
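For reference, execute() would then look roughly like this (a sketch only, reusing the can_continue() helper from the question; the exact reference-count bookkeeping may differ between TBB versions):

tbb::task* execute()
{
    // some work...
    if( can_continue() )
    {
        // keep the task alive and have the scheduler run it again once it
        // completes: bump the ref count, then recycle it as its own safe
        // continuation
        this->increment_ref_count();
        this->recycle_as_safe_continuation();
    }
    return NULL; // and, as noted above, return NULL here
}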
I have a vector of objects (std::vector<fo>), and the fo object has a method start() where I create the thread specific to this object; now, depending on a variable of this object, I want to put that thread to sleep.
So, for example, if my object is f1 and the variable is bool sleep = false; when the sleep variable is true, I want it to go to sleep.
I have tried this method, but it doesn't seem to work. I think the if
class fo {
public:
    thread* t;
    bool bedient = false, spazieren = false;
    void start(){
        t = new thread([this]() {
            while (!this->bedient){
                if (this->spazieren == true){
                    std::this_thread::sleep_for(std::chrono::seconds(10));
                    this->spazieren = false;
                }
            }
            this->join();
        });
    }
    void join(){
        t->join();
        delete t;
    }
};
You have "generated" a lot of problems on your code:
1)
Setting any kind of variable in one thread is potentially invisible in any other thread. If you want to make the other threads sees you changes in the first thread, you have to synchronize your memory. That can be done by using std::mutex with lock and unlock around every change of data or using std::atomic variables, which do the sync themselves or a lot of other methods. Please read a book about multi threaded programming!
2)
You try to join your own thread. That is not the correct usage at all. Any thread can join on others execution end but not on itself. That makes no sense!
3)
If you do not set manually the "sleep" var, your thread is running a loop and is simply doing nothing. A good method to heat up your core and the planet ;)
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

class fo {
public:
    std::thread* t = nullptr;           // prevent corrupt delete if no start() called!
    std::atomic<bool> bedient{ false }; // brace-init: std::atomic is not copy-initializable before C++17
    std::atomic<bool> spazieren{ false };
    void start()
    {
        t = new std::thread([this]()
        {
            while (!this->bedient)
            {
                if (this->spazieren == true)
                {
                    std::cout << "We are going to sleep" << std::endl;
                    std::this_thread::sleep_for(std::chrono::seconds(3));
                    this->spazieren = false;
                }
            }
            std::cout << "End loop" << std::endl;
        });
    }
    ~fo() { delete t; }
    void join()
    {
        std::cout << "wait for thread ends" << std::endl;
        t->join();
    }
};

int main()
{
    fo f1;
    f1.start();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    f1.spazieren = true;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    f1.bedient = true;
    f1.join();
}
BTW:
Please do not use using namespace std!
Your design seems to be problematic. Setting vars from external threads to control the execution of a thread is typically an abuse. You should think again about your design!
Manually using new/delete can result in memory leaks.
Creating something with a start() method which later on will be deleted is mysterious. You should create all objects in the constructor.
I would try refactoring your code to use std::future instead of std::thread. Furthermore, there are a few issues which I believe you'll run into in the short term.
You shouldn't try to join while in the thread you're joining. That is, the code as you have it will never terminate: the lambda you've defined attempts to call join, but the lambda will never return, since it's waiting on join, which will itself only return when the lambda does. In other words, you're telling it to wait on itself.
You're revealing too much information about the functionality of your class to the outside world. I would suggest moving implementation details into a .cc rather than putting them in the class declaration. Short of that, however, you're providing immediate access to your control variables spazieren and bedient. This is a problem because it complicates control flow and makes for weak abstraction.
Your bools are not atomic. If you modify them from outside the thread in which they're being read, you have a data race (undefined behavior), and the resulting misbehavior can be sporadic and very hard to debug.
Only sleeping when asked can be useful if you absolutely need to finish a task as soon as possible, but be aware that it's going to max out a core, and if deployed to the wrong environment it can cause major problems and slowdowns. I don't know what the end goal is for this program, but I would suggest changing the yield in the following code example to some period of time to sleep; 10 ms should be sufficient to keep from putting too much stress on your CPU.
Whether your thread is actively running or not is unclear with your implementation. I'd suggest an additional bool to indicate whether it's running, so you can decide more sensibly what to do if start() is called more than once.
When this object is destroyed, it's going to crash if the thread is still running. You need to be sure to join (or, with a future, wait) before your destructor finishes running too.
I would consider the following refactorings:
#include <memory>
#include <future>
#include <atomic>
class fo
{
public:
~fo()
{
this->_bedient = true;
_workThread.wait();
}
void start()
{
_workThread = std::async(std::launch::async, [this]() -> bool
{
while(!this->_bedient)
{
if(true == this->_spazieren)
{
std::this_thread::sleep_for(std::chrono::seconds(10));
this->_spazieren = false;
}
else
{
std::this_thread::yield();
}
}
return true;
});
}
void ShouldSleep(bool shouldSleep)
{
this->_spazieren = shouldSleep;
}
void ShouldStop(bool shouldStop)
{
this->_bedient = shouldStop; // _bedient == true ends the loop, so stopping means setting it to true
}
private:
std::future<bool> _workThread = {};
std::atomic<bool> _bedient{ false };
std::atomic<bool> _spazieren{ false };
};
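For what it's worth, usage might look like the following sketch (worker is just an illustrative name and the timings are arbitrary; the destructor waits on the future, so main won't exit before the loop does):

#include <chrono>
#include <thread>

int main()
{
    fo worker;
    worker.start();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    worker.ShouldSleep(true);                              // ask the loop to take its 10-second nap
    std::this_thread::sleep_for(std::chrono::seconds(1));
    worker.ShouldStop(true);                               // lets the loop exit once it wakes up
    return 0;                                              // ~fo() waits for the worker to finish
}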
So my use case is the following: I have a handful of functions and fields defined inside a namespace. One of these functions will initialize the fields, then run another function inside a call to std::async. The function called should run indefinitely until flagged to stop, most commonly from outside of its own thread. The basic code looks something like this
namespace Game
{
void Create() {
Initialize();
auto returned = std::async(RunGameLoop, std::ref(FLAGS));
}
}
Now, the way I have attempted to implement this is by initializing these flags in the namespace:
namespace Game
{
namespace // This is used because the fields and functions here should be for internal use only
{
bool stopped = false;
}
}
Inside the RunGameLoop function the basic structure is set up like this
namespace Game
{
namespace // This is used because the fields and functions here should be for internal use only
{
void RunGameLoop()
{
while (!stopped)
{
// ... do stuff
}
}
}
}
But seemingly due to how async works, if I change the value of stopped from anywhere other than inside the RunGameLoop function, the RunGameLoop function does not see any change. Presumably when creating the async function call, C++ simply copies all values in scope at the time of construction, passing them by value instead of reference.
My question is: How do I make this change noticeable inside the async function loop? Or even better: Is there a better way to communicate simple global flags like this with an async function in C++? I have experimented with using std::ref, passing pointers, and passing an entire map of flags by reference, but seemingly no changes made outside of the RunGameLoop function will be noticeable inside the RunGameLoop function.
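(For illustration, here is a minimal sketch of the pattern I am aiming for: the flag is a std::atomic<bool> and the task receives a pointer to it explicitly, so nothing depends on what std::async may or may not copy. Note also that std::async without an explicit std::launch::async policy is allowed to defer the call entirely, which may matter here. Names are illustrative, not from my project.)

#include <atomic>
#include <chrono>
#include <future>
#include <thread>

void RunLoop(std::atomic<bool>* stopped)
{
    while (!stopped->load())
    {
        // ... do stuff ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main()
{
    std::atomic<bool> stopped{ false };
    auto loop = std::async(std::launch::async, RunLoop, &stopped);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    stopped = true;   // visible inside RunLoop because the flag is atomic
    loop.get();
}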
Edit: I've managed to replicate the issue in a minimal example, this program will run indefinitely, and indeed never reach the second std::cout statement, counterintuitively. The std::async call does, in fact, not seem to run the function asynchronously at all, which is a bit harsher than what I experienced in my own project. I acknowledge I might be misunderstanding how std::async is supposed to be used, but it seems like this code should work to me.
Edit 2: I bungled my prior example, so I fixed it. Unfortunately now it seems to behave as expected, unlike my actual project:
#include <iostream>
#include <future>
namespace test
{
namespace
{
std::atomic<bool> testbool = false;
std::future<void> returned;
void queryTestBool()
{
while (!testbool)
{
}
std::cout << "EXITED THREAD: " << std::boolalpha << testbool << std::endl;
}
}
void Initialize()
{
testbool = false;
}
void Delete()
{
testbool = !testbool;
returned.get();
}
void Create()
{
Initialize();
returned = std::async(queryTestBool);
}
}
int main()
{
using namespace test;
std::cout << std::boolalpha << testbool << std::endl;
Create();
Delete();
std::cout << std::boolalpha << testbool << std::endl;
}
This program outputs
false
EXITED THREAD: true
true
meaning that not only does the Delete function successfully change the value of testbool, but that change is noticed in the asynchronous while loop. This last part is what isn't happening in my own project for some reason, even when I use std::atomic. I will investigate further.
So I feel massively stupid. After struggling with this for weeks and implementing all sorts of things, I finally discovered that the place where my test called the Delete() function was being skipped because of a failed assertion that, unexpectedly, made the test exit before running the rest... Mystery solved.
I'm facing a rather basic problem that I'd like to solve in C++ (11 is OK) using only the standard library.
Assume a function Message() which can be called any number of times; after it has not been called for a given time, I want to trigger an action. For example:
Message();
Message();
Message();
Message();
sleep(10); <-- the idle time triggers an action
Message();
Hope this makes sense.
The way I have implemented this so far uses a combination of an async while loop and a condition variable which I pulse every time a message comes in. Once the wait on the CV times out, I take action.
In pseudo-code:
void Message(){
    unique_lock ul(M);
    new_message = true;   // flag the new message, then pulse the CV
    CV.notify();
}
...
future = std::async([]{
    do {
        unique_lock ul(M);
        new_message = false;
        got_message_within_time =
            CV.wait_for(ul, 20ms, []{ return new_message; });
    } while (got_message_within_time);
    // Got a timeout here...
    time_to_take_action();
});
...
Message()
Message()
Message()
sleep(10)
...
Message()
Message()
Message()
sleep(10)
I'm not convinced this is the most elegant solution out there; does anyone have better suggestions? All in all, the fundamental behavior I want to implement is: "once I stop calling you, do something".
Any help/suggestion is welcome
Thanks !
I use a method similar to what you do, but I leave a monitoring thread running.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
using namespace std::chrono;
// Both of these are touched from two threads, so make them atomic.
std::atomic<time_point<steady_clock>> lastMessage(steady_clock::now());
std::atomic<bool> shutdown(false);
void InactiveMessageLoop(){
while(!shutdown){
if((steady_clock::now() - lastMessage.load()) > milliseconds(1000)){
std::cout << "Are you still there?" << std::endl;
}
std::this_thread::sleep_for(milliseconds(1000));
}
std::cout << "No hard feelings." << std::endl;
}
void Message(){
lastMessage = steady_clock::now();
// do stuff;
}
int main(){
std::thread MessageThread(InactiveMessageLoop);
Message();
Message();
std::this_thread::sleep_for(milliseconds(5000));
Message();
Message();
shutdown = true;
if(MessageThread.joinable()) MessageThread.join();
return 0;
}
Alternatively you could look into sending a signal, but it's probably just as effective to interlace your code with calls to a message-check function, since it's considered bad practice to put business logic inside a signal handler.
Edit: I added the join at the end.
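A variation on the same idea, if the fixed polling interval bothers you, is to have the monitor sleep on a condition variable until a deadline that every Message() pushes forward; the action then fires as soon as the idle period elapses rather than on the next poll. Rough sketch (names and timeouts are arbitrary):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::chrono::steady_clock::time_point deadline =
    std::chrono::steady_clock::now() + std::chrono::seconds(1);
bool done = false;

void Message()
{
    std::lock_guard<std::mutex> lock(m);
    deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
    cv.notify_one();   // wake the watchdog so it adopts the new deadline
    // do stuff;
}

void Watchdog()
{
    std::unique_lock<std::mutex> lock(m);
    while (!done)
    {
        if (cv.wait_until(lock, deadline) == std::cv_status::timeout &&
            std::chrono::steady_clock::now() >= deadline) // re-check: Message() may have just moved it
        {
            std::cout << "Are you still there?" << std::endl;
            deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
        }
    }
    std::cout << "No hard feelings." << std::endl;
}

int main()
{
    std::thread watchdog(Watchdog);
    Message();
    std::this_thread::sleep_for(std::chrono::seconds(3)); // idle: the action fires
    Message();
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
    watchdog.join();
    return 0;
}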
I have a program which spawns multiple threads, each of which executes a long-running task. The main thread then waits for all worker threads to join, collects results, and exits.
If an error occurs in one of the workers, I want the remaining workers to stop gracefully, so that the main thread can exit shortly afterwards.
My question is how best to do this, when the implementation of the long-running task is provided by a library whose code I cannot modify.
Here is a simple sketch of the system, with no error handling:
void threadFunc()
{
// Do long-running stuff
}
void mainFunc()
{
std::vector<std::thread> threads;
for (int i = 0; i < 3; ++i) {
threads.push_back(std::thread(&threadFunc));
}
for (auto &t : threads) {
t.join();
}
}
If the long-running function executes a loop and I have access to the code, then
execution can be aborted simply by checking a shared "keep on running" flag at the top of each iteration.
std::mutex mutex;
bool error;
void threadFunc()
{
try {
for (...) {
{
std::unique_lock<std::mutex> lock(mutex);
if (error) {
break;
}
}
}
} catch (std::exception &) {
std::unique_lock<std::mutex> lock(mutex);
error = true;
}
}
Now consider the case when the long-running operation is provided by a library:
std::mutex mutex;
bool error;
class Task
{
public:
// Blocks until completion, error, or stop() is called
void run();
void stop();
};
void threadFunc(Task &task)
{
try {
task.run();
} catch (std::exception &) {
std::unique_lock<std::mutex> lock(mutex);
error = true;
}
}
In this case, the main thread has to handle the error, and call stop() on
the still-running tasks. As such, it cannot simply wait for each worker to
join() as in the original implementation.
The approach I have used so far is to share the following structure between
the main thread and each worker:
struct SharedData
{
std::mutex mutex;
std::condition_variable condVar;
bool error;
int running;
}
When a worker completes successfully, it decrements the running count. If
an exception is caught, the worker sets the error flag. In both cases, it
then calls condVar.notify_one().
The main thread then waits on the condition variable, waking up if either
error is set or running reaches zero. On waking up, the main thread
calls stop() on all tasks if error has been set.
This approach works, but I feel there should be a cleaner solution using some
of the higher-level primitives in the standard concurrency library. Can
anyone suggest an improved implementation?
Here is the complete code for my current solution:
// main.cpp
#include <chrono>
#include <condition_variable>
#include <exception>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>
#include "utils.h"
// Class which encapsulates long-running task, and provides a mechanism for aborting it
class Task
{
public:
Task(int tidx, bool fail)
: tidx(tidx)
, fail(fail)
, m_run(true)
{
}
void run()
{
static const int NUM_ITERATIONS = 10;
for (int iter = 0; iter < NUM_ITERATIONS; ++iter) {
{
std::unique_lock<std::mutex> lock(m_mutex);
if (!m_run) {
out() << "thread " << tidx << " aborting";
break;
}
}
out() << "thread " << tidx << " iter " << iter;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
if (fail) {
throw std::exception();
}
}
}
void stop()
{
std::unique_lock<std::mutex> lock(m_mutex);
m_run = false;
}
const int tidx;
const bool fail;
private:
std::mutex m_mutex;
bool m_run;
};
// Data shared between all threads
struct SharedData
{
std::mutex mutex;
std::condition_variable condVar;
bool error;
int running;
SharedData(int count)
: error(false)
, running(count)
{
}
};
void threadFunc(Task &task, SharedData &shared)
{
try {
out() << "thread " << task.tidx << " starting";
task.run(); // Blocks until task completes or is aborted by main thread
out() << "thread " << task.tidx << " ended";
} catch (std::exception &) {
out() << "thread " << task.tidx << " failed";
std::unique_lock<std::mutex> lock(shared.mutex);
shared.error = true;
}
{
std::unique_lock<std::mutex> lock(shared.mutex);
--shared.running;
}
shared.condVar.notify_one();
}
int main(int argc, char **argv)
{
static const int NUM_THREADS = 3;
std::vector<std::unique_ptr<Task>> tasks(NUM_THREADS);
std::vector<std::thread> threads(NUM_THREADS);
SharedData shared(NUM_THREADS);
for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
const bool fail = (tidx == 1);
tasks[tidx] = std::make_unique<Task>(tidx, fail);
threads[tidx] = std::thread(&threadFunc, std::ref(*tasks[tidx]), std::ref(shared));
}
{
std::unique_lock<std::mutex> lock(shared.mutex);
// Wake up when either all tasks have completed, or any one has failed
shared.condVar.wait(lock, [&shared](){
return shared.error || !shared.running;
});
if (shared.error) {
out() << "error occurred - terminating remaining tasks";
for (auto &t : tasks) {
t->stop();
}
}
}
for (int tidx = 0; tidx < NUM_THREADS; ++tidx) {
out() << "waiting for thread " << tidx << " to join";
threads[tidx].join();
out() << "thread " << tidx << " joined";
}
out() << "program complete";
return 0;
}
Some utility functions are defined here:
// utils.h
#include <iostream>
#include <mutex>
#include <thread>
#ifndef UTILS_H
#define UTILS_H
#if __cplusplus <= 201103L
// Backport std::make_unique from C++14
#include <memory>
namespace std {
template<typename T, typename ...Args>
std::unique_ptr<T> make_unique(
Args&& ...args)
{
return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
} // namespace std
#endif // __cplusplus <= 201103L
// Thread-safe wrapper around std::cout
class ThreadSafeStdOut
{
public:
ThreadSafeStdOut()
: m_lock(m_mutex)
{
}
~ThreadSafeStdOut()
{
std::cout << std::endl;
}
template <typename T>
ThreadSafeStdOut &operator<<(const T &obj)
{
std::cout << obj;
return *this;
}
private:
static std::mutex m_mutex;
std::unique_lock<std::mutex> m_lock;
};
std::mutex ThreadSafeStdOut::m_mutex;
// Convenience function for performing thread-safe output
ThreadSafeStdOut out()
{
return ThreadSafeStdOut();
}
#endif // UTILS_H
I've been thinking about your situation for some time, and this may be of some help to you. You could probably try a couple of different methods to achieve your goal. There are two or three options that may be of use, or a combination of all three. I will at minimum show the first option, for I'm still learning and trying to master the concepts of template specializations as well as lambdas.
Using a Manager Class
Using Template Specialization Encapsulation
Using Lambdas.
Pseudo code of a Manager Class would look something like this:
class ThreadManager {
private:
std::unique_ptr<MainThread> mainThread_;
std::list<std::shared_ptr<WorkerThread>> lWorkers_; // List to hold finished workers
std::queue<std::shared_ptr<WorkerThread>> qWorkers_; // Queue to hold inactive and waiting threads.
std::map<unsigned, std::shared_ptr<WorkerThread>> mThreadIds_; // Map to associate a WorkerThread with an ID value.
std::map<unsigned, bool> mFinishedThreads_; // A map to keep track of finished and unfinished threads.
bool threadError_; // Not needed if using exception handling
public:
explicit ThreadManager( const MainThread& main_thread );
void shutdownThread( const unsigned& threadId );
void shutdownAllThreads();
void addWorker( const WorkerThread& worker_thread );
bool isThreadDone( const unsigned& threadId );
void spawnMainThread() const; // Method to start main thread's work.
void spawnWorkerThread( unsigned threadId, bool& error );
bool getThreadError( unsigned& threadID ); // Returns True If Thread Encountered An Error and passes the ID of that thread,
};
Only for demonstration purposes did I use a bool value to determine whether a thread failed, to keep the structure simple; of course this can be substituted to your liking if you prefer to use exceptions, invalid unsigned values, etc.
Using a class of this sort would look something like this. Also note that a class of this type would be better as a singleton, since you wouldn't want more than one manager class while you are working with shared pointers.
SomeClass::SomeClass( ... ) {
// This class could contain a private static smart pointer of this Manager Class
// Initialize the smart pointer giving it new memory for the Manager Class and by passing it a pointer of the Main Thread object
threadManager_ = new ThreadManager( main_thread ); // Wouldn't actually use raw pointers here unless if you had a need to, but just shown for simplicity
}
SomeClass::addThreads( ... ) {
for ( unsigned u = 1; u <= threadCount; u++ ) {
threadManager_->addWorker( some_worker_thread );
}
}
SomeClass::someFunctionThatSpawnsThreads( ... ) {
threadManager_->spawnMainThread();
bool error = false;
for ( unsigned u = 1; u <= threadCount; u++ ) {
threadManager_->spawnWorkerThread( u, error );
if ( error ) { // This Thread Failed To Start, Shutdown All Threads
threadManager->shutdownAllThreads();
}
}
// If all threads spawn successfully we can do a while loop here to listen if one fails.
unsigned threadId;
while ( threadManager_->getThreadError( threadId ) ) {
// If the function passed to this while loop returns true and we end up here, it will pass the id value of the failed thread.
// We can now go through a for loop and stop all active threads.
for ( unsigned u = threadID + 1; u <= threadCount; u++ ) {
threadManager_->shutdownThread( u );
}
// We have successfully shutdown all threads
break;
}
}
I like the design of a manager class since I have used them in other projects, and they come in handy quite often, especially when working with a code base that contains many resources, such as a working game engine that has assets like sprites, textures, audio files, maps, game items, etc. Using a manager class helps to keep track of and maintain all of the assets. This same concept can be applied to "managing" active, inactive, and waiting threads, so the manager knows how to handle and shut down all threads properly. I would recommend using an exception handler if your code base and libraries support exceptions as well as thread-safe exception handling, instead of passing and using bools for errors. Also, having a Logger class that can write to a log file and/or a console window is useful: it can give an explicit message of which function the exception was thrown in and what caused it, where a log message might look like this:
Exception Thrown: someFunctionNamedThis in ThisFile on Line# (x)
threadID 021342 failed to execute.
This way you can look at the log file and find out very quickly what thread is causing the exception, instead of using passed around bool variables.
The implementation of the long-running task is provided by a library whose code I cannot modify.
That means you have no way to synchronize the job done by the worker threads.
If an error occurs in one of the workers,
Let's suppose that you can really detect worker errors; some of them can easily be detected if they are reported by the library in use, while others cannot, e.g.:
the library code loops forever.
the library code exits prematurely with an uncaught exception.
I want the remaining workers to stop **gracefully**
That's just not possible
The best you can do is write a thread manager that checks worker thread status; if an error condition is detected, it just (ungracefully) "kills" all the worker threads and exits.
You should also consider detecting a looping worker thread (by timeout) and offering the user the option to kill it or to continue waiting for the process to finish.
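There is no portable "kill this std::thread" in the standard library, so such a manager has to drop down to the platform. A rough POSIX-only sketch of the ungraceful option (pthread_cancel only takes effect at cancellation points, and how well cancellation interacts with the C++ runtime and with the library's own cleanup is platform-dependent; the worker function here is just a stand-in for the library call):

#include <pthread.h>     // POSIX only
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

void libraryLikeWork()
{
    for (;;) {
        // sleep_for typically ends up in nanosleep(), which is a cancellation point
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i)
        workers.emplace_back(libraryLikeWork);

    std::this_thread::sleep_for(std::chrono::seconds(1));

    // Pretend one worker reported an error: tear the rest down, ungracefully.
    for (auto &t : workers)
        pthread_cancel(t.native_handle());

    for (auto &t : workers)
        t.join();

    std::cout << "all workers cancelled" << std::endl;
    return 0;
}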
Your problem is that the long running function is not your code, and you say you cannot modify it. Consequently you cannot make it pay any attention whatsoever to any kind of external synchronisation primitive (condition variables, semaphores, mutexes, pipes, etc), unless the library developer has done that for you.
Therefore your only option is to do something that wrestles control away from any code no matter what it's doing. This is what signals do. For that, you're going to have to use pthread_kill(), or whatever the equivalent is these days.
The pattern would be that
The thread that detects an error needs to communicate that error back to the main thread in some manner.
The main thread then needs to call pthread_kill() for all the other remaining threads. Don't be confused by the name - pthread_kill() is simply a way of delivering an arbitrary signal to a thread. Note that signals like STOP, CONTINUE and TERMINATE are process-wide even if raised with pthread_kill(), not thread specific so don't use those.
In each of those threads you'll need a signal handler. On delivery of the signal to a thread the execution path in that thread will jump to the handler no matter what the long running function was doing.
You are now back in (limited) control, and can (probably, well, maybe) do some limited cleanup and terminate the thread.
In the meantime the main thread will have been calling pthread_join() on all the threads it's signaled, and those will now return.
My thoughts:
This is a really ugly way of doing it (and signals / pthreads are notoriously difficult to get right and I'm no expert), but I don't really see what other choice you have.
It'll be a long way from looking 'graceful' in source code, though the end user experience will be OK.
You will be aborting execution part way through running that library function, so if there's any clean up it would normally do (e.g. freeing up memory it has allocated) that won't get done and you'll have a memory leak. Running under something like valgrind is a way of working out if this is happening.
The only way of getting the library function to clean up (if it needs it) will be for your signal handler to return control to the function and letting it run to completion, just what you don't want to do.
And of course, this won't work on Windows (no pthreads, at least none worth speaking of, though there may be an equivalent mechanism).
Really the best way is going to be to re-implement (if at all possible) that library function.
Does it ever make sense to check if this is null?
Say I have a class with a method; inside that method, I check this == NULL, and if it is, return an error code.
If this is null, then that means the object is deleted. Is the method even able to return anything?
Update: I forgot to mention that the method can be called from multiple threads and it may cause the object to be deleted while another thread is inside the method.
Does it ever make sense to check for this==null? I found this while doing a code review.
In standard C++, it does not, because any call on a null pointer is already undefined behavior, so any code relying on such checks is non-standard (there's no guarantee that the check will even be executed).
Note that this holds true for non-virtual functions as well.
Some implementations permit this==0, however, and consequently libraries written specifically for those implementations will sometimes use it as a hack. A good example of such a pair is VC++ and MFC - I don't recall the exact code, but I distinctly remember seeing if (this == NULL) checks in MFC source code somewhere.
It may also be there as a debugging aid, because at some point in the past this code was hit with this==0 because of a mistake in the caller, so a check was inserted to catch future instances of that. An assert would make more sense for such things, though.
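For illustration, such a check typically looks like the following; note that this is exactly the non-standard territory described above, and an optimizing compiler is entitled to delete the comparison because this can never legitimately be null:

#include <cstdio>

struct Widget
{
    int value = 0;

    // Non-virtual member; calling it through a null pointer is already UB,
    // so the check below is only a best-effort debugging aid on
    // implementations that happen to tolerate it.
    int get() const
    {
        if (this == nullptr) {
            std::puts("get() called on a null Widget*");
            return -1;
        }
        return value;
    }
};

int main()
{
    Widget* w = nullptr;
    return w->get();   // undefined behavior per the standard
}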
If this == null then that means the object is deleted.
No, it doesn't mean that. It means that a method was called on a null pointer, or on a reference obtained from a null pointer (though obtaining such a reference is already U.B.). This has nothing to do with delete, and does not require any objects of this type to have ever existed.
Your note about threads is worrisome. I'm pretty sure you have a race condition that can lead to a crash. If a thread deletes an object and zeros the pointer, another thread could make a call through that pointer between those two operations, leading to this being non-null and also not valid, resulting in a crash. Similarly, if a thread calls a method while another thread is in the middle of creating the object, you may also get a crash.
Short answer: you really need to use a mutex or something to synchronize access to this pointer. You need to ensure that this is never null, or you're going to have problems.
I know that this is old but I feel like now that we're dealing with C++11-17 somebody should mention lambdas. If you capture this into a lambda that is going to be called asynchronously at a later point in time, it is possible that your "this" object gets destroyed before that lambda is invoked.
e.g. passing it as a callback to some time-expensive function that is run from a separate thread, or just asynchronously in general.
EDIT: Just to be clear, the question was "Does it ever make sense to check if this is null?"; I am merely offering a scenario where it does make sense, one that might become more prevalent with the wider use of modern C++.
Contrived example:
This code is completely runnable. To see the unsafe behavior, just comment out the call to the safe behavior and uncomment the unsafe behavior call.
#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <memory>
#include <thread>
class SomeAPI
{
public:
SomeAPI() = default;
void DoWork(std::function<void(int)> cb)
{
DoAsync(cb);
}
private:
void DoAsync(std::function<void(int)> cb)
{
std::cout << "SomeAPI about to do async work\n";
m_future = std::async(std::launch::async, [](auto cb)
{
std::cout << "Async thread sleeping 10 seconds (Doing work).\n";
std::this_thread::sleep_for(std::chrono::seconds{ 10 });
// Do a bunch of work and set a status indicating success or failure.
// Assume 0 is success.
int status = 0;
std::cout << "Executing callback.\n";
cb(status);
std::cout << "Callback Executed.\n";
}, cb);
};
std::future<void> m_future;
};
class SomeOtherClass
{
public:
void SetSuccess(int success) { m_success = success; }
private:
bool m_success = false;
};
class SomeClass : public std::enable_shared_from_this<SomeClass>
{
public:
SomeClass(SomeAPI* api)
: m_api(api)
{
}
void DoWorkUnsafe()
{
std::cout << "DoWorkUnsafe about to pass callback to async executer.\n";
// Call DoWork on the API.
// DoWork takes some time.
// When DoWork is finished, it calls the callback that we sent in.
m_api->DoWork([this](int status)
{
// Undefined behavior
m_value = 17;
// Crash
m_data->SetSuccess(true);
ReportSuccess();
});
}
void DoWorkSafe()
{
// Create a weak point from a shared pointer to this.
std::weak_ptr<SomeClass> this_ = shared_from_this();
std::cout << "DoWorkSafe about to pass callback to async executer.\n";
// Capture the weak pointer.
m_api->DoWork([this_](int status)
{
// Test the weak pointer.
if (auto sp = this_.lock())
{
std::cout << "Async work finished.\n";
// If its good, then we are still alive and safe to execute on this.
sp->m_value = 17;
sp->m_data->SetSuccess(true);
sp->ReportSuccess();
}
});
}
private:
void ReportSuccess()
{
// Tell everyone who cares that a thing has succeeded.
};
SomeAPI* m_api;
std::shared_ptr<SomeOtherClass> m_data = std::make_shared<SomeOtherClass>(); // initialized so the safe path has a valid object
int m_value;
};
int main()
{
std::shared_ptr<SomeAPI> api = std::make_shared<SomeAPI>();
std::shared_ptr<SomeClass> someClass = std::make_shared<SomeClass>(api.get());
someClass->DoWorkSafe();
// Comment out the above line and uncomment the below line
// to see the unsafe behavior.
//someClass->DoWorkUnsafe();
std::cout << "Deleting someClass\n";
someClass.reset();
std::cout << "Main thread sleeping for 20 seconds.\n";
std::this_thread::sleep_for(std::chrono::seconds{ 20 });
return 0;
}
FWIW, I have used debugging checks for (this != NULL) in assertions before which have helped catch defective code. Not that the code would have necessarily gotten too far with out a crash, but on small embedded systems that don't have memory protection, the assertions actually helped.
On systems with memory protection, the OS will generally hit an access violation if called with a NULL this pointer, so there's less value in asserting this != NULL. However, see Pavel's comment for why it's not necessarily worthless on even protected systems.
Your method will most likely (it may vary between compilers) be able to run and also to return a value, as long as it does not access any instance variables; if it tries to, it will crash.
As others pointed out, you cannot use this test to see whether an object has been deleted. Even if you could, it would not work, because the object may be deleted by another thread just after the test but before you execute the next line. Use thread synchronization instead.
If this is null, there is a bug in your program, most likely in the design of your program.
I'd also add that it's usually better to avoid null or NULL. I think the standard is changing yet again here but for now 0 is really what you want to check for to be absolutely sure you're getting what you want.
This is just a pointer passed as the first argument to a function (which is exactly what makes it a method). So long as you're not talking about virtual methods and/or virtual inheritance, then yes, you can find yourself executing an instance method with a null instance. As others said, you almost certainly won't get very far with that execution before problems arise, but robust coding should probably check for that situation with an assert. At least, it makes sense when you suspect it could be occurring for some reason but need to track down exactly which class / call stack it's occurring in.
I know this is an old question; however, I thought I would share my experience with the use of lambda captures.
#include <iostream>
#include <memory>
using std::unique_ptr;
using std::make_unique;
using std::cout;
using std::endl;
class foo {
public:
foo(int no) : no_(no) {
}
template <typename Lambda>
void lambda_func(Lambda&& l) {
cout << "No is " << no_ << endl;
l();
}
private:
int no_;
};
int main() {
auto f = std::make_unique<foo>(10);
f->lambda_func([f = std::move(f)] () mutable {
cout << "lambda ==> " << endl;
cout << "lambda <== " << endl;
});
return 0;
}
This code segfaults:
$ g++ -std=c++14 uniqueptr.cpp
$ ./a.out
Segmentation fault (core dumped)
If I remove the std::cout statement from lambda_func, the code runs to completion.
It seems that in the statement f->lambda_func([f = std::move(f)] () mutable { ... }); the lambda capture (which moves f, leaving it null) is evaluated before f-> is, so the member function ends up being invoked through a null pointer and the access to no_ crashes.
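A simple way to avoid this (a sketch; raw is just an illustrative name) is to take the raw pointer before the move, so the call itself no longer goes through the moved-from unique_ptr. As far as I understand, since C++17 the left-hand side of the call is sequenced before the argument evaluation, so the original form is also well-defined there; pre-C++17 the order is unspecified.

auto f = std::make_unique<foo>(10);
auto* raw = f.get();                       // take the object first
raw->lambda_func([f = std::move(f)]() mutable {
    cout << "lambda ==> " << endl;
    cout << "lambda <== " << endl;
});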