Is a future safe to pass to a detached thread? - c++

Is passing a std::future to a detached instance of std::thread a safe operation? I know that underneath, the std::future has state in a shared_ptr which it shares with a std::promise. Here is an example.
#include <chrono>
#include <future>
#include <thread>

int main()
{
    std::promise<void> p;
    std::thread( [f = p.get_future()]() {
        if ( f.wait_for( std::chrono::seconds( 2 ) ) == std::future_status::ready )
        {
            return;
        }
        std::terminate();
    } ).detach();
    // wait for some operation
    p.set_value();
}
There is a potential error case in the above code where the lambda is still executing after the main thread exits. Does the shared state remain valid after the main thread exits?

[basic.start.term]/6 If there is a use of a standard library object or function not permitted within signal handlers (21.10) that does not happen before (4.7) completion of destruction of objects with static storage duration and execution of std::atexit registered functions (21.5), the program has undefined behavior.
Per [basic.start.main]/5, returning from main has the effect of calling std::exit, which does destroy objects with static storage duration and execute std::atexit registered functions. Therefore, I believe your example exhibits undefined behavior.
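One way to sidestep the problem entirely (a minimal sketch, keeping the question's two-second timeout; the only point is that the worker is joined rather than detached) is to keep the std::thread object and join it before main returns, so nothing in the worker can run during or after static destruction:

#include <chrono>
#include <future>
#include <thread>

int main()
{
    std::promise<void> p;
    std::thread worker( [f = p.get_future()]() {
        if ( f.wait_for( std::chrono::seconds( 2 ) ) != std::future_status::ready )
        {
            std::terminate();
        }
    } );
    // wait for some operation
    p.set_value();
    worker.join(); // the worker has finished before main returns
}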

According to cppreference:
In a typical implementation, std::shared_ptr holds only two pointers:
- the stored pointer (one returned by get());
- a pointer to control block.
The control block is a dynamically-allocated object...
Given this information, I wouldn't think that the termination of the main thread is going to affect the shared_ptr in the worker thread.

Related

Thread Safety of Shared Pointers' Control Block

I am working on a small program that utilizes shared pointers. I have a simple class "Thing", which is just a class with an integer attribute:
#include <iostream>

class Thing{
public:
    Thing(int m){x=m;}
    int operator()(){
        return x;
    }
    void set_val(int v){
        x=v;
    }
    int x;
    ~Thing(){
        std::cout<<"Deleted thing with value "<<x<<std::endl;
    }
};
I have a simple function "fun" that takes a shared_ptr instance and an integer value, index, which just keeps track of which thread is outputting a given value. The function prints out the index passed to it and the reference count of the shared pointer that was passed in as an argument:
#include <memory>
#include <mutex>

std::mutex mtx1;
void fun(std::shared_ptr<Thing> t1,int index){
    std::lock_guard <std::mutex> loc(mtx1);
    int m=t1.use_count();
    std::cout<<index<<" : "<<m<<std::endl;
}
In main, I create one instance of a shared pointer which is a wrapper around a Thing object, like so:
std::shared_ptr<Thing> ptr5(nullptr);
ptr5=std::make_shared<Thing>(110);
(declared this way for exception safety).
I then create 3 threads, each of which executes the fun() function, taking as arguments the ptr5 shared pointer and an increasing index value:
std::thread t1(fun,ptr5,1),t2(fun,ptr5,2),t3(fun,ptr5,3);
t1.join();
t2.join();
t3.join();
My thought process here is that since each of the shared pointer control block member functions is thread safe, the call to use_count() within the fun() function is atomic and therefore requires no locking. However, both with and without a lock_guard, this still leads to a race condition. I expect to see an output of:
1:2
2:3
3:4
Since each thread spawns a new reference to the original shared pointer, the use_count() will be incremented by 1 each time. However, my output is still random due to some race condition.
In a multithreaded environment, use_count() is approximate. From cppreference:
In multithreaded environment, the value returned by use_count is approximate (typical implementations use a memory_order_relaxed load)
The control block for shared_ptr is otherwise thread-safe:
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object.
Seeing an out-of-date use_count() is not an indication that the control-block was corrupted by a race condition.
Note that this does not extend to modifying the pointed-to object: shared_ptr provides no synchronization for it. Only the state of the shared_ptr itself and the control block are protected.
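To illustrate the distinction, here is a minimal sketch (the names are illustrative, not from the question): copies of a shared_ptr can be handed to threads freely, but concurrent writes to the pointed-to object still need their own lock.

#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

std::mutex value_mutex;

void bump(std::shared_ptr<int> p) // copying the shared_ptr is handled safely by the control block
{
    std::lock_guard<std::mutex> lock(value_mutex); // protects *p, not the shared_ptr itself
    ++*p;
}

int main()
{
    auto p = std::make_shared<int>(0);
    std::thread t1(bump, p), t2(bump, p);
    t1.join();
    t2.join();
    std::cout << *p << std::endl; // always prints 2
}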

deallocation order of the static member and detached thread itself

There are global functions that initialize and clean up some resources.
But those 2 functions must be called when no other thread is running (like curl_global_init).
The initializer must be called manually, but I don't want the cleanup to have to be called manually.
So I do the following.
class GLOBAL_WRAPPER {
public:
    static void init() {
        getInstance();
    }
    static GLOBAL_WRAPPER& getInstance() {
        static GLOBAL_WRAPPER inst;
        return inst;
    }
private:
    GLOBAL_WRAPPER() {
        // do global init call
    }
    ~GLOBAL_WRAPPER() {
        // do global cleanup call
    }
};
void GLOBAL_INIT() {
    GLOBAL_WRAPPER::init();
}
int main(){
    GLOBAL_INIT();
    // do whatever you want
    // std::thread([](){for(;;);}).detach(); oops!
}
But in a bad case like this, where a detached thread is created and not terminated before main ends, when is the deallocation of static variables (GLOBAL_WRAPPER in this case) performed?
- The detached thread is terminated, then the static variable is freed.
- The static variable is freed, then the detached thread is terminated.
- It is implementation defined.
I'm just interested in the thread itself, not thread storage duration objects.
The C++ standard specifies (in so many words) that returning from main is equivalent to calling std::exit.
main function [basic.start.main]
A return statement in main has the effect of leaving the main function (destroying any objects with automatic storage duration) and calling std::exit with the return value as the argument. If control flows off the end of the compound-statement of main, the effect is equivalent to a return with operand 0.
exit is defined as follows (irrelevant details omitted):
Startup and termination [support.start.term]
[[noreturn]] void exit(int status);
Effects:
— First, objects with thread storage duration and associated with the current thread are destroyed. Next, objects with static storage duration are destroyed and functions registered by calling atexit are called.
— Next, all open C streams ... are removed.
— Finally, control is returned to the host environment. If status is zero or EXIT_SUCCESS, an implementation-defined form of the status successful termination is returned. If status is EXIT_FAILURE, an implementation-defined form of the status unsuccessful termination is returned.
It is not defined what happens if non-main execution threads are still running when the main execution thread returns. The shown code does not guarantee that the non-main execution thread terminates, or is sequenced, before main returns.
As such, the only thing the standard specifies is that:
- Global objects get destroyed.
- "Control is returned to the host environment", a.k.a.: "It's dead, Jim".
The standard does not define what happens if non-main execution threads are running when the main execution thread returns. A.k.a.: undefined behavior.
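If the detached thread's work must complete before static destruction, one option (a sketch; the handshake is an assumption, since the question's thread deliberately never terminates) is to have the worker signal completion and have main wait for that signal before returning:

#include <future>
#include <thread>

int main() {
    GLOBAL_INIT();

    std::promise<void> done;
    auto done_future = done.get_future();

    std::thread([d = std::move(done)]() mutable {
        // do whatever you want
        d.set_value(); // signal that the work is finished
    }).detach();

    done_future.wait(); // the work is finished before main returns
}

Note that even with this handshake the worker's final tear-down can still overlap the start of static destruction, so keeping the thread joinable and calling join() before returning from main remains the only fully robust option.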

Does std::thread in another thread crash if its parameters are deleted?

I have read a lot and I'm still unsure whether I understood it or not (I'm a woodworker).
Let's suppose that I have a function:
void class_test::example_1(int a, int b, char c)
{
    //do stuff
    int v;
    int k;
    char z;
    if(condition)
    {
        std::thread thread_in_example (&class_test::example_1, &object, v, k, z);
        thread_in_example.detach();
    }
}
Now if I call it:
std::thread example (&class_test::example_1, &object, a, b, c);
example.detach();
Question: what happens to thread_in_example when example completes and "deletes" itself? Is thread_in_example going to lose access to its parameters?
I thought that std::thread made a copy of the arguments unless they are given by &reference, but on http://en.cppreference.com/w/cpp/thread/thread I can't really understand this part (due to my lack of knowledge of programming/English/computer science semantics):
std::thread objects may also be in the state that does not represent any thread (after default construction, move from, detach, or join), and a thread of execution may be not associated with any thread objects (after detach).
and this one too:
No two std::thread objects may represent the same thread of execution; std::thread is not CopyConstructible or CopyAssignable, although it is MoveConstructible and MoveAssignable.
So I have doubts about how it really works.
From this std::thread::detach reference:
Separates the thread of execution from the thread object, allowing execution to continue independently. Any allocated resources will be freed once the thread exits.
[Emphasis mine]
Among those "allocated resources" will be the arguments, which means you can still safely use the arguments in the detached thread.
Unless, of course, the arguments are references or pointers to objects that are destroyed independently of the detached thread or of the thread that created the detached thread.
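A small sketch of that difference (the function and variable names are illustrative, not from the question): an argument passed by value is copied into the thread's own storage and stays valid, while a reference passed with std::cref points back at the caller's object, which may be destroyed first.

#include <chrono>
#include <iostream>
#include <string>
#include <thread>

void print_later(std::string s) // takes its argument by value
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::cout << s << std::endl; // fine: s is the thread's own copy
}

void print_later_ref(const std::string& s) // takes its argument by reference
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::cout << s << std::endl; // dangles if the caller's string is already gone
}

void start()
{
    std::string local = "hello";
    std::thread(print_later, local).detach(); // OK: local is copied into the thread
    // std::thread(print_later_ref, std::cref(local)).detach();
    // ^ would be undefined behavior: the reference outlives `local`
}

int main()
{
    start();
    // crude synchronization for the sketch; joining would be more robust
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
}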

C++11 std::thread::detach and access to shared data

If you have shared variables between a std::thread and the main thread (or any other thread for that matter), can you still access those shared variables even if you execute the thread::detach() method immediately after creating the thread?
Yes! Global, captured and passed-in variables are still accessible after calling detach().
However, if you are calling detach, it is likely that you want to return from the function that created the thread, allowing the thread object to go out of scope. If that is the case, you will have to take care that none of the locals of that function were passed to the thread either by reference or through a pointer.
You can think of detach() as a declaration that the thread does not need anything local to the creating thread.
In the following example, a thread keeps accessing an int on the stack of the starting thread after it has gone out of scope. This is undefined behaviour!
void start_thread()
{
    int someInt = 5;
    std::thread t([&]() {
        while (true)
        {
            // Will print someInt (5) repeatedly until we return. Then,
            // undefined behavior!
            std::cout << someInt << std::endl;
        }
    });
    t.detach();
}
Here are some possible ways to keep the rug from being swept out from under your thread:
- Declare the int somewhere that will not go out of scope during the lifetime of any threads that need it (perhaps a global).
- Declare the shared data as a std::shared_ptr and pass that by value into the thread (see the sketch after this list).
- Pass by value (performing a copy) into the thread.
- Pass by rvalue reference (performing a move) into the thread.
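For the shared_ptr option, the example above could be reworked as follows (a sketch; the lambda owning its own shared_ptr copy is the point, the rest is illustrative):

#include <chrono>
#include <iostream>
#include <memory>
#include <thread>

void start_thread()
{
    auto data = std::make_shared<int>(5);
    std::thread([data]() { // the lambda holds its own shared_ptr copy
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << *data << std::endl; // safe even after start_thread() has returned
    }).detach();
} // the local `data` is destroyed here, but the int it points to survives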
Yes. Detaching a thread just means that it cleans up after itself when it is finished and you no longer need to, nor are you allowed to, join it.

How do I make sure there is only 1 mutex?

I am running some thread-safe code here. I am using a mutex to protect the section of code that needs to be run by only 1 thread at a time. The problem I have is that, using this code, I sometimes end up with 2 mutex objects. This is a static function, by the way. How do I make sure only 1 mutex object gets created?
/*static*/ MyClass::GetResource()
{
    if (m_mutex == 0)
    {
        // make a new mutex object
        m_mutex = new MyMutex();
    }
    m_mutex->Lock();
Simply create m_mutex outside of GetResource(), before it can ever be called - this removes the need for a critical section around the actual creation of the mutex.
MyClass::Init()
{
    m_mutex = new Mutex;
}
MyClass::GetResource()
{
    m_mutex->Lock();
    ...
    m_mutex->Unlock();
}
The issue is that a thread could be interrupted after checking whether m_mutex is 0, but before it creates the mutex, allowing another thread to run through the same code.
Don't assign to m_mutex right away. Create a new mutex, and then do an atomic compare exchange.
You don't mention your target platform, but on Windows:
MyClass::GetResource()
{
    if (m_mutex == 0)
    {
        // make a new mutex object
        MyMutex* mutex = new MyMutex();
        // Only set if m_mutex is still NULL.
        if (InterlockedCompareExchangePointer(&m_mutex, mutex, 0) != 0)
        {
            // someone else beat us to it.
            delete mutex;
        }
    }
    m_mutex->Lock();
Otherwise, replace with whatever compare/swap function your platform provides.
Another option is to use one-time initialization support, which is available on Windows Vista and up, or just pre-create the mutex if you can.
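If C++11 is available, std::call_once is a portable way to get that one-time initialization; here is a sketch reusing the question's names (the void return type and the lambda are assumptions):

#include <mutex>

/*static*/ void MyClass::GetResource()
{
    static std::once_flag mutex_created;
    std::call_once(mutex_created, []() {
        // runs exactly once, even if several threads reach GetResource() concurrently
        m_mutex = new MyMutex();
    });
    m_mutex->Lock();
    // ...
}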
Lazy mutex initialization isn't really appropriate for static methods; you need some guarantee that nobody races to the initialization. The following uses the compiler to generate a single static mutex for the class.
/* Header (.hxx) */
class MyClass
{
    ...
private:
    static MyMutex m_mutex; // Declares, "this mutex exists, somewhere."
};

/* Compilation Unit (.cxx) */
MyMutex MyClass::m_mutex; // The aforementioned, "somewhere."

MyClass::GetResource()
{
    m_mutex.Lock();
    ...
    m_mutex.Unlock();
}
Some other solutions require extra assumptions from your fellow programmers. With the "call init()" method, for instance, you have to be sure that the initialization method was actually called, and everybody has to know this rule.
Why use a pointer anyway? Why not replace the pointer with an actual instance that does not require dynamic memory management? This avoids the race condition, and does not impose a performance hit on every call into the function.
Since it is only to protect one specific section of code, simply declare it static inside the function.
/*static*/ MyClass::GetResource()
{
    static MyMutex mutex;
    mutex.Lock();
    // ...
    mutex.Unlock();
The variable is a local variable with static storage duration. It is explicitly stated in the Standard:
An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
The last sentence is of particular interest to you.