Does std::thread in another thread crash if its parameters are deleted? - c++

I have read a lot and I'm still unsure whether I understood it or not (I'm a woodworker).
Let's suppose that I have a function:
void class_test::example_1(int a, int b, char c)
{
    // do stuff
    int v;
    int k;
    char z;
    if (condition)
    {
        std::thread thread_in_example(&class_test::example_1, &object, v, k, z);
        thread_in_example.detach();
    }
}
Now if I call it:
std::thread example (&class_test::example_1, &object, a, b, c);
example.detach();
Question: What happens to thread_in_example when example completes and "deletes" itself? Is thread_in_example going to lose access to its parameters?
I thought that std::thread made a copy of the arguments unless they are passed by &reference, but on http://en.cppreference.com/w/cpp/thread/thread I can't really understand this part (due to my lack of knowledge of programming/English/computer science semantics):
std::thread objects may also be in the state that does not represent any thread (after default construction, move from, detach, or join), and a thread of execution may be not associated with any thread objects (after detach).
and this one too:
No two std::thread objects may represent the same thread of execution;
std::thread is not CopyConstructible or CopyAssignable, although it is
MoveConstructible and MoveAssignable.
So I've doubts on how it really works.

From this std::thread::detach reference:
Separates the thread of execution from the thread object, allowing execution to continue independently. Any allocated resources will be freed once the thread exits.
[Emphasis mine]
Among those "allocated resources" will be the arguments, which means you can still safely use the arguments in the detached thread.
Unless, of course, the arguments are references or pointers to objects that are destroyed independently of the detached thread or of the thread that created it.
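For illustration, here is a minimal sketch (with made-up worker functions, not the asker's class) of the distinction the last sentence is drawing: by-value arguments are copied into storage owned by the new thread, while references and pointers keep pointing at the caller's objects:

#include <chrono>
#include <string>
#include <thread>

void worker_by_value(std::string s)      // s is the thread's own copy
{
    // safe: s lives in storage owned by the new thread
}

void worker_by_ref(const std::string& s) // refers to the caller's object
{
    // dangerous if the caller's string is destroyed before the thread finishes
}

void spawn()
{
    std::string local = "hello";

    std::thread(worker_by_value, local).detach();            // copies local: safe
    // std::thread(worker_by_ref, std::ref(local)).detach(); // refers to local:
    //                                                       // undefined behaviour once spawn() returns
}

int main()
{
    spawn();    // the detached thread keeps its own copy of "hello"
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}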

Related

Data race about map::operator[]

Is there any potential problem in this code snippet?
#include <mutex>
#include <map>
#include <vector>
#include <thread>
constexpr int FOO_NUM = 5;
int main()
{
    std::map<int, std::mutex> mp;
    std::vector<std::thread> vec;
    for (int i = 0; i < 2; i++)
    {
        vec.push_back(std::thread([&mp]() {
            std::lock_guard<std::mutex> lk(mp[FOO_NUM]); //I think there is some potential problem here, am I right?
            //do something
        }));
    }
    for (auto& thread : vec)
    {
        thread.join();
    }
}
As per the documentation, which says:
Inserts value_type(key, T()) if the key does not exist. This function is equivalent to return insert(std::make_pair(key, T())).first->second;
I think there is a potential problem in the aforementioned code snippet. You see, this may happen:
1. The first thread creates a mutex and is locking that mutex.
2. The second thread creates a new one, and the mutex created in the first thread needs to be destroyed while it is still in use by the first thread.
Yes, there is a data race, but it is even more fundamental.
None of the containers in the C++ library are thread-safe, in any way. None of their operators are thread safe.
mp[FOO_NUM]
In the shown code multiple execution threads invoke the map's [] operator. The operator itself is not thread-safe; what's contained in the map is immaterial.
the second thread created a new one, and the mutex created in the first
thread needs to be destroyed while it's still used in the first thread.
The only thing that destroys any mutex in the shown code is the map's destructor when the map itself gets destroyed when returning from main().
std::lock_guard<std::mutex> does not destroy its mutex; when the std::lock_guard gets destroyed it merely releases the mutex, of course. An execution thread's invocation of the map's [] operator may default-construct a new std::mutex, but nothing destroys it when that execution thread gets joined. A value default-constructed in a std::map by its [] operator gets destroyed only when something explicitly destroys it.
And it's the [] operator itself that's not thread safe, it has nothing to do with a mutex's construction or destruction.
operator[] of associative and unordered containers is not specified to be safe from data races if called without synchronization in multiple threads. See [container.requirements.dataraces]/1.
Therefore your code has a data race and consequently undefined behavior. Whether or not a new mutex is created doesn't matter either.
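One way to avoid the race, sketched below under the assumption that the set of keys is known up front, is to create the map entry before any thread starts, so that the threads only ever look up an existing mutex and never mutate the map:

#include <map>
#include <mutex>
#include <thread>
#include <vector>

constexpr int FOO_NUM = 5;

int main()
{
    std::map<int, std::mutex> mp;
    mp[FOO_NUM];   // default-construct the entry while still single-threaded

    std::vector<std::thread> vec;
    for (int i = 0; i < 2; i++)
    {
        vec.push_back(std::thread([&mp]() {
            // at() never inserts, so the map structure is not modified here;
            // the threads merely look up the pre-existing mutex and lock it.
            std::lock_guard<std::mutex> lk(mp.at(FOO_NUM));
            // do something
        }));
    }
    for (auto& thread : vec)
    {
        thread.join();
    }
}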

Thread Safety of Shared Pointers' Control Block

I am working on a small program that utilizes shared pointers. I have a simple class "Thing" that is just a class with an integer attribute:
class Thing {
public:
    Thing(int m) { x = m; }
    int operator()() {
        return x;
    }
    void set_val(int v) {
        x = v;
    }
    int x;
    ~Thing() {
        std::cout << "Deleted thing with value " << x << std::endl;
    }
};
I have a simple function "fun" that takes in a shared_ptr instance and an integer value, index, which just keeps track of which thread is outputting a given value. The function prints out the index value passed to the function and the reference count of the shared pointer that was passed in as an argument:
std::mutex mtx1;
void fun(std::shared_ptr<Thing> t1, int index) {
    std::lock_guard<std::mutex> loc(mtx1);
    int m = t1.use_count();
    std::cout << index << " : " << m << std::endl;
}
In main, I create one instance of a shared pointer which is a wrapper around a Thing object like so:
std::shared_ptr<Thing> ptr5(nullptr);
ptr5=std::make_shared<Thing>(110);
(declared this way for exception safety).
I then create 3 threads, each executing the fun() function, passing it the ptr5 shared pointer and an increasing index value:
std::thread t1(fun,ptr5,1),t2(fun,ptr5,2),t3(fun,ptr5,3);
t1.join();
t2.join();
t3.join();
My thought process here is that, since each of the shared pointer's control block member functions is thread safe, the call to use_count() within the fun() function requires no additional locking. However, running both with and without a lock_guard, this still leads to a race condition. I expect to see an output of:
1:2
2:3
3:4
since each thread spawns a new reference to the original shared pointer, so use_count() will be incremented by 1 each time. However, my output is still random due to some race condition.
In a multithreaded environment, use_count() is approximate. From cppreference:
In multithreaded environment, the value returned by use_count is approximate (typical implementations use a memory_order_relaxed load)
The control block for shared_ptr is otherwise thread-safe:
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object.
Seeing an out-of-date use_count() is not an indication that the control-block was corrupted by a race condition.
Note that this does not extend to modifying the pointed object. It does not provide synchronization for the pointed object. Only the state of the shared_ptr and the control block are protected.
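A minimal sketch of that last point (the cut-down Thing struct and the extra mutex here are made up for illustration): copies of the shared_ptr can be handed to threads freely, but concurrent writes to the pointee still need their own lock:

#include <memory>
#include <mutex>
#include <thread>

struct Thing { int x = 0; };

std::mutex thingMutex;

void bump(std::shared_ptr<Thing> t)       // copying the shared_ptr needs no synchronization
{
    std::lock_guard<std::mutex> lock(thingMutex);
    t->x += 1;                             // writing to the pointed-to object does
}

int main()
{
    auto ptr = std::make_shared<Thing>();
    std::thread t1(bump, ptr), t2(bump, ptr);
    t1.join();
    t2.join();
}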

Order of std::mutex locking

I’ve rarely thought about what happens between two consecutive expressions, between the call to a function and the execution of its body's first expression, or between a call to a constructor and the execution of its initializer. Then I started reading about concurrency...
1.) In two consecutive calls to std::thread’s constructor with the same callable (e.g. function, functor, lambda), whose body begins with a std::lock_guard initialization with the same std::mutex object, does the standard guarantee the thread corresponding to the first thread constructor call executes the lock-protected code first?
2.) If the standard doesn’t make the guarantee, then is there any theoretical or practical possibility the thread corresponding to the second thread constructor call executes the protected code first? (e.g. heavy system load during the execution of the initializer or body of the first thread constructor call)
Here’s a global std::mutex object m and a global unsigned num initialized to 1. There is nothing but whitespace between function foo’s body’s opening brace { and the std::lock_guard. In main, there are two std::threads t1 and t2. t1 calls the thread constructor first. t2 calls the thread constructor second. Each thread is constructed with a pointer to foo. t1 calls foo with unsigned argument 1. t2 calls foo with unsigned argument 2. Depending on which thread locks the mutex first, num’s value will be either a 4 or a 3 after both threads have executed the lock-protected code. num will equal 4 if t1 beats t2 to the lock. Otherwise, num will equal 3. I ran 100,000 trials of this by looping and resetting num to 1 at the end of each loop. (As far as I know, the results don’t and shouldn’t depend on which thread is join()ed first.)
#include <thread>
#include <mutex>
#include <iostream>

std::mutex m;
unsigned short num = 1;

void foo(unsigned short par) {
    std::lock_guard<std::mutex> guard(m);
    if (1 == num)
        num += par;
    else
        num *= par;
}

int main() {
    unsigned count = 0;
    for (unsigned i = 0; i < 100000; ++i) {
        std::thread t1(foo, 1);
        std::thread t2(foo, 2);
        t1.join();
        t2.join();
        if (4 == num) {
            ++count;
        }
        num = 1;
    }
    std::cout << count << std::endl;
}
In the end, count equals 100000, so it turns out t1 wins the race every time. But these trials don’t prove anything.
3.) Does the standard mandate that "first to call the thread constructor" always implies "first to call the callable passed to the thread constructor"?
4.) Does the standard mandate that "first to call the callable passed to the thread constructor" always implies "first to lock the mutex", provided that within the callable's body there exists no code dependent upon the parameter(s) passed to the callable prior to the line with the std::lock_guard initialization? (Also rule out any local static variable in the callable, like a counter of the number of times called, which could be used to intentionally delay certain calls.)
1. No, the standard doesn't guarantee that the first thread gets the lock first. Basically, if you need to impose an ordering between threads, you'll need to synchronize between those threads. Even if the first thread gets to call the mutex lock function first, the second thread may acquire the lock first.
2. Absolutely. For example, there may be just one core available to your application at the time the threads are spawned, and if the spawning thread decides, after the second thread is spawned, to wait on something, the scheduler may decide to process the most recently seen thread, which is the second thread. Even if there are many cores available, there are plenty of reasons the second thread can be faster.
3. No, why would it? The first step is just to spawn a thread and carry on. By the time the first function object is called, the second thread may already be running and calling its function object.
4. No. There are no ordering guarantees between threads unless you explicitly impose them yourself, as such guarantees would defeat the purpose of concurrency (one way to impose such an ordering is sketched below).
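If you do want the first thread's protected section to run first, you have to say so explicitly. Here is a minimal sketch of one way to do it, using a flag and a condition variable; this illustrates the idea and is not the only possible mechanism:

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool t1_done = false;
unsigned short num = 1;

void first(unsigned short par)
{
    {
        std::lock_guard<std::mutex> guard(m);
        num += par;                 // t1's protected work
        t1_done = true;
    }
    cv.notify_one();
}

void second(unsigned short par)
{
    std::unique_lock<std::mutex> guard(m);
    cv.wait(guard, [] { return t1_done; });   // runs only after t1 has finished
    num *= par;                     // t2's protected work
}

int main()
{
    std::thread t1(first, 1);
    std::thread t2(second, 2);
    t1.join();
    t2.join();
    // num is now deterministically (1 + 1) * 2 == 4
}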

C++11 std::thread::detach and access to shared data

If you have shared variables between a std::thread and the main thread (or any other thread for that matter), can you still access those shared variables even if you execute the thread::detach() method immediately after creating the thread?
Yes! Global, captured and passed-in variables are still accessible after calling detach().
However, if you are calling detach, it is likely that you want to return from the function that created the thread, allowing the thread object to go out of scope. If that is the case, you will have to take care that none of the locals of that function were passed to the thread either by reference or through a pointer.
You can think of detach() as a declaration that the thread does not need anything local to the creating thread.
In the following example, a thread keeps accessing an int on the stack of the starting thread after it has gone out of scope. This is undefined behaviour!
void start_thread()
{
    int someInt = 5;
    std::thread t([&]() {
        while (true)
        {
            // Will print someInt (5) repeatedly until we return. Then,
            // undefined behavior!
            std::cout << someInt << std::endl;
        }
    });
    t.detach();
}
Here are some possible ways to keep the rug from being swept out from under your thread:
Declare the int somewhere that will not go out of scope during the lifetime of any threads that need it (perhaps a global).
Declare shared data as a std::shared_ptr and pass that by value into the thread (see the sketch after this list).
Pass by value (performing a copy) into the thread.
Pass by rvalue reference (performing a move) into the thread.
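Here is a minimal sketch of the std::shared_ptr option from the list above (the names are made up): the lambda captures its own copy of the shared_ptr, so the int stays alive even after start_thread() returns:

#include <chrono>
#include <iostream>
#include <memory>
#include <thread>

void start_thread()
{
    auto someInt = std::make_shared<int>(5);

    std::thread t([someInt]() {                     // captures its own copy: +1 reference
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        std::cout << *someInt << std::endl;         // safe even though start_thread() has returned
    });
    t.detach();
}                                                   // the local copy is released here; the
                                                    // thread's copy keeps the int alive

int main()
{
    start_thread();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // give the detached thread time to run
}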
Yes. Detaching a thread just means that it cleans up after itself when it is finished and you no longer need to, nor are you allowed to, join it.

Is unique_ptr thread safe?

Is unique_ptr thread safe? Is it impossible for the code below to print same number twice?
#include <memory>
#include <string>
#include <thread>
#include <cstdio>

using namespace std;

int main()
{
    unique_ptr<int> work;

    thread t1([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread1: %d\n", *localWork);
            this_thread::yield();
        }
    });

    thread t2([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread2: %d\n", *localWork);
            this_thread::yield();
        }
    });

    for (int i = 0; ; i++) {
        work.reset(new int(i));
        while (work)
            this_thread::yield();
    }

    return 0;
}
unique_ptr is thread safe when used correctly. You broke the unwritten rule: Thou shalt never pass unique_ptr between threads by reference.
The philosophy behind unique_ptr is that it has a single (unique) owner at all times. Because of that, you can always pass it safely between threads without synchronization -- but you have to pass it by value, not by reference. Once you create aliases to a unique_ptr, you lose the uniqueness property and all bets are off. Unfortunately C++ can't guarantee uniqueness, so you are left with a convention that you have to follow religiously. Don't create aliases to a unique_ptr!
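A minimal sketch of what passing by value looks like in practice (the consume function is made up for illustration): the unique_ptr is moved into the new thread, which then becomes its sole owner, and no alias remains in the spawning thread:

#include <cstdio>
#include <memory>
#include <thread>
#include <utility>

void consume(std::unique_ptr<int> work)       // takes ownership by value
{
    std::printf("got %d\n", *work);
}                                              // work is destroyed here

int main()
{
    std::unique_ptr<int> work(new int(42));
    std::thread t(consume, std::move(work));   // ownership is transferred; no alias is left behind
    t.join();
}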
No, it isn't thread-safe.
Both threads can potentially move the work pointer with no explicit synchronization, so it's possible for both threads to get the same value, or both to get some invalid pointer ... it's undefined behaviour.
If you want to do something like this correctly, you probably need to use something like std::atomic_exchange so both threads can read/modify the shared work pointer with the right semantics.
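For example, here is a minimal sketch of one corrected variant; it protects every move of the shared work pointer with a mutex rather than using atomics, which is one possible fix among several, not the only option:

#include <cstdio>
#include <memory>
#include <mutex>
#include <thread>

int main()
{
    std::unique_ptr<int> work;
    std::mutex workMutex;
    bool done = false;

    auto consumer = [&](const char* name) {
        for (;;) {
            std::unique_ptr<int> localWork;
            {
                std::lock_guard<std::mutex> lock(workMutex);
                if (done && !work)
                    return;
                localWork = std::move(work);   // the move happens under the lock
            }
            if (localWork)
                std::printf("%s: %d\n", name, *localWork);
            std::this_thread::yield();
        }
    };

    std::thread t1(consumer, "thread1"), t2(consumer, "thread2");

    for (int i = 0; i < 100; i++) {
        std::lock_guard<std::mutex> lock(workMutex);
        work.reset(new int(i));    // values not yet consumed are simply replaced
    }
    {
        std::lock_guard<std::mutex> lock(workMutex);
        done = true;
    }
    t1.join();
    t2.join();
}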
According to MSDN:
The following thread safety rules apply to all classes in the Standard C++ Library (except shared_ptr and iostream classes, as described below).
A single object is thread safe for reading from multiple threads. For example, given an object A, it is safe to read A from thread 1 and from thread 2 simultaneously.
If a single object is being written to by one thread, then all reads and writes to that object on the same or other threads must be protected. For example, given an object A, if thread 1 is writing to A, then thread 2 must be prevented from reading from or writing to A.
It is safe to read and write to one instance of a type even if another thread is reading or writing to a different instance of the same type. For example, given objects A and B of the same type, it is safe if A is being written in thread 1 and B is being read in thread 2.