Is there any potential problem in this code snippet?
#include <mutex>
#include <map>
#include <vector>
#include <thread>
constexpr int FOO_NUM = 5;
int main()
{
    std::map<int, std::mutex> mp;
    std::vector<std::thread> vec;
    for (int i = 0; i < 2; i++)
    {
        vec.push_back(std::thread([&mp]() {
            std::lock_guard<std::mutex> lk(mp[FOO_NUM]); // I think there is some potential problem here, am I right?
            // do something
        }));
    }
    for (auto& thread : vec)
    {
        thread.join();
    }
}
As per the documentation for std::map::operator[], which says:
Inserts value_type(key, T()) if the key does not exist. This function is equivalent to return insert(std::make_pair(key, T())).first->second;
I think there is a potential problem in the aforementioned code snippet. You see, this may happen:
1. The first thread creates a mutex and locks it.
2. The second thread creates a new one, and the mutex created by the first thread needs to be destroyed while it's still in use by the first thread.
Yes, there is a data race, but it is even more fundamental.
None of the containers in the C++ library are thread-safe, in any way. None of their operators are thread safe.
mp[FOO_NUM]
In the shown code, multiple execution threads invoke the map's [] operator. The operator itself is not thread-safe; what's contained in the map is immaterial.
the second thread creates a new one, and the mutex created by the first thread needs to be destroyed while it's still in use by the first thread.
The only thing that destroys any mutex in the shown code is the map's destructor when the map itself gets destroyed when returning from main().
std::lock_guard<std::mutex> does not destroy its mutex; when the std::lock_guard gets destroyed it merely releases (unlocks) the mutex. An execution thread's invocation of the map's [] operator may default-construct a new std::mutex, but nothing destroys it when the execution thread gets joined. A value default-constructed in a std::map by its [] operator gets destroyed only when something explicitly destroys it.
And it's the [] operator itself that's not thread safe, it has nothing to do with a mutex's construction or destruction.
operator[] of associative and unordered containers is not specified to be safe from data races if called without synchronization in multiple threads. See [container.requirements.dataraces]/1.
Therefore your code has a data race and consequently undefined behavior. Whether or not a new mutex is created doesn't matter either.
Related
I've read a lot and I'm still unsure whether I understood it or not (I'm a woodworker).
Let's suppose that I have a function:
void class_test::example_1(int a, int b, char c)
{
    // do stuff
    int v;
    int k;
    char z;
    if (condition)
    {
        std::thread thread_in_example(&class_test::example_1, &object, v, k, z);
        thread_in_example.detach();
    }
}
Now if I call it:
std::thread example (&class_test::example_1, &object, a, b, c);
example.detach();
Question: What happens to thread_in_example when example completes and "deletes" itself? Is thread_in_example going to lose access to its parameters?
I thought that std::thread makes a copy of the arguments unless they are passed by &reference, but on http://en.cppreference.com/w/cpp/thread/thread I can't really understand this part (due to my lack of knowledge of programming/English/computer-science semantics):
std::thread objects may also be in the state that does not represent any thread (after default construction, move from, detach, or join), and a thread of execution may be not associated with any thread objects (after detach).
and this one too:
No two std::thread objects may represent the same thread of execution;
std::thread is not CopyConstructible or CopyAssignable, although it is
MoveConstructible and MoveAssignable.
So I have doubts about how it really works.
From this std::thread::detach reference:
Separates the thread of execution from the thread object, allowing execution to continue independently. Any allocated resources will be freed once the thread exits.
[Emphasis mine]
Among those "allocated resources" will be the arguments, which means you can still safely use the arguments in the detached thread.
Unless, of course, the arguments are references or pointers to objects that are destroyed independently of the detached thread or of the thread that created it.
I was reading about the differences between a binary semaphore and a mutex (Difference between binary semaphore and mutex). One thing I want to verify is that when a task locks (acquires) a mutex, only that task can unlock (release) it. If another task tries to unlock a mutex it hasn't locked (and thus doesn't own), an error condition is encountered and, most importantly, the mutex is not unlocked. To test that, I created the code below in C++14:
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>
using namespace std;
int counter;
int i;
std::mutex g_pages_mutex;
void increment()
{
    std::cout << "increment...." << std::endl;
    g_pages_mutex.lock();
    bool flag = g_pages_mutex.try_lock();
    std::cout << "increment Return value is " << flag << std::endl;
    counter++;
    std::this_thread::sleep_for(5s);
}
void increment1()
{
    std::this_thread::sleep_for(5s);
    std::cout << "increment1...." << std::endl;
    g_pages_mutex.unlock();
    counter++;
    bool flag = g_pages_mutex.try_lock();
    std::cout << "increment1 Return value is " << flag << std::endl;
}
int main()
{
    counter = 0;
    std::thread t(increment);
    std::thread t1(increment1);
    t.join();
    t1.join();
    return 0;
}
However, with this example I was able to unlock the mutex from a thread that doesn't own it, so I just want to know: is there a gap in my understanding, or is this an issue with C++14's std::mutex?
Calling try_lock on a std::mutex (which is not recursive) owned by the calling thread, calling unlock on a mutex not owned by the calling thread, and ending a thread while holding a mutex, all result in undefined behavior.
It might appear to succeed, it might fail and throw an exception, it might format your hard drive, it might summon nasal demons, it might time travel and correct your code for you, or it might do something else. As far as the standard is concerned, anything is permissible.
The precondition for calling unlock is holding ownership of the mutex, according to §30.4.1.2 of the standard:
The expression m.unlock() shall be well-formed and have the following semantics:
Requires: The calling thread shall own the mutex.
Since the thread executing increment1 does not hold ownership of the mutex, calling unlock() there is undefined behavior.
From cppreference std::mutex (emphasis mine):
The behavior of a program is undefined if a mutex is destroyed while still owned by any threads, or a thread terminates while owning a mutex.
From the same site on std::mutex::try_lock and as TC points out in their answer:
If try_lock is called by a thread that already owns the mutex, the behavior is undefined.
And also std::mutex::unlock and as TC points out in their answer:
The mutex must be locked by the current thread of execution, otherwise, the behavior is undefined.
Both your functions and threads cause undefined behaviour:
increment calls lock() and then try_lock(): undefined behaviour
increment1 calls unlock() before owning the mutex: undefined behaviour
Removing the try_lock() from increment will still result in undefined behaviour if you don't call unlock() before the thread ends.
You should prefer to use a std::lock_guard, or for a simple int you could also use std::atomic<int>.
For example consider
class ProcessList {
private:
    std::vector<std::shared_ptr<SomeObject>> list;
    Mutex mutex;
public:
    void Add(std::shared_ptr<SomeObject> o) {
        Locker locker(&mutex); // Start of critical section. When Locker is released, so is the mutex - in-house stuff
        list.push_back(std::make_shared<SomeObject>(o));
    }
    void Remove(std::shared_ptr<SomeObject> o) {
        Locker locker(&mutex); // Start of critical section. When Locker is released, so is the mutex - in-house stuff
        // Code to remove said object, indirectly modifying the reference count in the copy below
    }
    void Process() {
        std::vector<std::shared_ptr<SomeObject>> copy;
        {
            Locker locker(&mutex);
            copy = std::vector<std::shared_ptr<SomeObject>>(
                list.begin(), list.end()
            );
        }
        for (auto it = copy.begin(); it != copy.end(); ++it) {
            (*it)->Process(); // This may take time / add to / remove from the list
        }
    }
};
One thread runs Process. Multiple threads run add/remove.
Will the reference count be safe and always correct - or should a mutex be placed around that?
Yes, the standard (at §20.8.2.2, at least as of N3997) is intended to require that the reference counting be thread-safe.
For your simple cases like Add:
void Add(std::shared_ptr<SomeObject> o) {
    Locker locker(&mutex);
    list.push_back(std::make_shared<SomeObject>(o));
}
...the guarantees in the standard are strong enough that you shouldn't need the mutex, so you can just have:
void Add(std::shared_ptr<SomeObject> o) {
    list.push_back(std::make_shared<SomeObject>(o));
}
For some of your operations, it's not at all clear that thread-safe reference counting necessarily obviates your mutex though. For example, inside of your Process, you have:
{
    Locker locker(&mutex);
    copy = std::vector<std::shared_ptr<SomeObject>>(
        list.begin(), list.end()
    );
}
This carries out the entire copy as an atomic operation--nothing else can modify the list during the copy. This assures that your copy gives you a snapshot of the list precisely as it was when the copy was started. If you eliminate the mutex, the reference counting will still work, but your copy might reflect changes made while the copy is being made.
In other words, the thread safety of the shared_ptr only assures that each individual increment or decrement is atomic--it doesn't assure that manipulations of the entire list are atomic as the mutex does in this case.
Since your list is actually a vector, you should be able to simplify the copying code a bit to just copy = list.
Also note that your Locker seems to be a subset of what std::lock_guard provides. It appears you could use:
std::lock_guard<std::mutex> locker(mutex);
...in its place quite easily (note that std::lock_guard takes the mutex by reference, not by pointer).
Using a mutex for reference counting adds overhead. Internally, mutexes use atomic operations; essentially, a mutex already does thread-safe reference counting of its own. So you can just use atomics for your reference counting directly, instead of using a mutex and doing double the work.
Unless your CPU architecture has an atomic increment/decrement and you used that for the reference count, then no, it's not safe; C++ makes no guarantee about the thread safety of x++/x-- operations on any of its standard types.
Use atomic<int> if your compiler supports it (C++11); otherwise you'll need the lock.
Further references:
https://www.threadingbuildingblocks.org/docs/help/tbb_userguide/Atomic_Operations.htm
Is unique_ptr thread safe? Is it impossible for the code below to print the same number twice?
#include <memory>
#include <string>
#include <thread>
#include <cstdio>
using namespace std;
int main()
{
    unique_ptr<int> work;

    thread t1([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread1: %d\n", *localWork);
            this_thread::yield();
        }
    });

    thread t2([&] {
        while (true) {
            const unique_ptr<int> localWork = move(work);
            if (localWork)
                printf("thread2: %d\n", *localWork);
            this_thread::yield();
        }
    });

    for (int i = 0; ; i++) {
        work.reset(new int(i));
        while (work)
            this_thread::yield();
    }
    return 0;
}
unique_ptr is thread safe when used correctly. You broke the unwritten rule: Thou shalt never pass unique_ptr between threads by reference.
The philosophy behind unique_ptr is that it has a single (unique) owner at all times. Because of that, you can always pass it safely between threads without synchronization -- but you have to pass it by value, not by reference. Once you create aliases to a unique_ptr, you lose the uniqueness property and all bets are off. Unfortunately C++ can't guarantee uniqueness, so you are left with a convention that you have to follow religiously. Don't create aliases to a unique_ptr!
No, it isn't thread-safe.
Both threads can potentially move the work pointer with no explicit synchronization, so it's possible for both threads to get the same value, or both to get some invalid pointer ... it's undefined behaviour.
If you want to do something like this correctly, you probably need to use something like std::atomic_exchange so both threads can read/modify the shared work pointer with the right semantics.
According to MSDN:
The following thread safety rules apply to all classes in the Standard C++ Library (except shared_ptr and iostream classes, as described below).
A single object is thread safe for reading from multiple threads. For example, given an object A, it is safe to read A from thread 1 and from thread 2 simultaneously.
If a single object is being written to by one thread, then all reads and writes to that object on the same or other threads must be protected. For example, given an object A, if thread 1 is writing to A, then thread 2 must be prevented from reading from or writing to A.
It is safe to read and write to one instance of a type even if another thread is reading or writing to a different instance of the same type. For example, given objects A and B of the same type, it is safe if A is being written in thread 1 and B is being read in thread 2.
Let's say I have a container (std::vector) of pointers used by a multi-threaded application. When adding new pointers to the container, the code is protected using a critical section (boost::mutex). All well and good. The code should be able to return one of these pointers to a thread for processing, but another separate thread could choose to delete one of these pointers, which might still be in use. e.g.:
thread1()
{
    foo* p = get_pointer();
    ...
    p->do_something();
}

thread2()
{
    foo* p = get_pointer();
    ...
    delete p;
}
So thread2 could delete the pointer whilst thread1 is using it. Nasty.
So instead I want to use a container of Boost shared ptrs. IIRC these pointers will be reference counted, so as long as I return shared ptrs instead of raw pointers, removing one from the container WON'T actually free it until the last use of it goes out of scope. i.e.
std::vector<boost::shared_ptr<foo> > my_vec;

thread1()
{
    boost::shared_ptr<foo> sp = get_ptr(0);
    ...
    sp->do_something();
}

thread2()
{
    boost::shared_ptr<foo> sp = get_ptr(0);
    ...
    my_vec.erase(my_vec.begin());
}

boost::shared_ptr<foo> get_ptr(int index)
{
    lock_my_vec();
    return my_vec[index];
}
In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes? Note that access to the global vector will be via a critical section.
I think this is how shared_ptrs work but I need to be sure.
For the thread safety of boost::shared_ptr you should check this link. It's not guaranteed to be safe, but on many platforms it works. Modifying the std::vector is not safe, AFAIK.
In the above example, if thread1 gets the pointer before thread2 calls erase, will the object pointed to still be valid? It won't actually be deleted when thread1 completes?
In your example, if thread1 gets the pointer before thread2, then thread2 will have to wait at the beginning of the function (because of the lock). So, yes, the object pointed to will still be valid. However, you might want to make sure that my_vec is not empty before accessing its first element.
If, in addition, you synchronize the accesses to the vector (as in your original raw-pointer proposal), your usage is safe. Otherwise, you may fall foul of example 4 in the link provided by the other respondent.