Pushing to a vector and iterating on it from different threads - c++

Is such a piece of code safe?
vector<int> v;
void thread_1()
{
v.push_back(100);
}
void thread_2()
{
for (int i : v)
{
// some actions
}
}
Does for (int i : v) compile into something like:
for ( ; __begin != __end; ++__begin)
{
int i = *__begin;
}
Is it possible that push_back in the first thread triggers a reallocation (when size == capacity) that frees the old storage, so that *__begin in the other thread dereferences a dangling iterator and a runtime crash occurs?
If so, how should I synchronize the threads? Something like:
void thread_1()
{
mtx.lock();
v.push_back(100);
mtx.unlock();
}
void thread_2()
{
mtx.lock();
for (int i : v)
{
// some actions
}
mtx.unlock();
}
?

Simply put: on some architectures, loads and stores of fundamental types are inherently atomic, while on others they are not.
On the former architectures, writing to and reading from vector<int> v is thread safe as long as no reallocation occurs and the ints are properly aligned; but it depends on various factors.
BUT:
you may want to avoid writing architecture-specific code (unless you want your code to basically run only on your own computer);
you have no mechanism in your code to prevent reallocation (which may invalidate iterators held by other threads), and since you also have no mechanism to synchronize the threads, such reallocations can easily occur;
considering your design, if you have a std::vector of classes/structs instead of fundamental types, you also risk race conditions and/or UB even in simple concurrent reads/writes, since one thread can see a vector element in a broken state (i.e. thread 2 can see element #x of the vector while it is being changed (push_back'd) by thread 1).
In order to ensure thread safety, you have many options:
prevent modifications to the vector entirely while it is being read or written, using a global mutual-exclusion mechanism (basically what you are doing in your own solution);
prevent modifications by other threads only for the element currently being manipulated, using fine-grained mutual exclusion (which can get tricky for linked data structures);
use a thread-safe data structure with built-in mechanisms ensuring that a single element cannot be accessed by multiple threads at once;
...
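As a sketch of the first option (and of the mutex solution proposed in the question), the manual lock()/unlock() pairs can be replaced with RAII std::lock_guard, which also releases the mutex if push_back throws. The thread_1_add/thread_2_sum names are invented for illustration:

```cpp
#include <mutex>
#include <vector>

std::vector<int> v;
std::mutex mtx;

// Writer: the lock_guard releases mtx automatically, even if push_back throws.
void thread_1_add(int value)
{
    std::lock_guard<std::mutex> lock(mtx);
    v.push_back(value);
}

// Reader: hold the lock for the whole traversal, so no concurrent
// reallocation can invalidate the iterators behind the range-for.
long long thread_2_sum()
{
    std::lock_guard<std::mutex> lock(mtx);
    long long sum = 0;
    for (int i : v)
        sum += i;
    return sum;
}
```

This is the coarsest-grained scheme: correct and simple, at the cost of readers and writers never overlapping.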

Related

How to individually lock unordered_map elements in C++

I have an unordered_map that I want to be accessible by multiple threads but locking the whole thing with a mutex would be too slow.
To get around this I put a mutex in each element of the unordered_map:
class exampleClass{
std::mutex m;
int data;
};
std::unordered_map<int,exampleClass> exampleMap;
The issue is I'm unable to safely erase elements: in order to destroy a mutex it must be unlocked, but if it's unlocked then another thread could lock it and be writing to or reading the element during destruction.
unordered_map is not suitable for fine-grained parallelism. It is not legal
to add or remove elements without ensuring mutual exclusion during the process.
I would suggest using something like tbb::concurrent_hash_map instead, which will result in less lock contention than locking the map as a whole. (There are other concurrent hash table implementations out there; the advantage of TBB is that it's well-supported and stable.)
#Sneftel's answer is good enough.
But if you insist on using std::unordered_map, I suggest you use one mutex to protect insertion/deletion of the map, and another mutex per element for modifying that element.
#include <mutex>
#include <tuple>
#include <unordered_map>
#include <utility>

class exampleClass{
public:
    explicit exampleClass(int d) : data(d) {}
    std::mutex m;
    int data;
};
std::unordered_map<int,exampleClass> exampleMap;
std::mutex mapLock;
void add(int key, int value) {
    std::unique_lock<std::mutex> _(mapLock);
    // the mutex member makes exampleClass non-copyable, so construct in place
    exampleMap.emplace(std::piecewise_construct,
                       std::forward_as_tuple(key),
                       std::forward_as_tuple(value));
}
void remove(int key) { // "delete" is a keyword, so it cannot name a function
    std::unique_lock<std::mutex> _(mapLock);
    auto it = exampleMap.find(key);
    if (it != exampleMap.end()) {
        // wait for any in-flight user of the element, then release its mutex
        // before erasing: destroying a locked mutex is undefined behavior
        { std::unique_lock<std::mutex> _1(it->second.m); }
        exampleMap.erase(it);
    }
}
These should perform better than one big lock on the whole map if delete is not a frequent operation.
But be careful with this kind of code, because it is hard to reason about and to get right.
I strongly recommend #Sneftel's answer.
You have the following options:
Lock the entire map with a single mutex
Use a container of shared_ptr so the actual class can be modified (with or without a mutex) unrelated to the container.
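The shared_ptr option can be sketched roughly as follows (all names here are hypothetical, and -1 is an arbitrary "not found" sentinel). The map mutex guards only lookup/insert/erase; the shared_ptr keeps an erased element alive until its last user is done, so a locked mutex is never destroyed:

```cpp
#include <memory>
#include <mutex>
#include <unordered_map>

struct ExampleClass {
    std::mutex m;
    int data = 0;
};

std::unordered_map<int, std::shared_ptr<ExampleClass>> exampleMap;
std::mutex mapLock;

void addEntry(int key, int value)
{
    auto p = std::make_shared<ExampleClass>();
    p->data = value;
    std::lock_guard<std::mutex> _(mapLock);
    exampleMap[key] = std::move(p);
}

void removeEntry(int key)
{
    std::lock_guard<std::mutex> _(mapLock);
    exampleMap.erase(key); // element survives if another thread still holds it
}

int readEntry(int key) // -1 = "not found" sentinel for this sketch
{
    std::shared_ptr<ExampleClass> p;
    {
        std::lock_guard<std::mutex> _(mapLock);
        auto it = exampleMap.find(key);
        if (it == exampleMap.end())
            return -1;
        p = it->second; // copy the pointer, then drop the map lock
    }
    std::lock_guard<std::mutex> _(p->m); // per-element lock, map lock not held
    return p->data;
}
```

Note that the per-element work happens entirely outside the map lock, which is what buys the extra concurrency.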

is it ok to access value(entry in thread safe map) pointed by pointer inside non-thread safe container?

For example,
// I am using thread safe map from
// code.google.com/p/thread-safe-stl-containers
#include <thread_safe_map.h>
class B{
vector<int> b1;
};
//Thread safe map
thread_safe::map<int, B> A;
B b_object;
A[1] = b_object;
// Non thread safe map.
map<int, B*> C;
C[1] = &A[1]; // operator[] on the thread-safe map returns a reference to the B
So are following operations still thread safe?
Thread1:
for(int i=0; i<10000; i++) {
cout << C[1]->b1[i];
}
Thread2:
for(int i=0; i<10000; i++) {
C[1]->b1.push_back(i);
}
Is there any problem in the above code? If so how can I fix it?
Is it OK to access value(entry in thread safe map) pointed by pointer inside non-thread safe container?
No, what you are doing there is not safe. The way your thread_safe_map is implemented is to take a lock for the duration of every function call:
//Element Access
T & operator[]( const Key & x ) { boost::lock_guard<boost::mutex> lock( mutex ); return storage[x]; }
The lock is released as soon as the access function ends which means that any modification you make through the returned reference has no protection.
As well as not being entirely safe, this method is very slow.
A safe(er), efficient, but highly experimental way to lock containers is proposed here: https://github.com/isocpp/CppCoreGuidelines/issues/924
with source code here https://github.com/galik/GSL/blob/lockable-objects/include/gsl/gsl_lockable (shameless self promotion disclaimer).
In general, STL containers can be accessed from multiple threads as long as all threads either:
read from the same container
modify elements in a thread safe manner
You cannot push_back (or erase, insert, etc.) from one thread and read from another thread. Suppose that you are trying to access an element in thread 1 while push_back in thread 2 is in the middle of reallocation of vector's storage. This might crash the application, might return garbage (or might work, if you're lucky).
The second bullet point applies to situations like this:
std::vector<std::atomic_int> elements;
// Thread 1:
elements[10].store(5);
// Thread 2:
int v = elements[10].load();
In this case, you're concurrently reading and writing an atomic variable, but the vector itself is not modified - only its element is.
Edit: using thread_safe::map doesn't change anything in your case. While modifying the map is OK, modifying its elements is not. Putting a std::vector in a thread-safe collection doesn't automagically make the vector thread-safe too.
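One way to repair the asker's design, assuming the inner vector is the only shared state, is to give B its own mutex and route every access to b1 through locked member functions. The push/at names and the -1 out-of-range sentinel are invented for this sketch:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical fix: B serializes all access to its own vector, so the
// surrounding map may hand out references to B without extra locking.
class B {
public:
    void push(int v)
    {
        std::lock_guard<std::mutex> lock(m);
        b1.push_back(v);
    }
    int at(std::size_t i) // returns -1 when out of range (sketch-only sentinel)
    {
        std::lock_guard<std::mutex> lock(m);
        return i < b1.size() ? b1[i] : -1;
    }
private:
    std::mutex m;
    std::vector<int> b1;
};
```

The bounds check inside the lock also fixes the original reader, which indexed b1[i] with no guarantee the element existed yet.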

Strange behavior of java.util.ArrayList and java.util.LinkedList when removing elements in for-each loop [duplicate]

ConcurrentModificationException : This exception may be thrown by methods that have detected concurrent modification of an object when such modification is not permissible.
Above is ConcurrentModificationException definition from javadoc.
So I try to test below code:
final List<String> tickets = new ArrayList<String>(100000);
for (int i = 0; i < 100000; i++) {
tickets.add("ticket NO," + i);
}
for (int i = 0; i < 10; i++) {
Thread salethread = new Thread() {
public void run() {
while (tickets.size() > 0) {
tickets.remove(0);
System.out.println(Thread.currentThread().getId()+"Remove 0");
}
}
};
salethread.start();
}
The code is simple.
10 threads remove the element from the arraylist object.
So it is certain that multiple threads access one object. But it runs OK: no exception is thrown.
Why?
I'm quoting a large section of the ArrayList Javadoc for your benefit. Relevant portions that explain the behavior you are seeing are highlighted.
Note that this implementation is not synchronized. If multiple threads
access an ArrayList instance concurrently, and at least one of the
threads modifies the list structurally, it must be synchronized
externally. (A structural modification is any operation that adds or
deletes one or more elements, or explicitly resizes the backing array;
merely setting the value of an element is not a structural
modification.) This is typically accomplished by synchronizing on some
object that naturally encapsulates the list. If no such object exists,
the list should be "wrapped" using the Collections.synchronizedList
method. This is best done at creation time, to prevent accidental
unsynchronized access to the list:
List list = Collections.synchronizedList(new ArrayList(...));
The iterators returned by this class's iterator and listIterator methods
are fail-fast: if the list is structurally modified at any time after
the iterator is created, in any way except through the iterator's own
remove or add methods, the iterator will throw a
ConcurrentModificationException. Thus, in the face of concurrent
modification, the iterator fails quickly and cleanly, rather than
risking arbitrary, non-deterministic behavior at an undetermined time
in the future.
Note that the fail-fast behavior of an iterator cannot be guaranteed
as it is, generally speaking, impossible to make any hard guarantees
in the presence of unsynchronized concurrent modification. Fail-fast
iterators throw ConcurrentModificationException on a best-effort
basis. Therefore, it would be wrong to write a program that depended
on this exception for its correctness: the fail-fast behavior of
iterators should be used only to detect bugs.
ArrayLists will generally throw concurrent modification exceptions if you modify the list structurally while accessing it through its iterator (but even this is not an absolute guarantee). Note that in your example you are removing elements from the list directly, and you are not using an iterator.
If it tickles your fancy, you can also browse the implementation of ArrayList.remove, to get a better understanding of how it works.
I don't think 'concurrent' means thread-related in this case, or at least it doesn't necessarily mean that. ConcurrentModificationExceptions usually arise from modifying a collection while in the process of iterating over it.
List<String> list = new ArrayList<String>();
for(String s : list)
{
//modifying list results in ConcurrentModificationException
list.add("don't do this");
}
Note that the Iterator<> class has a few methods that can circumvent this:
for (Iterator<String> it = list.iterator(); it.hasNext(); )
{
    it.next(); // must advance before calling remove()
    // no ConcurrentModificationException
    it.remove();
}
The reason you are not receiving a ConcurrentModificationException is that ArrayList.remove does not throw one. You can probably get one by starting an additional thread that iterates through the array:
final List<String> tickets = new ArrayList<String>(100000);
for (int i = 0; i < 100000; i++) {
tickets.add("ticket NO," + i);
}
for (int i = 0; i < 10; i++) {
Thread salethread = new Thread() {
public void run() {
while (tickets.size() > 0) {
tickets.remove(0);
System.out.println(Thread.currentThread().getId()+"Remove 0");
}
}
};
salethread.start();
}
new Thread() {
public void run() {
int totalLength = 0;
for (String s : tickets) {
totalLength += s.length();
}
}
}.start();
Because you are not using an iterator, there is no chance of a ConcurrentModificationException being thrown.
Calling remove(0) will simply remove the first element. It might not be the same element intended by the caller if another thread removes 0 before execution completes.
But it runs OK. No exception is thrown. Why?
Simply because that concurrent modification is permissible.
The description of the exception says this:
"This exception may be thrown by methods that have detected concurrent modification of an object when such modification is not permissible."
The clear implication is that there are (or may be) permissible concurrent modifications. And in fact, for the standard Java non-concurrent collection classes, concurrent modifications are permitted ... provided that they don't happen during an iteration.
The reasoning behind this is that for the non-concurrent collections, modification while iterating is fundamentally unsafe and unpredictable. Even if you were to synchronize correctly (and that isn't easy1), the result would still be unpredictable. The "fail-fast" checks for concurrent modifications were included in the regular collection classes because this was a common source of Heisenbugs in multi-threaded applications that used the Java 1.1 collection classes.
1- For instance, the "synchronizedXxx" wrapper classes don't, and can't synchronize with iterators. The problem is that iteration involves alternating calls to next() and hasNext(), and the only way to do a pair of method calls while excluding other threads is to use external synchronization. The wrapper approach isn't practical in Java.

std::shared_ptr crashing when used in threads

In thread 1 (paraphrased code):
std::vector<std::shared_ptr<Object>> list;
// Initialization
list.reserve(prop_count);
for (size_t i = 0; i < count; ++i)
{
list.push_back(std::shared_ptr<Object>());
}
// Looped code
for (auto iter = indexes.begin(); iter != indexes.end(); ++iter)
{
uint32_t i = *iter;
std::shared_ptr<Object> item = make_object(table->data[i]); // returns a shared_ptr of Object
list[i].swap(item);
}
in thread 2 (paraphrased code):
for(auto iter = list.begin(); iter != list.end(); ++iter)
{
shared_ptr<Property> o(*iter);
if(o)
{
// some work with casting it
// dynamic_pointer_cast
}
} // <--- crashes here (after o is out of scope)
Here is the call stack:
0x006ea218 C/C++
std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)1>::_M_release(this = 0x505240) C/C++
std::__shared_count<(__gnu_cxx::_Lock_policy)1>::~__shared_count(this = 0xb637dc94) C/C++
std::__shared_ptr<Property, (__gnu_cxx::_Lock_policy)1>::~__shared_ptr(this = 0xb637dc90) C/C++
std::shared_ptr<Property>::~shared_ptr(this = 0xb637dc90) C/C++
startSending() C/C++
libpthread.so.0!start_thread() C/C++
libc.so.6 + 0xb52b8 C/C++
Looking at shared_ptr_base.h, it seems to crash here:
if (__gnu_cxx::__exchange_and_add_dispatch(&_M_use_count, -1) == 1)
{
_GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(&_M_use_count);
_M_dispose(); // <--- HERE
I'm not sure how to fix this. Any help is appreciated. Thanks!
From http://en.cppreference.com/w/cpp/memory/shared_ptr with my emphasis added:
If multiple threads of execution access the same shared_ptr
without synchronization and any of those accesses uses a non-const
member function of shared_ptr then a data race will occur; the
shared_ptr overloads of atomic functions can be used to prevent the
data race.
In this case, list[i] and *iter refer to the same shared_ptr instance.
For thread 1, recommend std::atomic_store(&list[i], item) instead of list[i].swap(item)
For thread 2, recommend std::shared_ptr<Property> o(std::atomic_load(&*iter)) instead of std::shared_ptr<Property> o(*iter);
This all assumes the vector's size doesn't change and introduce issues of the container's thread safety, iterators being invalidated, etc. That's outside the scope of this question though and covered elsewhere.
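The recommended atomic free functions might look like this in a minimal, self-contained sketch. The publish/read_slot names and the -1 sentinel are invented; also note that the std::atomic_load/std::atomic_store overloads for shared_ptr are deprecated in C++20 in favor of std::atomic<std::shared_ptr<T>>:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Object { int value = 0; };

std::vector<std::shared_ptr<Object>> list(4); // fixed size: never reallocated

// Writer (thread 1): publish a new element atomically instead of swap().
void publish(std::size_t i, int v)
{
    auto item = std::make_shared<Object>();
    item->value = v;
    std::atomic_store(&list[i], item);
}

// Reader (thread 2): take an atomic snapshot of the pointer before using it.
int read_slot(std::size_t i)
{
    std::shared_ptr<Object> o = std::atomic_load(&list[i]);
    return o ? o->value : -1; // -1 marks an empty slot in this sketch
}
```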
1) Putting the data into the container: use a queue, not a vector. Don't reserve and swap, just push() the items onto the queue.
2) Each push needs to be protected by a mutex (class member).
====== Second Thread =======
3) Pop values off the queue; each pop needs to be protected by the same mutex as above.
See: Using condition variable in a producer-consumer situation
Yeah, you could add and use a mutex. That would likely work as described.
But that defeats the purpose: it is another mutex to maintain and a typical contention point. Prefer atomics to mutexes whenever possible, and your lock-free performance will thank you.
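A minimal sketch of the queue-plus-mutex design from the second answer, with a condition variable so the consumer can block instead of spinning. Task, push and pop are made-up names, and a real version would also need a shutdown path:

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Task { int value = 0; };

std::queue<std::shared_ptr<Task>> work;
std::mutex work_mtx;
std::condition_variable work_cv;

// Producer: every push is protected by the same mutex as every pop.
void push(std::shared_ptr<Task> item)
{
    {
        std::lock_guard<std::mutex> lock(work_mtx);
        work.push(std::move(item));
    }
    work_cv.notify_one(); // wake one waiting consumer
}

// Consumer: sleeps on the condition variable instead of spinning while empty.
std::shared_ptr<Task> pop()
{
    std::unique_lock<std::mutex> lock(work_mtx);
    work_cv.wait(lock, [] { return !work.empty(); });
    auto item = std::move(work.front());
    work.pop();
    return item;
}
```

Because the queue owns each shared_ptr until pop() hands it over, no element is ever touched by two threads at once.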

Mutex when writing to queue held in map for thread safety

I have a map<int, queue<int>> with one thread writing into it i.e. pushing messages into the queues. They key refers to a client_id, and the queue holds messages for the client. I am looking to make this read-write thread safe.
Currently, the thread that writes into it does something like this
map<int, queue<int>> msg_map;
if (msg_map.find(client_id) != msg_map.end())
{
queue<int> dummy_queue;
dummy_queue.push(msg); //msg is an int
msg_map.insert(make_pair(client_id, dummy_queue));
}
else
{
msg_map[client_id].push(msg);
}
There are many clients reading - and removing - from this map.
if (msg_map.find(client_id) != msg_map.end())
{
if (!msg_map.find(client_id)->second.empty())
{
int msg_rxed = msg_map[client_id].front();
//processing message
msg_map[client_id].pop();
}
}
I am reading this on mutexes (haven't used them before) and I was wondering when and where I ought to lock the mutex. My confusion lies in the fact that they are accessing individual queues (held within the same map). Do I lock the queues, or the map?
Is there a standard/accepted way to do this, and is using a mutex the best way to do it? There are 10s of client threads, and just that one writing thread.
Simplifying and optimizing your code
For now we'll not concern ourselves with mutexes, we'll handle that later when the code is cleaned up a bit (it will be easier then).
First, from the code you showed there seems to be no reason to use an ordered std::map (logarithmic complexity), you could use the much more efficient std::unordered_map (average constant-time complexity). The choice is entirely up to you, if you don't need the container to be ordered you just have to change its declaration:
std::map<int, std::queue<int>> msg_map;
// or
std::unordered_map<int, std::queue<int>> msg_map; // C++11 only though
Now, maps are quite efficient by design but if you insist on doing lookups for each and every operation then you lose all the advantage of maps.
Concerning the writer thread, all your block of code (for the writer) can be efficiently replaced by just this line:
msg_map[client_id].push(msg);
Note that operator[] for both std::map and std::unordered_map is defined as:
Inserts a new element to the container using key as the key and a default constructed mapped value and returns a reference to the newly constructed mapped value. If an element with key key already exists, no insertion is performed and a reference to its mapped value is returned.
Concerning your reader threads, you can't directly use operator[] because it would create a new entry if none currently exists for a specific client_id so instead, you need to cache the iterator returned by find in order to reuse it and thus avoid useless lookups:
auto iter = msg_map.find(client_id);
// iter will be either std::map<int, std::queue<int>>::iterator
// or std::unordered_map<int, std::queue<int>>::iterator
if (iter != msg_map.end()) {
std::queue<int>& q = iter->second;
if (!q.empty()) {
int msg = q.front();
q.pop();
// process msg
}
}
The reason why I pop the message immediately, before processing it, is because it will improve concurrency when we add mutexes (we can unlock the mutex sooner, which is always good).
Making the code thread-safe
#hmjd's idea about multiple locks (one for the map, and one per queue) is interesting, but based on the code you showed us I disagree: any benefit you'll get from the additional concurrency will quite probably be negated by the additional time it takes to lock the queue mutexes (indeed, locking mutexes is a very expensive operation), not to mention the additional code complexity you'll have to handle. I'll bet my money on a single mutex (protecting the map and all the queues at once) being more efficient.
Incidentally, a single mutex solves the iterator invalidation problem if you want to use the more efficient std::unordered_map (std::map doesn't suffer from that problem though).
Assuming C++11, just declare a std::mutex along with your map:
std::mutex msg_map_mutex;
std::map<int, std::queue<int>> msg_map; // or std::unordered_map
Protecting the writer thread is quite straightforward, just lock the mutex before accessing the map:
std::lock_guard<std::mutex> lock(msg_map_mutex);
// the lock is held while the lock_guard object stays in scope
msg_map[client_id].push(msg);
Protecting the reader threads is barely any harder, the only trick is that you'll probably want to unlock the mutex ASAP in order to improve concurrency so you'll have to use std::unique_lock (which can be unlocked early) instead of std::lock_guard (which can only unlock when it goes out of scope):
std::unique_lock<std::mutex> lock(msg_map_mutex);
auto iter = msg_map.find(client_id);
if (iter != msg_map.end()) {
std::queue<int>& q = iter->second;
if (!q.empty()) {
int msg = q.front();
q.pop();
// assuming you don't need to access the map from now on, let's unlock
lock.unlock();
// process msg, other threads can access the map concurrently
}
}
If you can't use C++11, you'll have to replace std::mutex et al. with whatever your platform provides (pthreads, Win32, ...) or with the boost equivalent (which has the advantage of being as portable and as easy to use as the new C++11 classes, unlike the platform-specific primitives).
Read and write access to both the map and the queue need to be synchronized, as both structures are being modified, including the map:
map<int, queue<int>> msg_map;
if (msg_map.find(client_id) != msg_map.end())
{
queue<int> dummy_queue;
dummy_queue.push(msg); //msg is an int
msg_map.insert(make_pair(client_id, dummy_queue));
}
else
{
msg_map[client_id].push(msg); // Modified here.
}
Two options are a mutex that locks both the map and queue or have a mutex for the map and a mutex per queue. The second approach is preferable as it reduces the length of time a single lock is held and means multiple threads can be updating several queues concurrently.
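The second approach might be sketched like this (all names invented; it assumes entries are never erased, which keeps per-entry pointers valid, since std::map never invalidates references on insert):

```cpp
#include <map>
#include <mutex>
#include <queue>

// Two-level locking: map_mtx guards the map's structure (insert/find),
// while each Entry carries its own mutex for its queue, so different
// clients' queues can be pushed and popped concurrently.
struct Entry {
    std::mutex mtx;
    std::queue<int> q;
};

std::map<int, Entry> msg_map;
std::mutex map_mtx;

void push_msg(int client_id, int msg)
{
    Entry* e;
    {
        std::lock_guard<std::mutex> lock(map_mtx);
        e = &msg_map[client_id]; // creates the entry on first use
    }
    std::lock_guard<std::mutex> lock(e->mtx); // queue lock only; map lock dropped
    e->q.push(msg);
}

bool pop_msg(int client_id, int& msg)
{
    Entry* e;
    {
        std::lock_guard<std::mutex> lock(map_mtx);
        auto it = msg_map.find(client_id);
        if (it == msg_map.end())
            return false;
        e = &it->second;
    }
    std::lock_guard<std::mutex> lock(e->mtx);
    if (e->q.empty())
        return false;
    msg = e->q.front();
    e->q.pop();
    return true;
}
```

Erasing entries would reintroduce the destroy-a-locked-mutex problem discussed earlier in this document, which is one more argument for the single-mutex design above.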