Simple C++ container class that is thread-safe for writing

I am writing a multi-threaded program using OpenMP in C++. At one point my program forks into many threads, each of which need to add "jobs" to some container that keeps track of all added jobs. Each job can just be a pointer to some object.
Basically, I just need to add pointers to some container from several threads at the same time.
Is there a simple solution that performs well? After some googling, I found that STL containers are not thread-safe. Some Stack Overflow threads address this question, but none of them reaches a consensus on a simple solution.

There's no built-in way to do this. You can simply use a lock to guard one of the existing container types. It might be a better idea to have each thread use its own container, then combine the results at the end.

Using a mutex or similar synchronization primitive to control access to a linked list is not very difficult, so I'd recommend you try that first.
If it performs so poorly that you can't use it, try this instead: give each thread its own job queue, and have the job consumer check all the queues in turn. This way each queue has only one reader and one writer, so a lock-free implementation is relatively straightforward. By that I mean one may already exist for your platform; you should not attempt to write it yourself.
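For the OpenMP scenario in the question, a minimal sketch of the per-thread-container approach might look like this (Job, the loop bounds, and the use of raw pointers are placeholders for whatever the real jobs are):

```cpp
#include <omp.h>
#include <vector>

struct Job { /* ... */ };                        // placeholder for the real job type

int main() {
    std::vector<Job*> all_jobs;                  // shared result, only touched under the critical section

    #pragma omp parallel
    {
        std::vector<Job*> local_jobs;            // private to this thread, no locking needed

        #pragma omp for
        for (int i = 0; i < 1000; ++i)
            local_jobs.push_back(new Job());     // each thread adds to its own container

        #pragma omp critical                     // merge once per thread, one thread at a time
        all_jobs.insert(all_jobs.end(), local_jobs.begin(), local_jobs.end());
    }

    // all_jobs now holds every pointer added by every thread.
    for (Job* j : all_jobs) delete j;
}
```

Merging once per thread keeps lock contention low compared with taking a lock on every single push_back.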

Multiple processes pushing elements to list STL C++

I have multiple preforked server processes which accept requests to modify a shared STL C++ list on a server. Each process simply pushes a new element at the end of the list and returns the iterator.
I'm not sure how each process should attempt to acquire a lock on the list. Should the lock cover the entire object, or are STL lists capable of handling concurrency on their own, since we're just pushing an element onto the end of the list?
Assuming you meant threads rather than processes, you can share the STL containers, but you need to be careful with respect to synchronization. The STL containers are thread-safe to some extent, but you need to understand the thread-safety guarantees given:
One container can be used by multiple readers concurrently.
If there is one writer for a container, there shall neither be concurrent readers nor concurrent writers.
The guarantees are per container, i.e., different containers can be used concurrently by different threads without any synchronization between them.
The reason for these restrictions is that the interface for the containers is geared towards efficient use within one thread, and you don't want to impede the processing of an unshared container just because it could potentially be shared across threads. Also, the container interface isn't suitable for any sort of container-maintained concurrency mechanism. For example, just because v.empty() returned false doesn't mean that v.pop() will work, because the container can be empty by then: if there were internal synchronization, any lock would have been released once empty() returned, and the container could have been changed by the time pop() is called.
It is relatively easy to create a queue to be used for communication between different threads. It would use a std::mutex and a suitable instantiation of std::condition_variable. I think something like this has been proposed for inclusion into the standard, but it isn't yet part of the standard C++ library. Note, however, that such a class would not return an iterator to the inserted element, because by the time you accessed it the element might be gone again, and it would be questionable what the iterator would be used for anyway.
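A minimal sketch of such a mutex-plus-condition-variable queue (the class name and interface here are illustrative, not a proposed standard one):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class ConcurrentQueue {
    std::queue<T> queue_;
    mutable std::mutex mutex_;
    std::condition_variable not_empty_;

public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }                                   // release the lock before notifying
        not_empty_.notify_one();
    }

    // Blocks until an element is available; returns it by value, not by iterator.
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
};
```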
The mechanism for doing this kind of synchronisation between multiple processes requires that the developer deal with several issues. Firstly, whatever is being shared between the processes needs to be set up outside of them. In practice this usually means using shared memory.
Then these processes need to communicate with each other with respect to accessing the shared memory. After all, if one process starts to work on a shared data structure but gets swapped out before completing the operation, it will leave the data inconsistent.
This synchronisation can be done using operating-system constructs such as semaphores on Linux, and will allow competing processes to coordinate.
See the platform documentation for the details of Linux-based and Windows-based IPC.
For reference, see the Boost.Interprocess documentation; the library provides a platform-independent implementation of IPC mechanisms.
The standard library containers offer no automagic protection against concurrent modifications, so you need a global lock for every access of the queue.
You even have to be careful with the iterators or references to list elements, since you may not necessarily know when the corresponding element has been removed from the list.
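Assuming threads rather than separate processes, a rough sketch of the "lock every access" approach; note that it deliberately hands back a size snapshot or nothing at all, rather than an iterator into the shared list:

```cpp
#include <cstddef>
#include <list>
#include <mutex>

template <typename T>
class LockedList {
    std::list<T> list_;
    std::mutex mutex_;                     // guards every access to list_

public:
    void push_back(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        list_.push_back(value);            // no iterator is handed back to the caller
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(mutex_);
        return list_.size();               // a snapshot; may be stale by the time it is used
    }
};
```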

Is there a thread-safe structure in the STL or Boost for inter-thread communication, with queue-like behavior?

I have a game with two threads. One generates instances of a custom class and needs to store them (I push them onto a queue, but I am not sure whether that is thread-safe; the first thread generates a new instance every 50 ms, and the second can read faster, if there is anything, or slower, as the speed changes over time). The other thread, if the queue is not empty, pops the first element and calculates some things. Is there any thread-safe data structure for this problem in the STL or Boost?
Using std::queue or any similar container will not be thread-safe. If you want your access (push/pop) to be thread-safe while using std::queue, you should use boost::mutex or a similar mechanism to lock before each access. You can look at boost::shared_mutex if you need concurrent read-only access from more than one thread (not sure you need that based on what you described).
Apart from that, you can take a look at boost::interprocess::message_queue, as someone has already mentioned: http://www.boost.org/doc/libs/1_50_0/boost/interprocess/ipc/message_queue.hpp (for the most recent version of Boost at the time of writing).
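Roughly, using boost::interprocess::message_queue looks like this (the queue name and sizes are arbitrary; the class is designed for inter-process use, which is more than this inter-thread case strictly needs):

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>

namespace bip = boost::interprocess;

int main() {
    bip::message_queue::remove("game_queue");              // drop any stale queue with this name

    // Create a named queue holding up to 100 messages of sizeof(int) bytes each.
    bip::message_queue mq(bip::create_only, "game_queue", 100, sizeof(int));

    int produced = 42;
    mq.send(&produced, sizeof(produced), 0);                // last argument is the priority

    int consumed = 0;
    bip::message_queue::size_type received_size = 0;
    unsigned int priority = 0;
    mq.receive(&consumed, sizeof(consumed), received_size, priority);

    bip::message_queue::remove("game_queue");
    return consumed == 42 ? 0 : 1;
}
```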
Moreover, there is the concept of lock-free queues: en.wikipedia.org/wiki/Non-blocking_algorithm. I cannot provide an example of such an implementation, but I am sure you can find some if you google around.

STL containers' thread-safety for the producer/consumer pattern

I am planning to do the following:
store a deque of pre-built objects to be consumed. The main thread might consume these objects here and there. I have another junky thread used for logging and other not time-critical but expensive things. When the pre-built objects are running low, I will refill them in the junky thread.
Now my question is, is there going to be a race condition here? Technically one thread is consuming objects from the front, and another thread is pushing objects onto the back. As long as I don't let the size run down to zero, it should be fine. The only thing that concerns me is the "size" of this deque. Do STL containers store an integer "size" variable? Would modifying that size variable introduce race conditions?
What's the best way of solving this problem? I don't really want to use locks, because the main thread is performance critical (the reason I pre-built these objects in the first place!)
STL containers are not thread-safe, period; don't play with this. Specifically, deque elements are usually stored in a chain of short arrays, and that chain is modified when operating on the deque, so there's a lot of room for messing things up.
Another option would be to have 2 deques, one for read another for write. The main thread reads, and the other writes. When the read deque is empty, switch the deques (just move 2 pointers), which would involve a lock, but only occasionally.
The consumer thread would drive the switch, so it would only need to take the lock when switching. The producer thread would need to lock per write in case the switch happens in the middle of a write, but as you mention that thread is less performance-critical, so no worries there.
What you're suggesting regarding no locks is indeed dangerous as others mention.
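A rough sketch of the two-deque switching idea described above (the names are illustrative, and the refill policy is left out):

```cpp
#include <deque>
#include <mutex>

template <typename T>
class SwappingBuffer {
    std::deque<T> read_;     // consumed by the main thread; only it touches this deque
    std::deque<T> write_;    // filled by the producer thread under the lock
    std::mutex mutex_;       // taken by the producer per push, by the consumer only on a switch

public:
    void produce(const T& value) {            // producer ("junky") thread
        std::lock_guard<std::mutex> lock(mutex_);
        write_.push_back(value);
    }

    bool consume(T& out) {                    // consumer (main) thread
        if (read_.empty()) {
            std::lock_guard<std::mutex> lock(mutex_);
            read_.swap(write_);               // the occasional switch
        }
        if (read_.empty())
            return false;                     // nothing available right now
        out = read_.front();
        read_.pop_front();
        return true;
    }
};
```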
As @sharptooth mentioned, STL containers aren't thread-safe. Are you using a C++11-capable compiler? If so, you could implement a lock-free queue using atomic types. Otherwise you'd need to use assembler for compare-and-swap, or a platform-specific API.
I would emphasise that you should measure performance when using standard thread synchronisation and see if you do actually need a lock-free technique.
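Purely as an illustration of the C++11 atomics idea, and with the caveats above about preferring an existing implementation and measuring first, a minimal single-producer/single-consumer ring buffer might look like this:

```cpp
#include <atomic>
#include <cstddef>

// Safe only when exactly one thread pushes and exactly one thread pops.
template <typename T, std::size_t Capacity>
class SpscRingBuffer {
    T buffer_[Capacity + 1];                     // one spare slot distinguishes "full" from "empty"
    std::atomic<std::size_t> head_{0};           // advanced by the consumer
    std::atomic<std::size_t> tail_{0};           // advanced by the producer

public:
    bool push(const T& value) {                  // producer thread only
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        std::size_t next = (tail + 1) % (Capacity + 1);
        if (next == head_.load(std::memory_order_acquire))
            return false;                        // buffer full
        buffer_[tail] = value;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    bool pop(T& out) {                           // consumer thread only
        std::size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return false;                        // buffer empty
        out = buffer_[head];
        head_.store((head + 1) % (Capacity + 1), std::memory_order_release);
        return true;
    }
};
```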
There will be a data race even with a non-empty deque.
You'll have to protect all accesses (not just writes) to the deque through locks, or use a queue specifically designed for consumer-producer model in multi-threaded environment (such as Microsoft's unbounded_buffer).
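For reference, Microsoft's unbounded_buffer (from the ConcRT/PPL Asynchronous Agents Library, Windows-only) is used roughly like this:

```cpp
#include <agents.h>   // Microsoft Asynchronous Agents Library (ConcRT/PPL)

int main() {
    concurrency::unbounded_buffer<int> buffer;   // thread-safe producer/consumer buffer

    concurrency::send(buffer, 42);               // producer side: enqueue a message

    int value = concurrency::receive(buffer);    // consumer side: blocks until a message arrives
    return value == 42 ? 0 : 1;
}
```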

Is checking current thread inside a function ok?

Is it ok to check the current thread inside a function?
For example if some non-thread safe data structure is only altered by one thread, and there is a function which is called by multiple threads, it would be useful to have separate code paths depending on the current thread. If the current thread is the one that alters the data structure, it is ok to alter the data structure directly in the function. However, if the current thread is some other thread, the actual altering would have to be delayed, so that it is performed when it is safe to perform the operation.
Or, would it be better to use some boolean which is given as a parameter to the function to separate the different code paths?
Or do something totally different?
What do you think?
You are not making all that much sense. You said a non-thread-safe data structure is only ever altered by one thread, but in the next sentence you talk about delaying any changes made to that data structure by other threads. Make up your mind.
In general, I'd suggest wrapping the access to the data structure up with a critical section, or mutex.
It's possible to use such animals as reader/writer locks to differentiate between readers and writers of data structures, but for typical cases the performance advantage usually won't merit the additional complexity associated with their use.
From the way your question is stated, I'm guessing you're fairly new to multithreaded development. I highly suggest sticking with the simplest and most commonly used approaches for ensuring data integrity (most books/articles you read on the issue will mention the same uses for mutexes/critical sections). Multithreaded development is extremely easy to get wrong and can be difficult to debug. Also, what seems like the "optimal" solution very often doesn't buy you the huge performance benefit you might expect. It's usually best to implement the simplest approach that will work, then worry about optimizing it after the fact.
There is a trick that could work if, as you said, the other threads only make changes once in a while, although it is still rather hackish:
make sure your "master" thread can't be interrupted by the other ones (higher priority, non fair scheduling)
check your thread
if "master", just change
if another thread, hold off scheduling (if needed by disabling interrupts), make the change, then re-enable scheduling
really test your setup to make sure there are no issues.
As you can see, if requirements change a little bit, this could turn out worse than using normal locks.
As mentioned, the simplest solution when two threads need access to the same data is to use some synchronization mechanism (i.e. critical section or mutex).
If you already have synchronization in your design try to reuse it (if possible) instead of adding more. For example, if the main thread receives its work from a synchronized queue you might be able to have thread 2 queue the data structure update. The main thread will pick up the request and can update it without additional synchronization.
The queuing concept can be hidden from the rest of the design through the Active Object pattern. The active object may also be able to publish the data structure changes to other interested threads through the Observer pattern.
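A simplified sketch of that idea: threads other than the owner post the update as a task onto a synchronized queue, and the owning thread applies the queued updates when convenient (the names and the use of std::function are illustrative, not a full Active Object implementation):

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class DataOwner {
    std::vector<int> data_;                         // the non-thread-safe structure
    std::thread::id owner_ = std::this_thread::get_id();   // assumes construction on the owner thread
    std::queue<std::function<void()>> pending_;     // updates queued by other threads
    std::mutex mutex_;                              // guards pending_ only, not data_

public:
    void add(int value) {
        if (std::this_thread::get_id() == owner_) {
            data_.push_back(value);                 // owner thread: mutate directly
        } else {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push([this, value] { data_.push_back(value); });
        }
    }

    // Called periodically by the owner thread, e.g. once per frame or loop iteration.
    void apply_pending() {
        std::queue<std::function<void()>> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(pending_);                   // grab the whole batch under the lock
        }
        while (!batch.empty()) {                    // run the updates without holding the lock
            batch.front()();
            batch.pop();
        }
    }
};
```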

Are STL Map or HashMaps thread safe?

Can I use a map or hashmap in a multithreaded program without needing a lock?
i.e. are they thread safe?
I'm wanting to potentially add and delete from the map at the same time.
There seems to be a lot of conflicting information out there.
By the way, I'm using the STL library that comes with GCC under Ubuntu 10.04
EDIT: Just like the rest of the internet, I seem to be getting conflicting answers?
You can safely perform simultaneous read operations, i.e. call const member functions. But you can't do any simultaneous operations if one of them involves writing, i.e. a call to a non-const member function must be the only access to the container and can't be mixed with any other calls.
In other words, you can't change the container from multiple threads, so you need to use a lock or read/write lock to make the access safe.
No.
Honest. No.
edit
Ok, I'll qualify it.
You can have any number of threads reading the same map. This makes sense because reading it doesn't have any side-effects, so it can't matter whether anyone else is also doing it.
However, if you want to write to it, then you need to get exclusive access, which means preventing any other threads from writing or reading until you're done.
Your original question was about adding and removing in parallel. Since these are both writes, the answer to whether they're thread-safe is a simple, unambiguous "no".
TBB is a free open-source library that provides thread-safe associative containers. (http://www.threadingbuildingblocks.org/)
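For example, a sketch using tbb::concurrent_hash_map (the key and value types are arbitrary):

```cpp
#include <tbb/concurrent_hash_map.h>
#include <string>

using JobMap = tbb::concurrent_hash_map<int, std::string>;

void add_job(JobMap& jobs, int id, const std::string& name) {
    JobMap::accessor a;            // write accessor: locks the element while held
    jobs.insert(a, id);            // insert (or find) the key
    a->second = name;              // safe to modify while the accessor is alive
}

bool find_job(const JobMap& jobs, int id, std::string& out) {
    JobMap::const_accessor a;      // read accessor: allows concurrent readers
    if (!jobs.find(a, id))
        return false;
    out = a->second;
    return true;
}

void remove_job(JobMap& jobs, int id) {
    jobs.erase(id);                // concurrent erase by key is supported
}
```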
The most commonly used model for STL containers' thread safety is the SGI one:
The SGI implementation of STL is thread-safe only in the sense that simultaneous accesses to distinct containers are safe, and simultaneous read accesses to shared containers are safe.
but in the end it's up to the STL library authors - AFAIK the standard says nothing about STL's thread-safety.
But according to the docs, GNU's libstdc++ implementation follows it (as of gcc 3.0+), provided a number of conditions are met.
HIH
The answer (like most threading problems) is it will work most of the time. Unfortunately if you catch the map while it's resizing then you're going to end up in trouble. So no.
To get the best performance you'll need a multi-stage lock: firstly a read lock, which allows accessors that can't modify the map and which can be held by multiple threads (more than one thread reading items is OK); secondly a write lock, which is exclusive and allows modification of the map in ways that could be unsafe (add, delete, etc.).
edit Reader-writer locks are good, but whether they're better than a standard mutex depends on the usage pattern. I can't recommend either without knowing more. Profile both and see which best fits your needs.
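A sketch of that read-lock/write-lock split using C++17's std::shared_mutex (boost::shared_mutex works the same way on older compilers):

```cpp
#include <map>
#include <shared_mutex>
#include <string>

class GuardedMap {
    std::map<int, std::string> map_;
    mutable std::shared_mutex mutex_;

public:
    // Many threads may read at the same time.
    bool get(int key, std::string& out) const {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        auto it = map_.find(key);
        if (it == map_.end())
            return false;
        out = it->second;
        return true;
    }

    // Writers get exclusive access, blocking both readers and other writers.
    void put(int key, const std::string& value) {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        map_[key] = value;
    }

    void erase(int key) {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        map_.erase(key);
    }
};
```

Whether this beats a plain std::mutex depends on how read-heavy the workload is, so, as the answer says, profile both.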