I have an application that continuously push_backs elements into a std::vector. As it is a real-time system, I cannot afford it to stall at any time. Unfortunately, when the reserved memory is exhausted, the automatic reallocation triggered by push_back does cause stalls (up to 800 ms in my measurements).
I have tackled the problem by having a second thread that monitors the available capacity and calls std::vector::reserve if necessary.
My question is: is it safe to execute reserve and push_back concurrently?
(clearly under the assumption that the push_back will not reallocate memory)
Thanks!
It is not thread-safe, because a vector's storage is contiguous: if it gets larger, its contents may need to be moved to a different location in memory.
As suggested by stefan, you can look at non-blocking queues or have a list (or vector) of vectors such that when you need more space, the other thread can reserve a new vector for you while not blocking the original for lookups. You would just need to remap your indices to look up into the correct vector within the list.
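To make the "list (or vector) of vectors" idea concrete, here is a minimal single-threaded sketch; ChunkedVector and ChunkSize are made-up names, and the synchronization between the writer and any pre-allocating thread is deliberately left out. The point is only that each chunk's capacity is fixed up front, so a push_back never moves existing elements, and a global index is remapped to a (chunk, offset) pair.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical sketch of the "list of vectors" idea: elements live in
// fixed-size chunks whose capacity is reserved up front, so push_back into a
// chunk never reallocates and existing elements never move.
template <typename T, std::size_t ChunkSize = 4096>
class ChunkedVector {
public:
    void push_back(const T& value) {
        if (chunks_.empty() || chunks_.back().size() == ChunkSize) {
            chunks_.emplace_back();              // start a new chunk...
            chunks_.back().reserve(ChunkSize);   // ...with its capacity fixed up front
        }
        chunks_.back().push_back(value);         // never reallocates within a chunk
    }

    // Remap a global index into the correct chunk, as the answer describes.
    const T& operator[](std::size_t i) const {
        return chunks_[i / ChunkSize][i % ChunkSize];
    }

    std::size_t size() const {
        if (chunks_.empty()) return 0;
        return (chunks_.size() - 1) * ChunkSize + chunks_.back().size();
    }

private:
    std::deque<std::vector<T>> chunks_;  // deque: adding chunks never moves old ones
};
```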
No.
The concept of thread safety only exists since C++11, and it is defined in relation to const: in the C++11 standard library, const now implies thread-safe. Neither of those two methods is const, so no, it is not thread-safe.
Let's say I have a vector of int which I've prefilled with 100 elements with a value of 0.
Then I create 2 threads and tell the first thread to fill elements 0 to 49 with numbers, then tell thread 2 to fill elements 50 to 99 with numbers. Can this be done? Otherwise, what's the best way of achieving this?
Thanks
Yes, this should be fine. As long as you can guarantee that different threads won't modify the same memory location, there's no problem.
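A minimal sketch of the setup the question describes, assuming C++11 std::thread: the vector is sized once up front, and the two threads write only to disjoint halves, so no element is touched by both and no locking is needed.

```cpp
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> v(100, 0);  // pre-filled with 100 zeroes; the size never changes

    // Each thread writes to a disjoint half of the vector.
    auto fill = [&v](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i)
            v[i] = static_cast<int>(i);
    };

    std::thread t1(fill, 0, 50);
    std::thread t2(fill, 50, 100);
    t1.join();
    t2.join();

    std::cout << v.front() << ' ' << v.back() << '\n';  // prints: 0 99
}
```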
Yes, for most implementations of vector, this should be ok to do. That said, this will have very poor performance on most systems, unless you have a very large number of elements and you are accessing elements that are far apart from each other so that they don't live on the same cache line... otherwise, on many systems, the two threads will invalidate each other's caches back-and-forth (if you are frequently reading/writing to those elements), leading to lots of cache misses in both threads.
Since C++11:
Different elements in the same container can be modified concurrently by different threads, except for the elements of std::vector<bool> (for example, a vector of std::future objects can be receiving values from multiple threads).
cppreference discusses the thread safety of containers here in good detail.
Link was found in a Quora post.
The blanket statement that "vector is not thread-safe" doesn't mean anything by itself.
There's no problem with doing this.
Also, you don't have to allocate your vector on the heap (as one of the answers suggested). You just have to ensure that the lifetime of your vector covers the lifetime of your threads (more precisely, the parts where those threads access the vector).
And, of course, since you want both threads to work on the same vector, they must receive it from somewhere by pointer/reference rather than by value.
There's also absolutely no problem with accessing the same element of the array from within different threads. You should know, however, that your thread is not the only one that accesses it, and act accordingly.
In simple words, there's no problem with accessing an array from within different threads.
Accessing the same element from different threads is like accessing a single variable from different threads: the same precautions and consequences apply.
The only situation you have to worry about is when new elements are added, which is impossible in your case.
There is no reason why this cannot be done. But, as soon as you start mixing accesses (both threads accessing the same element) it becomes far more challenging.
vector is not thread-safe, so you need to guard the vector between threads. In your case it depends on the vector implementation: if the vector's internal data is accessed/modified from different threads, the outcome depends purely on the implementation.
With arrays it can be done for sure (the threads do not access the same memory area); but as already noted, if you use the std::vector class, the result may depend on how it is implemented. Indeed I don't see how the implementation of [] on the vector class can be thread unsafe (provided the threads try to access different "indexes"), but it could be.
The solution is: stick to the use of an array, or control the access to the vector using a semaphore or similar.
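As a sketch of the second option, here is a minimal wrapper (the name GuardedVector is made up) where a std::mutex stands in for the "semaphore or similar" and serializes every access:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Every access goes through the lock, so the threads can never observe the
// vector's internal state mid-update.
struct GuardedVector {
    std::vector<int> data;
    std::mutex m;

    void set(std::size_t i, int value) {
        std::lock_guard<std::mutex> lock(m);
        data[i] = value;
    }

    int get(std::size_t i) {
        std::lock_guard<std::mutex> lock(m);
        return data[i];
    }
};
```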
What you describe is quite possible, and should work just fine.
Note, however, that the threads will need to work on a std::vector*, i.e. a pointer to the original vector, and you probably should allocate the vector on the heap rather than the stack. If you pass the vector directly, the copy constructor will be invoked and each thread will get a separate copy of the data.
There are also lots of more subtle ways to get this wrong, as always with multithreaded programming. But in principle what you described will work well.
This should be fine, but you may end up with poor performance due to false sharing, where each thread can repeatedly invalidate the other's cache line.
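If false sharing does turn out to matter, the usual mitigation is to pad or align each thread's data to a cache-line boundary. A minimal sketch, assuming 64-byte cache lines (the actual line size is platform-dependent):

```cpp
#include <cstddef>
#include <thread>

// Per-thread counters padded/aligned to an assumed 64-byte cache line so the
// two threads never write into the same line (avoiding false sharing).
struct alignas(64) PaddedCounter {
    long value = 0;
};

int main() {
    PaddedCounter counters[2];

    auto work = [&counters](std::size_t idx) {
        for (int i = 0; i < 1000000; ++i)
            ++counters[idx].value;   // each thread touches only its own line
    };

    std::thread t1(work, 0);
    std::thread t2(work, 1);
    t1.join();
    t2.join();
}
```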
I have a program where multiple threads share the same data structure, which is basically a 2D array of vectors, and sometimes two or more threads might have to insert into the same position, i.e. the same vector, which might result in a crash if no precautions are taken. What is the fastest and most efficient way to implement a safe solution for this issue? Since this issue does not happen very often (no high contention), I have used a 2D array of mutexes, where each mutex maps to a vector, and each thread locks the mutex, updates the corresponding vector, then unlocks it. If this is a good solution, I would like to know if there is something faster than a mutex to use.
Note, I am using OpenMP for the multithreading.
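A rough sketch of the per-cell locking scheme described above (grid dimensions and element type are made up; compile with OpenMP enabled, e.g. -fopenmp). std::mutex works fine with threads created by OpenMP; omp_lock_t would be the OpenMP-native equivalent.

```cpp
#include <mutex>
#include <vector>

// One vector and one mutex per cell: only threads hitting the same cell
// ever contend, so locking stays cheap under low contention.
constexpr int ROWS = 64, COLS = 64;

std::vector<int> grid[ROWS][COLS];
std::mutex locks[ROWS][COLS];

void insert(int r, int c, int value) {
    std::lock_guard<std::mutex> guard(locks[r][c]);  // only this cell is serialized
    grid[r][c].push_back(value);
}

int main() {
    #pragma omp parallel for collapse(2)
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            insert(r, c, r * COLS + c);
}
```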
The solution greatly depends on what the problem looks like. For example:
Whether the vector size may exceed its capacity (i.e. a reallocation is required).
Whether the vector is only being read, elements are only being inserted, or elements can be both inserted and removed.
In the first case, you have no option other than using locks, since you always need to check whether the vector is being reallocated, and wait for the reallocation to complete if necessary.
On the other hand, if you are completely sure that the vector is only initialized once by a single thread (which is not your case), probably you would not need any synchronization mechanism to perform access to vector elements (inside-element access synchronization may still be required though).
If elements are being inserted and removed from the back of the vector only (queue style), then an atomic compare-and-swap is enough (atomically increase the size of the vector, and insert at position size-1 when the swap succeeds).
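A sketch of that last idea, with the caveat that std::vector's own size cannot be changed atomically: here the vector is sized once up front (the bound is an assumed example) and the logical size lives in a std::atomic that writers advance with compare-and-swap.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// The vector itself is never resized while threads are running; writers claim
// a slot by advancing the atomic "logical size" and then fill that slot.
std::vector<int> buffer(1 << 20);          // capacity fixed up front (assumed bound)
std::atomic<std::size_t> logical_size{0};

bool try_push(int value) {
    std::size_t idx = logical_size.load();
    do {
        if (idx == buffer.size())
            return false;                  // full: growing would need a locked reallocation
    } while (!logical_size.compare_exchange_weak(idx, idx + 1));
    buffer[idx] = value;                   // slot idx now belongs to this thread
    return true;
}

int main() {
    std::thread t1([] { for (int i = 0; i < 1000; ++i) try_push(i); });
    std::thread t2([] { for (int i = 0; i < 1000; ++i) try_push(i); });
    t1.join();
    t2.join();
    assert(logical_size.load() == 2000);
}
```

Note that a reader bounding its scan by logical_size can observe a slot that has been claimed but not yet written; a real implementation would need a second "committed" counter or per-slot flags.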
If elements may be removed at any point of the vector, its contents may need to be moved to remove empty holes. This case is similar to a reallocation. You can use a customized heap to manage the empty positions in your vector, although this will increase the complexity.
At the end of the day, probably you will need to either develop your own parallel data structure or rely on a library, such as TBB or Boost.
A TBB concurrent_vector can be dynamically resized by using grow_by and grow_to_at_least, and an STL vector also has a resize function. So what is the difference?
The differences I came across are:
1. A concurrent_vector never moves an element until the array is cleared, which can be an advantage over the STL std::vector (which can move elements to resize the vector), even for single-threaded code.
2. Use concurrent_vector only if you really need to dynamically resize it while other accesses are (or might be) in flight, or if you require that an element never move.
Can anyone please explain these points, as I am confused by them?
I take this to mean that once memory is allocated in a concurrent_vector it is always used, as opposed to std::vector, which allocates roughly twice as much memory when it runs out and moves the stored objects to the newly allocated block.
concurrent_vector, I assume, adds new blocks of memory but keeps using the old ones.
Not moving objects is important as it allows other threads to keep accessing the vector even as it is being re-sized. It probably also helps with other optimizations (such as keeping cached copies valid.)
The downside is that access to the elements is slightly slower, as the correct block needs to be found first (one extra dereference).
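A small sketch contrasting the two, assuming a reasonably recent TBB/oneTBB where grow_by returns an iterator to the first appended element:

```cpp
#include <thread>
#include <tbb/concurrent_vector.h>   // oneTBB also provides <oneapi/tbb/concurrent_vector.h>

int main() {
    tbb::concurrent_vector<int> cv;

    // Two threads may grow the same concurrent_vector at the same time;
    // existing elements are never moved, so indices and references obtained
    // earlier stay valid even while the container is growing.
    auto producer = [&cv](int base) {
        for (int i = 0; i < 1000; ++i)
            cv.push_back(base + i);
        // grow_by appends a block of default-constructed elements and, in
        // recent TBB versions, returns an iterator to the first of them.
        auto it = cv.grow_by(10);
        for (int k = 0; k < 10; ++k)
            *it++ = base + 1000 + k;
    };

    std::thread t1(producer, 0);
    std::thread t2(producer, 100000);
    t1.join();
    t2.join();

    // With std::vector, the equivalent concurrent resize/push_back calls would
    // need external locking, and a reallocation could move every element.
}
```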
Here's an explanation of std::vector memory allocation: How is dynamic memory managed in std::vector?
Memory usage in my STL containers is projected to be volatile - that is to say it will frequently shrink and grow. I'm thinking to account for this by specifying an allocator to the STL container type declarations. I understand that pool allocators are meant to handle this type of situation, but my concern is that the volatility will be more than the pool accounts for, and to overcome it I would have to do a lot of testing to determine good pool metrics.
My ideal allocator would never implicitly release memory, and in fact is perfectly acceptable if memory is only ever released upon destruction of the allocator. A member function to explicitly release unused memory would be nice, but not necessary. I know that what I'm referring to sounds like a per-object allocator and this violates the standard. I'd rather stick with the standard, but will abandon it if I can't resolve this within it.
I am less concerned with initial performance and more with average performance. Put another way, it matters less whether a single element or a pool of them is allocated at a time, and more whether said allocation results in a call to new/malloc. I have no problem writing my own allocator, but does anyone know of a preexisting one that accomplishes this? If it makes a difference, this will be for contiguous memory containers (e.g. vector, deque), although a generalized solution would be nice.
I hope this isn't too basic.
Memory is allocated (and potentially reallocated) when adding items, not when removing them.
I believe that never "freeing" memory is only possible if you know the maximum number of elements your application allows. The CRT might try to grow a block of memory in place, but how would you handle the failure cases?
Explanation:
A dynamically expanding vector keeps a capacity large enough to handle most push_backs, and performs a reallocation when this is insufficient.
During the reallocation, a new, larger piece of memory is "newed" up and the elements from the old piece of memory are copied into the new one.
It is important that you don't hold any iterators while you push_back elements, because a reallocation will invalidate the memory locations the iterators point to.
In C++11, move semantics allow elements to be moved instead of copied during a reallocation, so for element types that own heap memory only their internal pointers need to be transferred. This is done via a move constructor instead of a copy constructor.
However, you seem to want to avoid reallocation as much as possible.
Using the default allocator, you can set an initial capacity for the vector by calling reserve.
Capacity is the amount of memory allocated, and size is the number of elements.
Memory will then only be allocated at construction and when the size reaches the capacity, which should only happen on a push_back().
The vector grows its capacity by a multiplicative factor (e.g. 1.5 or 2.0), so a loop that pushes back data costs linear time in total (amortized constant time per push_back). Or, if you know the number of elements ahead of time, you can reserve once.
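A small example of the capacity/size distinction, showing that a single reserve up front removes all reallocations from the push_back loop:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000);                     // pay for the allocation once, up front
    std::size_t cap = v.capacity();      // at least 1000; size() is still 0

    for (int i = 0; i < 1000; ++i)
        v.push_back(i);                  // no reallocation happens in this loop

    assert(v.capacity() == cap);         // capacity unchanged: reserve did the work
}
```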
There are pool concepts you can explore. The idea with a pool is you are not destroying elements but deactivating them.
If you still want to write your own allocator, this is a good article.
custom allocators
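As a side note (it post-dates the question), C++17's std::pmr::monotonic_buffer_resource has essentially the behaviour asked for: it never releases memory until the resource itself is destroyed. A minimal sketch:

```cpp
#include <memory_resource>
#include <vector>

int main() {
    // The monotonic resource hands out memory from an initial block (and from
    // upstream allocations when that runs out) and releases nothing until the
    // resource itself is destroyed.
    std::pmr::monotonic_buffer_resource pool(1 << 20);   // 1 MiB initial block

    std::pmr::vector<int> v(&pool);   // std::vector<int, std::pmr::polymorphic_allocator<int>>
    for (int i = 0; i < 100000; ++i)
        v.push_back(i);               // growth allocates from the pool; abandoned
                                      // blocks are not returned to the OS
}                                     // everything is released here, with the resource
```

The trade-off is that blocks abandoned when the vector grows are not reused, so total memory consumption only ever increases while the resource is alive.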
Let's say I have a vector of N elements, but only up to n elements of this vector hold meaningful data. One updater thread updates the nth or (n+1)st element (then sets n = n+1), and also checks whether n is getting too close to N, calling vector::resize(N+M) if necessary. After updating, the thread spawns multiple child threads that read up to the nth element and do some calculations.
It is guaranteed that child threads never change or delete data (in fact, no data is deleted whatsoever), and the updater calls the children only after it finishes updating.
So far no problem has occurred, but I want to ask whether a problem may occur during reallocation of the vector to a larger memory block, if there are some child worker threads left over from the previous update.
Or is it safe to use vector, even though it is not thread-safe, in such a multithreaded case?
EDIT:
Since only insertion takes place when the updater calls vector::resize(N+M, 0), are there any possible solutions to my problem? Due to the great performance of the STL vector I am not willing to replace it with a lockable vector; alternatively, are there any performant, well-known, lock-free vectors for this case?
I want to ask whether a problem may occur during reallocation of the vector to a larger memory block, if there are some child worker threads left over from the previous update.
Yes, this would be very bad.
If you are using a container from multiple threads and at least one thread may perform some action that may modify the state of the container, access to the container must be synchronized.
In the case of std::vector, anything that changes its size (notably, insertions and erasures) change its state, even if a reallocation is not required (any insertion or erasure requires std::vector's internal size bookkeeping data to be updated).
One solution to your problem would be to have the producer dynamically allocate the std::vector and use a std::shared_ptr<std::vector<T> > to own it and give this std::shared_ptr to each of the consumers.
When the producer needs to add more data, it can dynamically allocate a new std::vector with a new, larger size and copies of the elements from the old std::vector. Then, when you spin off new consumers or update consumers with the new data, you simply need to give them a std::shared_ptr to the new std::vector.
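A hedged sketch of that scheme (the class name Snapshots is made up), assuming a single producer and using a mutex to publish the std::shared_ptr; std::atomic_load/std::atomic_store on the shared_ptr would also work:

```cpp
#include <memory>
#include <mutex>
#include <vector>

// The producer builds a new, larger vector, copies the old data over, and
// publishes it; consumers take a snapshot and keep it alive for as long as
// they need it, even if a newer snapshot has already been published.
class Snapshots {
public:
    std::shared_ptr<const std::vector<int>> load() const {
        std::lock_guard<std::mutex> lock(m_);
        return current_;                      // consumers copy the shared_ptr
    }

    void publish_with(int extra) {
        auto old = load();
        auto next = std::make_shared<std::vector<int>>();
        if (old) *next = *old;                // copy the existing elements
        next->push_back(extra);               // producer-only growth
        std::lock_guard<std::mutex> lock(m_);
        current_ = std::move(next);           // old snapshot freed when its last reader drops it
    }

private:
    mutable std::mutex m_;
    std::shared_ptr<const std::vector<int>> current_;
};
```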
Is the way your workers decide to work on the data thread-safe? Is there any signaling between the workers being done and the producer? If not, then there is definitely an issue where the producer could cause the vector to move while it is still being worked on. This could trivially be fixed by moving to a std::deque instead. (Note that std::deque invalidates iterators on push_back, but references to elements are not affected.)
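A tiny illustration of that last point about std::deque, single-threaded just to show the reference-stability guarantee:

```cpp
#include <cassert>
#include <deque>

int main() {
    std::deque<int> d(100, 0);
    int& ref = d[0];            // reference to an existing element
    for (int i = 0; i < 10000; ++i)
        d.push_back(i);         // invalidates iterators, but not references
    assert(&ref == &d[0]);      // the element was never moved
    ref = 42;
}
```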
I've made my own GrowVector. It works for me and it is really fast.
Link: QList, QVector or std::vector multi-threaded usage