Linux/C++: how to protect two data structures at the same time?

I'm developing some programs in C/C++ on Linux. My question is:
I have an upper-level class called Vault, inside of which there are an OrderMap, which uses an unordered_map as its data structure, and an OrderBook, which has two std::lists inside.
Both the OrderMap and the OrderBook store Order* as their element type. They share Order objects allocated on the heap, so either the OrderBook or the OrderMap may modify the order within.
I have two threads that do read and write operations on them: both threads can insert/modify/retrieve(read)/delete elements.
My question is: how can I protect this big "Vault" structure? I can protect the map or the list individually, but I don't know how to protect them both at the same time.
Can anybody give me some ideas?

Add a mutex and lock it before accessing either of the two.
Make them private, so you know all accesses go through your member functions (which take the lock).
Consider using std::shared_ptr instead of Order* (a sketch combining all three points follows below).
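A minimal sketch of that advice; the member names and key type here are assumptions, not taken from the question:

#include <list>
#include <memory>
#include <mutex>
#include <unordered_map>

class Order { /* ... */ };

class Vault {
    std::mutex m;  // one mutex guards both containers and the Orders they share
    std::unordered_map<int, std::shared_ptr<Order>> orderMap;
    std::list<std::shared_ptr<Order>> bids, asks;
public:
    void insert(int id, std::shared_ptr<Order> o) {
        std::lock_guard<std::mutex> lk(m);
        orderMap.emplace(id, o);       // both structures are updated
        bids.push_back(std::move(o));  // under the same lock
    }
    std::shared_ptr<Order> find(int id) {
        std::lock_guard<std::mutex> lk(m);
        auto it = orderMap.find(id);
        return it == orderMap.end() ? nullptr : it->second;
    }
};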

I don't think I've ever seen multi-threaded usage of an order book. I really think you'd be better off using it in one thread only.
But, to get back to your question, I'll assume you're staying with 2 threads for whatever reason.
This data structure is too complex to make lock-free. So these are the multi-threading options I can see:
1. If you use a single Vault instance for both threads you will have to lock around it. I assume you don't mind burning some CPU time, so I strongly suggest you use a spin lock, not a mutex.
2. If you can allow having 2 Vault instances, this can improve things: every thread keeps its own private instance and exchanges modifications with the other thread using some other means, such as a lock-free queue.
3. If your book is fast enough to copy, you can have a single central Vault pointer, make a copy on every update (or group of updates), CAS onto that central pointer, and reuse the old one so you don't have to allocate every time. You'll end up with one spare instance per thread. Like this:
// Assumed declarations: the central pointer must be atomic for the CAS.
std::atomic<Vault*> centralVault;
thread_local Vault* cachedVault = nullptr;

Vault* old_ptr = centralVault.load();
Vault* new_ptr = cachedVault ? cachedVault : new Vault;
do {
    *new_ptr = *old_ptr;   // copy the current book...
    makeChanges(new_ptr);  // ...and apply this thread's updates
}
while (!centralVault.compare_exchange_weak(old_ptr, new_ptr));
// on failure, compare_exchange_weak reloads old_ptr, so the loop re-copies
cachedVault = old_ptr;     // keep the displaced instance for reuse

Related

How to safely access and write to a complex container from multiple threads in parallel?

I have a case where there is an unordered_map of structs. The struct contains int(s), bool(s) and a vector. My program fetches data for each item in the map either through an https call to a server or using a websocket (separate https calls are required for each item in the map). When using the websocket, data for all items in the map is returned together. The fetched data is processed and stored in the respective vectors.
The websocket runs in a separate thread and should run throughout the lifetime of the program.
My program has a delete function which can "empty" the entire map. There is also an addItem() function, which adds a new struct to the map.
Whenever the "updatesOn" member of a struct is false, no data is pushed into its vector.
My current implementation has 3 threads:
The main thread adds new items to the map. Another function of the main thread fetches data from the vector in a struct. The main thread also has a function to empty the map and start again, and another function which only empties a vector.
The second thread runs the websocket client and fills up the vector in each struct as new data arrives. A while loop checks an exit flag; once the exit flag is set by the main thread, this thread terminates.
The third thread is the manager thread. It looks for new entries in the map, does the http download, and then adds the item to the websocket for subsequent data updates. It also runs http downloads at regular intervals, emptying the vector and refilling it.
Right now I have two mutexes.
One is locked before data is written to or read from a vector.
The second is locked when new data is added to or removed from the map, and also when the map is emptied.
I sense this is the wrong usage of mutexes, as I may empty the map while one of the vector elements of its structs is being read or written. That pushes me towards using one mutex for everything.
The problem is that this is a real-time stock data program, i.e. new data pops in every second, sometimes even faster. I am afraid one mutex lock for everything could slow down my entire app.
As described above, all 3 threads have write access to this map, with the main thread capable of emptying it completely.
Keeping in mind speed and thread safety, what would be a good way to implement this?
My data members:
unordered_map<string, tickerDiary> tDiaries;

struct tickerDiary {
    tickerDiary() : name(""), ohlcPeriodicity("minute"), ohlcStatus(false), updatesOn(true), ohlcDayBarIndex(0), rtStatus(false) {}
    string name;
    string ohlcPeriodicity;
    bool ohlcStatus;
    bool rtStatus;
    bool updatesOn;
    int32 ohlcDayBarIndex;
    vector<Quotation> data;
};

struct Quotation {
    union AmiDate DateTime;
    float Price;
    float Open;
    float High;
    float Low;
    float Volume;
    float OpenInterest;
    float AuxData1;
    float AuxData2;
};
Note: I am using C++11.
If I understand your question correctly, your map itself is primarily written by the main thread, and the other threads are only used to operate on the data contained within entries in the map.
Given that, for the non-main threads there are two concerns:
The item that they work on should not randomly disappear.
They should be the only one working on their item.
The first of these can most efficiently be solved by decoupling the storage from the map. So for each item, storage is allocated separately (either through the default allocator, or through some pooling scheme if you add/remove items a lot), and the map only stores a shared_ptr. Each thread working on an item then just needs to keep a shared_ptr around, to make sure the storage will not disappear out from under it.
With that in place, acquiring the map's associated mutex/shared_mutex is only necessary for the duration of fetching, storing, or removing the pointers. This works fine so long as it is acceptable that some threads may waste a little time acting on items already removed from the map. The shared_ptrs make sure you won't leak memory, via their reference counters, and they also do the locking/unlocking for those refcounts (or rather, they try to use more efficient platform primitives for them). If you want to know more about shared_ptr, and smart pointers in general, this is a reasonable introduction to the C++ system of smart pointers.
That leaves the second problem, which is probably most easily resolved by keeping a mutex in the data struct (tickerDiary) itself, which a thread acquires when starting operations that require predictable behavior from the struct, and releases once it has done what it needed to do.
Separating the locking this way should reduce the amount of contention on the global lock for the map. However, you should probably benchmark your code to see whether that reduction is worth it given the extra costs of the allocations and refcounts for the individual items.
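A minimal sketch of this layout, assuming C++11 (so a plain std::mutex guards the map); the wrapper class and its methods are illustrative, not prescribed:

#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

struct tickerDiary {
    std::mutex m;  // per-item mutex for the second concern
    // ... the fields from the question ...
};

class Diaries {
    std::mutex mapMutex;  // guards only the map structure itself
    std::unordered_map<std::string, std::shared_ptr<tickerDiary>> tDiaries;
public:
    std::shared_ptr<tickerDiary> find(const std::string& name) {
        std::lock_guard<std::mutex> lk(mapMutex);  // held only for the lookup
        auto it = tDiaries.find(name);
        return it == tDiaries.end() ? nullptr : it->second;
    }
    void add(const std::string& name) {
        std::lock_guard<std::mutex> lk(mapMutex);
        tDiaries.emplace(name, std::make_shared<tickerDiary>());
    }
    void clear() {
        std::lock_guard<std::mutex> lk(mapMutex);
        tDiaries.clear();  // threads holding a shared_ptr keep their entry alive
    }
};

A worker thread then locks only the item it holds, e.g. if (auto d = diaries.find("AAPL")) { std::lock_guard<std::mutex> lk(d->m); /* use *d */ }.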
I don't think std::vector is the right collection here, but if you insist on using it, you should just have one mutex per collection.
I would recommend concurrent_vector from Intel TBB, or a synchronized data structure from Boost.
A third option would be implementing your own concurrent vector.

Should I use different mutexes for different objects?

I am new to threading. Correct me if I am wrong: a mutex locks access to a shared data structure so that it cannot be used by other threads until it is unlocked. So, let's say there are 2 or more shared data structures. Should I make different mutex objects for the different data structures? If not, how will std::mutex know which object it should lock? And what if I have to lock more than one object at the same time?
There are several points in your question that can be made more precise. Perhaps clearing this will solve things for you.
To begin with, a mutex, by itself, does not lock access to anything. It is basically something that your code can lock and unlock, and some "magic" ensures that only one thread can lock it at a time.
If, by convention, you decide that any code accessing some data structure foo will first begin by locking a mutex foo_mutex, then it will have the effect of protecting this data structure.
So, having said that, regarding your questions:
It depends on whether the two data structures need to be accessed together or not (e.g., can updating one without the other leave the system in an inconsistent state). If so, you should lock them with a single mutex. If not, you can improve parallelism by using two.
The mutex does not lock anything on its own. It is you who decide, by convention, whether you can access 1, 2, or a million data structures while holding it.
If you always need to access both structures, they can be considered a single resource, so only a single lock is needed.
If you sometimes, even just once, need to access one of the structures independently, then they can no longer be considered a single resource and you might need two locks. Of course, a single lock could still be sufficient, but then that lock would cover both resources at once, prohibiting other threads from accessing either structure.
A mutex does not "know" anything other than about itself; the lock is performed on the mutex itself.
If there are two objects (or pieces of code) that need synchronized access, but can be accessed at the same time, then you have the liberty to use just one mutex for both or one for each. If you use one mutex, they cannot be accessed at the same time from two different threads.
If access to one object is never required while accessing the other object, you can use two mutexes, one for each. But if one object must sometimes be accessed while the thread already holds the other mutex, then care must be taken that the code can never reach a deadlock, where two threads hold one mutex each and both wait for the other's mutex to be released.
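When a thread really does have to hold both mutexes at once, the standard library can acquire them without risking that deadlock. A minimal sketch (the structures being protected are left as comments, since they are placeholders):

#include <mutex>

std::mutex mutexA, mutexB;  // one mutex per shared data structure

void updateBoth() {
    std::lock(mutexA, mutexB);  // locks both without deadlock, whatever order other threads use
    std::lock_guard<std::mutex> lkA(mutexA, std::adopt_lock);
    std::lock_guard<std::mutex> lkB(mutexB, std::adopt_lock);
    // ... modify both protected structures together ...
}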

passing "this" to a thread c++

What is the best way of performing the following in C++? Whilst my current method works, I'm not sure it's the best way to go:
1) I have a master class that has some functions in it
2) I have a thread that takes some instructions on a socket and then runs one of the functions in the master class
3) There are a number of threads that access various functions in the master class
I create the master class and then create instances of the thread classes from the master. The constructor for a thread class gets passed the "this" pointer of the master. I can then run functions from the master class inside the threads, i.e. I get a command to do something, which runs a function of the master class from the thread. I have mutexes etc. to prevent race problems.
Am I going about this the wrong way? It kind of seems like the thread classes should inherit from the master class; another approach would be to not have separate thread classes, but just have them as functions of the master class, though that gets ugly.
Sounds good to me. In my servers, it is called the 'SCB' - ServerControlBlock - and provides access to services like the IOCP buffer/socket pools, the logger, UI access for status/error messages, and anything else that needs to be common to all the handler threads. It works fine and I don't see it as a hack.
I create the SCB (and ensure in the ctor that all services accessed through it are started and ready for use) before creating the thread pool that uses the SCB - no nasty singletonny stuff.
Rgds,
Martin
Separate thread classes are pretty normal, especially if they have specific functionality. I wouldn't inherit them from the master class, though.
Passing the this pointer to threads is not, in itself, bad. What you do with it can be.
The this pointer is just like any other POD-ish data type: it's just a chunk of bits. What this points to, however, may be more than PODs, and passing what is in effect a pointer to its members can be dangerous for all the usual reasons. Any time you share anything across threads, you introduce potential race conditions and deadlocks. The elementary means of resolving those conflicts is, of course, to introduce synchronization in the form of mutexes, semaphores, etc., but this can have the surprising effect of serializing your application.
Say you have one thread reading data from a socket and storing it to a synchronized command buffer, and another thread which reads from that command buffer. Both threads use the same mutex, which protects the buffer. All is well, right?
Well, maybe not. Your threads can become serialized if you're not very careful with how you lock the buffer. Presumably you created separate threads for the buffer-insert and buffer-remove code so that they could run in parallel; but if you lock the buffer for each insert and each remove, only one of those operations can execute at a time. As long as you're writing to the buffer, you can't read from it, and vice versa.
You can try to fine-tune the locks so that they are as brief as possible, but so long as you have shared, synchronized data, you will have some degree of serialization.
Another approach is to hand data off to another thread explicitly, and remove as much data sharing as possible. Instead of writing to and reading from a buffer as in the above, for example, your socket code might create some kind of Command object on the heap (e.g. Command* cmd = new Command(...);) and pass that off to the other thread. (One way to do this on Windows is via the QueueUserAPC mechanism.)
There are pros & cons to both approaches. The synchronization method has the benefit of being somewhat simpler to understand and implement at the surface, but the potential drawback of being much more difficult to debug if you mess something up. The hand-off method can make many of the problems inherent with synchronization impossible (thereby actually making it simpler), but it takes time to allocate memory on the heap.
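For the hand-off approach, a portable alternative to QueueUserAPC is a small condition-variable queue. A minimal sketch (Command here stands in for whatever the socket thread actually parses):

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Command { /* ... parsed socket instruction ... */ };

class CommandQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::unique_ptr<Command>> q;
public:
    void push(std::unique_ptr<Command> cmd) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(cmd)); }
        cv.notify_one();  // wake the consumer after releasing the lock
    }
    std::unique_ptr<Command> pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        auto cmd = std::move(q.front());
        q.pop();
        return cmd;  // ownership moves to the worker thread, no sharing
    }
};

The socket thread pushes a freshly allocated Command wrapped in a unique_ptr; the worker blocks in pop(). Since ownership transfers completely, no further locking of the Command itself is needed.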

How to synchronize access to many objects

I have a thread pool with some threads (e.g. as many as number of cores) that work on many objects, say thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads would try to access the same object, one of the threads has to wait.
Now I want to save some resources and be scalable, as there may be thousands of objects but still only a handful of threads. I'm thinking about a class design where each thread has some sort of mutex or lock object, and assigns the lock to an object when that object needs to be accessed. This would save resources, as I only have as many lock objects as I have threads.
Now comes the programming part, where I want to transfer this design into code, but don't know quite where to start. I'm programming in C++ and want to use Boost classes where possible, but self written classes that handle these special requirements are ok. How would I implement this?
My first idea was to have one boost::mutex object per thread, and give each object a boost::shared_ptr that is initially unset (or NULL). Now, when I want to access the object, I lock it by creating a scoped_lock object and assigning it to the shared_ptr. When the shared_ptr is already set, I wait on the present lock. This idea sounds like a heap full of race conditions, so I have sort of abandoned it. Is there another way to accomplish this design? A completely different way?
Edit:
The above description is a bit abstract, so let me add a specific example. Imagine a virtual world with many objects (think > 100,000). Users moving in the world can move through it and modify objects (e.g. shoot arrows at monsters). When only using one thread, I'm fine with a work queue where modifications to objects are queued. I want a more scalable design, though. If 128-core processors are available, I want to use all 128 cores, so use that number of threads, each with its own work queue. One solution would be to use spatial separation, e.g. one lock per area. That could reduce the number of locks used, but I'm more interested in whether there's a design which uses as few locks as possible.
You could use a mutex pool instead of allocating one mutex per resource or one mutex per thread. As mutexes are requested, first check the object in question. If it already has a mutex tagged to it, block on that mutex. If not, assign a mutex to that object and signal it, taking the mutex out of the pool. Once the mutex is unsignaled, clear the slot and return the mutex to the pool.
Without knowing it, what you were looking for is Software Transactional Memory (STM).
STM systems manage the needed locks internally to ensure the ACI properties (Atomic, Consistent, Isolated). This is a research activity, and you can find a lot of STM libraries; in particular I'm working on Boost.STM (the library is not yet ready for beta test, and the documentation is not really up to date, but you can play with it). There are also some compilers that are introducing TM (such as the Intel, IBM, and Sun compilers). You can get the draft specification from here
The idea is to identify the critical regions as follows
transaction {
    // transactional block
}
and let the STM system manage the needed locks, as long as it ensures the ACI properties.
The Boost.STM approach lets you write things like
int inc_and_ret(stm::object<int>& i) {
    BOOST_STM_TRANSACTION {
        return ++i;
    } BOOST_STM_END_TRANSACTION
}
You can see the pair BOOST_STM_TRANSACTION/BOOST_STM_END_TRANSACTION as a way to delimit a scoped implicit lock.
The cost of this pseudo-transparency is 4 bytes of metadata for each stm::object.
Even if this is far from your initial design, I really think it is what was behind your goal and initial design.
I doubt there's any clean way to accomplish your design. The problem is that assigning the mutex to the object looks like it will modify the contents of the object - so you need a mutex to protect the object from several threads trying to assign mutexes to it at once, meaning that to keep your first mutex assignment safe, you'd need another mutex to protect the first one.
Personally, I think what you're trying to cure probably isn't a problem in the first place. Before I spent much time on trying to fix it, I'd do a bit of testing to see what (if anything) you lose by simply including a mutex in each object and being done with it. I doubt you'll need to go any further than that.
If you need to do more than that I'd think of having a thread-safe pool of objects, and anytime a thread wants to operate on an object, it has to obtain ownership from that pool. The call to obtain ownership would release any object currently owned by the requesting thread (to avoid deadlocks), and then give it ownership of the requested object (blocking if the object is currently owned by another thread). The object pool manager would probably operate in a thread by itself, automatically serializing all access to the pool management, so the pool management code could avoid having to lock access to the variables telling it who currently owns what object and such.
Personally, here's what I would do. You have a number of objects, which probably all have a key of some sort - say, names. So take the following list of people's names:
Bill Clinton
Bill Cosby
John Doe
Abraham Lincoln
Jon Stewart
So now you would create a number of lists: one per letter of the alphabet, say. Bill Clinton and Bill Cosby would go in one list; John Doe, Jon Stewart, and Abraham Lincoln would each be on their own.
Each list would be assigned to a specific thread - access would have to go through that thread (you would have to marshal operations on an object onto that thread - a great use of functors). Then you only have two places to lock:
thread() {
    loop {
        scoped_lock lock(list.mutex);
        list.objectAccess();
    }
}

list_add() {
    scoped_lock lock(list.mutex);
    list.add(..);
}
Keep the locks to a minimum, and if you're still doing a lot of locking, you can optimise the number of iterations you perform on the objects in your lists from 1 to 5, to minimise the amount of time spent acquiring locks. If your data set grows or is keyed by number, you can do any amount of segregating of the data to keep the locking to a minimum.
It sounds to me like you need a work queue. If the lock on the work queue becomes a bottleneck, you could switch things around so that each thread has its own work queue, and some sort of scheduler gives each incoming object to the thread with the least amount of work to do. The next level up from that is work stealing, where threads that have run out of work look at the work queues of other threads (see Intel's Threading Building Blocks library).
If I follow you correctly ....
struct table_entry {
    void *pObject;   // substitute with your object
    sem_t sem;       // init to empty (zero)
    int   nPenders;  // init to zero
};

struct table_entry *table;

void object_lock(void *pObject) {
    struct table_entry *pEntry;
    int found;
    goto label;  // yes, it is an evil goto
    do {
        pEntry->nPenders++;
        unlock(mutex);
        sem_wait(&pEntry->sem);  // sleep until the current holder posts
label:
        lock(mutex);
        found = search(table, pObject, &pEntry);
    } while (found);
    add_object_to_table(table, pObject);
    unlock(mutex);
}

void object_unlock(void *pObject) {
    struct table_entry *pEntry;
    lock(mutex);
    pEntry = remove(table, pObject);  // assuming it is in the table
    if (pEntry->nPenders != 0) {
        pEntry->nPenders--;
        sem_post(&pEntry->sem);
    }
    unlock(mutex);
}
The above should work, but it does have some potential drawbacks, such as ...
A possible bottleneck in the search.
Thread starvation: there is no guarantee that any given thread will get out of the do-while loop in object_lock().
However, depending upon your setup, these potential drawbacks might not matter.
Hope this helps.
We have an interest in a similar model here. A solution we have considered is to have a global (or shared) lock, used in the following manner:
A flag that can be atomically set on the object. If you set the flag, you then own the object.
You perform your action, then reset the flag and signal (broadcast) a condition variable.
If the acquire fails, you wait on the condition variable; when it is broadcast, you check the flag again to see if the object has become available.
It does appear, though, that we need to lock the mutex each time we change the value of this flag. So there is a lot of locking and unlocking, but you do not need to hold the lock for any long period.
With a "shared" lock you have one lock applying to multiple items. You would use some kind of "hash" function to determine which mutex/condition variable applies to a particular entry.
Answering the following question, asked by @LeonardoBernardini under @JohnDibling's post:
"Did you implement this solution? I have a similar problem and I would like to know how you solved releasing the mutex back to the pool. I mean, how do you know, when you release the mutex, that it can safely be put back in the queue if you do not know whether another thread is holding it?"
I'm currently trying to solve the same kind of problem. My approach is to create your own mutex struct (call it counterMutex) with a counter field and the real resource mutex field. So every time you try to lock the counterMutex, first you increment the counter, then lock the underlying mutex. When you're done with it, you decrement the counter and unlock the mutex; after that, check the counter to see if it's zero, which means no other thread is trying to acquire the lock. If so, put the counterMutex back in the pool. Is there a race condition when manipulating the counter, you may ask? The answer is no: remember you have a global mutex to ensure that only one thread can access the counterMutex at a time.
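A minimal sketch of that counterMutex idea (the names, and the global pool mutex, are assumptions):

#include <mutex>

struct CounterMutex {
    int counter = 0;  // threads currently using or waiting on `m`
    std::mutex m;     // the real resource mutex
};

std::mutex poolMutex;  // the global mutex guarding the pool and every counter

void lockResource(CounterMutex& cm) {
    {
        std::lock_guard<std::mutex> lk(poolMutex);
        cm.counter++;  // registered as a user before possibly blocking
    }
    cm.m.lock();
}

bool unlockResource(CounterMutex& cm) {
    cm.m.unlock();
    std::lock_guard<std::mutex> lk(poolMutex);
    cm.counter--;
    return cm.counter == 0;  // true: safe to return cm to the pool
}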

Thread Safe Access to Data Shared Between Objects

I'm something of an intermediate programmer, but a relative novice at multi-threading.
At the moment, I'm working on an application with a structure similar to the following:
class Client
{
public:
    Client();
private:
    // These are all initialised/populated in the constructor.
    std::vector<struct clientInfo> otherClientsInfo;
    ClientUI* clientUI;
    ClientConnector* clientConnector;
};

class ClientUI
{
public:
    ClientUI(std::vector<struct clientInfo>* clientsInfo);
private:
    // Callback which gets new client information
    // from a server and pushes it into the otherClientsInfo vector.
    void synchClientInfo();
    std::vector<struct clientInfo>* otherClientsInfo;
};

class ClientConnector
{
public:
    ClientConnector(std::vector<struct clientInfo>* clientsInfo);
private:
    void connectToClients();
    std::vector<struct clientInfo>* otherClientsInfo;
};
Somewhat of a contrived example, I know. The program flow is this:
Client is constructed, populates otherClientsInfo, and constructs clientUI and clientConnector with a pointer to otherClientsInfo.
clientUI calls synchClientInfo() any time the server contacts it with new client information, parsing the new data and pushing it into otherClientsInfo or removing an element.
clientConnector will access each element in otherClientsInfo when connectToClients() is called but won't alter them.
My first question: if both ClientUI and ClientConnector access otherClientsInfo at the same time, will the program bomb out because of thread-unsafety?
If this is the case, then how would I go about making access to otherClientsInfo thread safe, perhaps by somehow locking it while one object accesses it?
If both ClientUI and ClientConnector access otherClientsInfo at the same time, will the program bomb out because of thread-unsafety?
Yes. Most implementations of std::vector do not allow concurrent reading and modification. (You'd know if you were using one which did.)
If this is the case, then how would I go about making access to otherClientsInfo thread safe, perhaps by somehow locking it while one object accesses it?
You would require at least a lock (either a simple mutex, a critical section, or a read/write lock) to be held whenever the vector is accessed. Since you have only one reader and one writer, there's no point in having a read/write lock.
However, actually doing that correctly will get increasingly difficult while you are exposing the vector to the other classes: you would have to expose the locking primitive too, and remember to acquire it whenever you use the vector. It may be better to expose addClientInfo, removeClientInfo, and const and non-const foreachClientInfo functions which encapsulate the locking in the Client class, rather than having disjoint bits of the client's data floating around the place.
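A sketch of what that encapsulation might look like (the exact signatures here are assumptions, not prescribed):

#include <algorithm>
#include <mutex>
#include <vector>

struct clientInfo { /* ... as in the question ... */ };

class ClientList {
    mutable std::mutex m;
    std::vector<clientInfo> clients;
public:
    void addClientInfo(const clientInfo& ci) {
        std::lock_guard<std::mutex> lk(m);
        clients.push_back(ci);
    }
    template <typename Pred>
    void removeClientInfo(Pred match) {
        std::lock_guard<std::mutex> lk(m);
        clients.erase(std::remove_if(clients.begin(), clients.end(), match),
                      clients.end());
    }
    template <typename F>
    void foreachClientInfo(F f) const {
        std::lock_guard<std::mutex> lk(m);  // held for the whole traversal
        for (const auto& ci : clients) f(ci);
    }
};

ClientUI and ClientConnector would then hold a ClientList* instead of a bare vector pointer, and never touch the vector or the mutex directly.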
See
Reader/Writer Locks in C++
and
http://msdn.microsoft.com/en-us/library/ms682530%28VS.85%29.aspx
The first one is probably a bit advanced for you; you can start with the critical section (link 2).
I am assuming you are using Windows.
if both ClientUI and ClientConnector access otherClientsInfo at the same time, will the program bomb out because of thread-unsafety?
Yes, STL containers are not thread-safe.
If this is the case, then how would I go about making access to otherClientsInfo thread safe, perhaps by somehow locking it while one object accesses it?
In the simplest case, a mutual-exclusion pattern around the access to the shared data; if you had multiple readers, however, you would go for a more efficient pattern.
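For the multiple-reader case, a read/write lock is the usual "more efficient pattern". A minimal sketch using C++14's std::shared_timed_mutex (boost::shared_mutex is the pre-standard equivalent):

#include <shared_mutex>  // C++14
#include <vector>

struct clientInfo { /* ... */ };

std::shared_timed_mutex rwMutex;
std::vector<clientInfo> otherClientsInfo;

void reader() {
    std::shared_lock<std::shared_timed_mutex> lk(rwMutex);  // many readers at once
    // ... iterate over otherClientsInfo ...
}

void writer() {
    std::unique_lock<std::shared_timed_mutex> lk(rwMutex);  // exclusive access
    // ... push_back into / erase from otherClientsInfo ...
}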
Is clientConnector called from the same thread as synchClientInfo() (even if it is all callback)?
If so, you don't need to worry about thread safety at all.
If you want to avoid simultaneous access to the same data, you can use mutexes to protect the critical section, for example mutexes from Boost.Thread.
In order to ensure that access to the otherClientsInfo member from multiple threads is safe, you need to protect it with a mutex. I wrote an article about how to directly associate an object with a mutex in C++ over on the Dr Dobb's website:
http://www.drdobbs.com/cpp/225200269