Multithreading in an MMORPG - C++

I want to use multithreading in my MMORPG in C++. I've got 5 threads at the moment, and I want to split another one in two, but my MMORPG server consists of loads of vectors, and because the vectors aren't safe for concurrent writes, I can't do it properly.
Is there an alternative to using vectors across threads, or is there a way to make the vector reads/writes thread-safe?
Here's an example of what I want; try to find an alternative to something like this.
Obviously this isn't actual code, I'm just making an example.
//Thread 1
//Load a monster and send its data to the player
globals::monstername[myid]; //myid = 1 for now -.-
senddata(globals::monstername[myid]); //Not the actual networking code, I'm just lazy.
//Thread 2
//Create a monster and manage it
globals::monstername.push_back("FatBlobMonster");
//More managing code I can't be bothered inserting >.<

Two things.
Don't store shared data in one big data structure that gets locked completely. Lock just parts of it. For example, if you must use vectors, then create a set of locks for regions of the vector. Say I have 1000 entries, I might create 10 locks, that each lock 100 consecutive entries. But you can probably do better. For example, store your monsters in a hash table, where each "bucket" in your hash table has its own lock.
Use a "read/write" lock. It is possible to create a type of lock that allows multiple readers but only a single writer. So each hash bucket might have a read/write lock. If no monsters are being created in a particular bucket, then multiple threads can read monsters out of that bucket simultaneously. If you need to hash a new monster into the bucket, you lock the bucket for writing: the lock waits for all current readers to release, and does not admit new readers until the write is complete. Once there are no more readers, the write operation proceeds.

I am not aware of any standard thread-safe vector class. However, you could create one yourself that wraps std::vector together with a std::mutex (in C++11):
template <typename T>
class LockedVector {
private:
    std::mutex m;
    std::vector<T> vec;
public:
    void push_back(const T& value) {
        std::lock_guard<std::mutex> lock(m);
        vec.push_back(value);
    }
};
You lock the mutex with std::lock_guard (std::lock is for acquiring several mutexes at once).

Related

How to safely access and write to a complex container from multiple threads in parallel?

I have a case where there is an unordered_map of structs. The struct contains int(s), bool(s) and a vector. My program will fetch data for each item in the map either through an https call to a server or using a websocket (separate https calls are required for each item in the map). When using the websocket, data for all items in the map is returned together. The fetched data is processed and stored in the respective vectors.
The websocket is running in a separate thread and should run throughout the lifetime of the program.
My program has a delete function which can "empty" the entire map. There is also an addItem() function, which will add a new struct to my map.
Whenever "updatesOn" member of struct is false, no data is pushed into the vector.
My current implementation has 3 threads:
main thread will add new items to the map. Another function of the main thread is to fetch data from the vector in a struct. The main thread has a function to empty the map and start again, and another function which only empties the vector.
second thread will run the websocket client and fill up the vector in the struct as new data arrives. There is a while loop which checks an exit flag; once the exit flag is set in the main thread, this thread terminates.
third thread is the manager thread. It looks for new entries in the map, does the http download, and then adds the item to the websocket for subsequent data updates. It also runs http downloads at regular intervals, emptying the vector and refilling it.
Right now I have two mutexes.
One is locked before data is written to or read from the vector.
The second is used when new data is added to or removed from the map, and when the map is emptied.
I sense this is the wrong usage of mutexes, as I may empty the map while one of the vector elements of its struct is being read or written. That pushes me toward using one mutex for everything.
The problem is that this is a realtime stock data program, i.e. new data pops in every second, sometimes even faster. I am afraid one mutex lock for everything could slow down my entire app.
As described above, all 3 threads have write access to this map, with the main thread capable of emptying it completely.
Keeping in mind speed and thread safety, what would be a good way to implement this?
My data members:
unordered_map<string, tickerDiary> tDiaries;

struct tickerDiary {
    tickerDiary() : name(""), ohlcPeriodicity("minute"), ohlcStatus(false), updatesOn(true), ohlcDayBarIndex(0), rtStatus(false) {}
    string name;
    string ohlcPeriodicity;
    bool ohlcStatus;
    bool rtStatus;
    bool updatesOn;
    int32 ohlcDayBarIndex;
    vector<Quotation> data;
};

struct Quotation {
    union AmiDate DateTime;
    float Price;
    float Open;
    float High;
    float Low;
    float Volume;
    float OpenInterest;
    float AuxData1;
    float AuxData2;
};
Note: I am using C++11.
If I understand your question correctly, your map itself is primarily written in the main thread, and the other threads are only used to operate on the data contained within entries in the map.
Given that, for the non-main threads there are two concerns:
The item that they work on should not randomly disappear
They should be the only one working on their item.
The first of these can most efficiently be solved by decoupling the storage from the map. For each item, storage is allocated separately (either through the default allocator, or through some pooling scheme if you add/remove items a lot), and the map only stores a shared_ptr. Each thread working on an item then just needs to keep a shared_ptr of its own to make sure the storage will not disappear out from under it. Acquiring the map's associated mutex/shared_mutex is then only necessary for the duration of the fetch/store/remove of the pointers. This works well so long as it is acceptable that some threads may waste some time acting on items already removed from the map. The shared_ptrs make sure you won't leak memory, via their reference counts, and they also do the locking/unlocking for those refcounts (or rather, they use the most efficient platform primitives available for them). If you want to know more about shared_ptr, and smart pointers in general, this is a reasonable introduction to the C++ system of smart pointers.
That leaves the second problem, which is probably most easily resolved by keeping a mutex in the data struct (tickerDiary) itself, that threads acquire when starting to do operations that require predictable behavior from the struct, and can be released after they have done what they should do.
Separating the locking this way should reduce the amount of contention on the global lock for the map. However, you should probably benchmark your code to see whether that reduction is worth it given the extra costs of the allocations and refcounts for the individual items.
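A minimal sketch of that decoupling, assuming C++11; the class and member names are illustrative, and a plain double stands in for Quotation:

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

struct TickerDiary {
    std::mutex m;              // per-item lock (the second concern)
    std::vector<double> data;  // stand-in for vector<Quotation>
};

class DiaryMap {
    std::mutex mapMutex;       // guards only the map itself
    std::unordered_map<std::string, std::shared_ptr<TickerDiary>> diaries;

public:
    // Fetch (or create) an entry; the map lock is held only for the lookup.
    std::shared_ptr<TickerDiary> fetchDiary(const std::string& name) {
        std::lock_guard<std::mutex> lock(mapMutex);
        std::shared_ptr<TickerDiary>& p = diaries[name];
        if (!p) p = std::make_shared<TickerDiary>();
        return p;
    }

    void clearAll() {
        std::lock_guard<std::mutex> lock(mapMutex);
        diaries.clear();  // items stay alive while worker threads hold them
    }
};

// A worker thread keeps its own shared_ptr, then locks only that item.
inline void appendQuote(DiaryMap& m, const std::string& name, double q) {
    std::shared_ptr<TickerDiary> d = m.fetchDiary(name);
    std::lock_guard<std::mutex> lock(d->m);
    d->data.push_back(q);
}
```

Note that clearAll() never touches a per-item mutex, so emptying the map cannot race with a thread that is mid-write into a vector it already holds.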
I don't think std::vector is the right collection here, but if you insist on using it, you should just have one mutex for each collection.
I would recommend concurrent_vector from Intel TBB or a synchronized data structure from Boost.
A third option would be implementing your own concurrent vector.

Linux/C++, how to protect two data structures at the same time?

I'm developing some programs with C/C++ on Linux. My question is:
I have an upper-level class called Vault, inside of which there is an OrderMap, which uses an unordered_map as its data structure, and an OrderBook, which has 2 std::lists inside.
Both the OrderMap and the OrderBook store Order* as the element type; they share Order objects allocated on the heap, so either the OrderBook or the OrderMap may modify the orders within.
I have two threads that will do read and write operations on them. The 2 threads can insert/modify/retrieve(read)/delete the elements.
My question is: how can I protect this big "Vault" structure? I can protect the map or the list individually, but I don't know how to protect them both at the same time.
Can anybody give me some ideas?
Add a mutex and lock it before accessing any of the two.
Make them private, so you know accesses are made via your member functions (which have proper locks)
Consider using std::shared_ptr instead of Order *
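Those three points combined might look like this sketch (C++11; the Order fields and member names are illustrative):

```cpp
#include <list>
#include <memory>
#include <mutex>
#include <unordered_map>

struct Order {
    int id;
    double price;
};

class Vault {
    std::mutex m;  // one lock covers both containers, keeping them consistent
    std::unordered_map<int, std::shared_ptr<Order>> orderMap;
    std::list<std::shared_ptr<Order>> orderBook;

public:
    // All access goes through members, which take the lock first.
    void insert(std::shared_ptr<Order> o) {
        std::lock_guard<std::mutex> lock(m);
        orderMap[o->id] = o;
        orderBook.push_back(o);
    }

    std::shared_ptr<Order> find(int id) {
        std::lock_guard<std::mutex> lock(m);
        auto it = orderMap.find(id);
        return it == orderMap.end() ? nullptr : it->second;
    }
};
```

Because both containers are updated under the same mutex, no thread can ever observe the map and the book out of sync.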
I don't think I've ever seen a multi-thread usage of an order book. I really think you'd be better off using it in one thread only.
But, to get back to your question, I'll assume you're staying with 2 threads for whatever reasons.
This data structure is too complex to make lock-free. So these are the multi-threaded options I can see:
1. If you use a single Vault instance for both threads, you will have to lock around it. I assume you don't mind burning some CPU time, so I strongly suggest you use a spin lock, not a mutex.
2. If you can allow having 2 Vault instances, this can improve things, as every thread can keep its own private instance and exchange modifications with the other thread using some other means: a lock-free queue, or something else.
3. If your book is fast enough to copy, you can keep a single central Vault pointer, make a copy on every update (or group of updates), CAS the copy onto the central pointer, and reuse the old one so you don't have to allocate every time. You'll end up with one spare instance per thread. With centralVault as a std::atomic<Vault*> it looks like this (note that compare_exchange_weak reloads old_ptr when the CAS fails, which a hand-rolled cas() would also need to do):
Vault* new_ptr = cachedVault ? cachedVault : new Vault;
Vault* old_ptr = centralVault.load();
do {
    *new_ptr = *old_ptr;
    makeChanges(new_ptr);
} while (!centralVault.compare_exchange_weak(old_ptr, new_ptr));
cachedVault = old_ptr; // keep the displaced instance for reuse

Should I use different mutexes for different objects?

I am new to threading. Correct me if I am wrong: a mutex locks access to a shared data structure so that it cannot be used by other threads until it is unlocked. So, let's consider that there are 2 or more shared data structures. Should I make different mutex objects for different data structures? If not, how will std::mutex know which object it should lock? What if I have to lock more than 1 object at the same time?
There are several points in your question that can be made more precise. Perhaps clearing this will solve things for you.
To begin with, a mutex, by itself, does not lock access to anything. It is basically something that your code can lock and unlock, and some "magic" ensures that only one thread can lock it at a time.
If, by convention, you decide that any code accessing some data structure foo will first begin by locking a mutex foo_mutex, then it will have the effect of protecting this data structure.
So, having said that, regarding your questions:
It depends on whether the two data structures need to be accessed together or not (e.g., can updating one without the other leave the system in an inconsistent state). If so, you should lock them with a single mutex. If not, you can improve parallelism by using two.
The mutex does not lock anything. It is you who decide by convention whether you can access 1, 2, or a million data structures, while holding it.
If you always need to access both structures, then they can be considered a single resource, and only a single lock is needed.
If you sometimes, even just once, need to access one of the structures independently, then they can no longer be considered a single resource and you might need two locks. Of course, a single lock could still be sufficient, but then that lock would lock both resources at once, prohibiting other threads from accessing either of the structures.
A mutex does not "know" anything other than about itself; the lock is performed on the mutex itself.
If there are two objects (or pieces of code) that need synchronized access (but can be accessed at the same time), then you have the liberty to use just one mutex for both or one for each. If you use one mutex, they will not be accessed at the same time from two different threads.
If it cannot happen that access to one object is required while accessing the other object, then you can use two mutexes, one for each. But if one object may need to be accessed while the thread already holds the other mutex, then care must be taken that the code can never reach a deadlock, where two threads each hold one mutex while waiting for the other to be released.
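When a thread really does need both objects at once, the standard way to avoid that deadlock is std::lock, which acquires any number of mutexes without risk of deadlock regardless of the order other threads use (the two vectors here are just illustrative stand-ins for the shared structures):

```cpp
#include <mutex>
#include <vector>

std::mutex mutexA, mutexB;
std::vector<int> structureA, structureB;

// Update both structures atomically with respect to other threads
// that follow the same locking convention.
void updateBoth(int v) {
    std::lock(mutexA, mutexB);  // deadlock-free acquisition of both
    std::lock_guard<std::mutex> lockA(mutexA, std::adopt_lock);
    std::lock_guard<std::mutex> lockB(mutexB, std::adopt_lock);
    structureA.push_back(v);
    structureB.push_back(v);
}
```

In C++17, std::scoped_lock lock(mutexA, mutexB); collapses the three locking lines into one.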

Implementing thread-safe arrays

I want to implement an array-like data structure allowing multiple threads to modify/insert items simultaneously. How can I do this with good performance? I implemented a wrapper class around std::vector and used critical sections for synchronizing threads. Please have a look at my code below. Each time a thread wants to work on the internal data, it may have to wait for other threads. Hence, I think its performance is not good. :( Any ideas?
class parallelArray {
private:
    std::vector<int> data;
    zLock dataLock; // my predefined class for synchronizing

public:
    void insert(int val) {
        dataLock.lock();
        data.push_back(val);
        dataLock.unlock();
    }

    void modify(unsigned int index, int newVal) {
        dataLock.lock();
        data[index] = newVal; // assuming that the index is valid
        dataLock.unlock();
    }
};
Take a look at shared_mutex in the Boost library. This allows you to have multiple readers, but only one writer.
http://www.boost.org/doc/libs/1_47_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_types.shared_mutex
The best way is to use a fast reader-writer lock: you take a shared lock for read-only access and an exclusive lock for write access, so read-only accesses can proceed simultaneously.
In the user-mode Win32 API, Slim Reader/Writer (SRW) Locks are available in Vista and later.
Before Vista you have to implement the reader-writer lock functionality yourself, which is a fairly simple task: you can do it with one critical section, one event, and one enum/int value. A good implementation would require more effort, though - I would use a hand-crafted linked list of local (stack-allocated) structures to implement a fair waiting queue.

How to synchronize access to many objects

I have a thread pool with some threads (e.g. as many as number of cores) that work on many objects, say thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads would try to access the same object, one of the threads has to wait.
Now I want to save some resources and be scalable, as there may be thousands of objects but still only a handful of threads. I'm thinking about a class design where the thread has some sort of mutex or lock object, and assigns the lock to the object when the object should be accessed. This would save resources, as I only have as many lock objects as I have threads.
Now comes the programming part, where I want to transfer this design into code, but don't know quite where to start. I'm programming in C++ and want to use Boost classes where possible, but self written classes that handle these special requirements are ok. How would I implement this?
My first idea was to have a boost::mutex object per thread, and each object has a boost::shared_ptr that initially is unset (or NULL). Now when I want to access the object, I lock it by creating a scoped_lock object and assign it to the shared_ptr. When the shared_ptr is already set, I wait on the present lock. This idea sounds like a heap full of race conditions, so I sort of abandoned it. Is there another way to accomplish this design? A completely different way?
Edit:
The above description is a bit abstract, so let me add a specific example. Imagine a virtual world with many objects (think > 100,000). Users could move through the world and modify objects (e.g. shoot arrows at monsters). With only one thread, I'd be fine with a work queue where modifications to objects are queued. I want a more scalable design, though. If 128-core processors are available, I want to use all 128, so I'd use that number of threads, each with its own work queue. One solution would be spatial separation, e.g. one lock per area. That could reduce the number of locks used, but I'm more interested in a design which uses as few locks as possible.
You could use a mutex pool instead of allocating one mutex per resource or one mutex per thread. As mutexes are requested, first check the object in question. If it already has a mutex tagged to it, block on that mutex. If not, assign a mutex to that object and signal it, taking the mutex out of the pool. Once the mutex is unsignaled, clear the slot and return the mutex to the pool.
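A sketch of such a pool in C++11 (all names are illustrative; the user count decides when a mutex can safely go back to the pool):

```cpp
#include <map>
#include <mutex>
#include <vector>

class MutexPool {
    struct Entry {
        std::mutex* m;
        int users;  // threads that have requested this object's mutex
    };
    std::mutex poolMutex;                // guards the bookkeeping below
    std::vector<std::mutex*> freeList;   // mutexes not tagged to any object
    std::map<const void*, Entry> inUse;  // object -> its assigned mutex

public:
    void lock(const void* obj) {
        std::mutex* m;
        {
            std::lock_guard<std::mutex> g(poolMutex);
            std::map<const void*, Entry>::iterator it = inUse.find(obj);
            if (it == inUse.end()) {     // no mutex tagged to it: take one
                if (freeList.empty()) freeList.push_back(new std::mutex);
                m = freeList.back();
                freeList.pop_back();
                inUse[obj] = Entry{m, 1};
            } else {                     // already tagged: join the waiters
                ++it->second.users;
                m = it->second.m;
            }
        }
        m->lock();  // block here, outside the pool's own lock
    }

    void unlock(const void* obj) {
        std::lock_guard<std::mutex> g(poolMutex);
        Entry& e = inUse.at(obj);
        e.m->unlock();
        if (--e.users == 0) {            // last user: return it to the pool
            freeList.push_back(e.m);
            inUse.erase(obj);
        }
    }
};
```

A mutex is returned to the pool only once no thread still wants it, so the pool never grows beyond the number of objects being touched concurrently.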
Without knowing it, what you were looking for is Software Transactional Memory (STM).
STM systems manage the needed locks internally to ensure the ACI properties (Atomic, Consistent, Isolated). This is an active research area. You can find a lot of STM libraries; in particular I'm working on Boost.STM (the library is not yet ready for beta test, and the documentation is not really up to date, but you can play with it). There are also some compilers that are introducing TM (such as the Intel, IBM, and Sun compilers). You can get the draft specification from here.
The idea is to identify the critical regions as follows:
transaction {
    // transactional block
}
and let the STM system manage the needed locks, as long as it ensures the ACI properties.
The Boost.STM approach lets you write things like:
int inc_and_ret(stm::object<int>& i) {
    BOOST_STM_TRANSACTION {
        return ++i;
    } BOOST_STM_END_TRANSACTION
}
You can see the BOOST_STM_TRANSACTION/BOOST_STM_END_TRANSACTION pair as a way to delimit a scoped implicit lock.
The cost of this pseudo-transparency is 4 metadata bytes for each stm::object.
Even if this is far from your initial design, I really think it is what was behind your goal and initial design.
I doubt there's any clean way to accomplish your design. The problem is that assigning the mutex to the object looks like it'll modify the contents of the object - so you need a mutex to protect the object from several threads trying to assign mutexes to it at once. To keep your first mutex assignment safe, you'd need another mutex to protect the first one.
Personally, I think what you're trying to cure probably isn't a problem in the first place. Before I spent much time on trying to fix it, I'd do a bit of testing to see what (if anything) you lose by simply including a Mutex in each object and being done with it. I doubt you'll need to go any further than that.
If you need to do more than that I'd think of having a thread-safe pool of objects, and anytime a thread wants to operate on an object, it has to obtain ownership from that pool. The call to obtain ownership would release any object currently owned by the requesting thread (to avoid deadlocks), and then give it ownership of the requested object (blocking if the object is currently owned by another thread). The object pool manager would probably operate in a thread by itself, automatically serializing all access to the pool management, so the pool management code could avoid having to lock access to the variables telling it who currently owns what object and such.
Personally, here's what I would do. You have a number of objects, all probably have a key of some sort, say names. So take the following list of people's names:
Bill Clinton
Bill Cosby
John Doe
Abraham Lincoln
Jon Stewart
So now you would create a number of lists: one per letter of the alphabet, say. The two Bills would go in one list; John, Jon, and Abraham each by themselves.
Each list would be assigned to a specific thread - access would have to go through that thread (you would have to marshal operations on an object onto that thread - a great use of functors). Then you only have two places to lock:
thread() {
    loop {
        scoped_lock lock(list.mutex);
        list.objectAccess();
    }
}

list_add() {
    scoped_lock lock(list.mutex);
    list.add(..);
}
Keep the locks to a minimum, and if you're still doing a lot of locking, you can optimise the number of iterations you perform on the objects in your lists from 1 to 5, to minimize the amount of time spent acquiring locks. If your data set grows or is keyed by number, you can do any amount of segregating data to keep the locking to a minimum.
It sounds to me like you need a work queue. If the lock on the work queue became a bottleneck, you could switch it around so that each thread has its own work queue, and some sort of scheduler gives each incoming object to the thread with the least amount of work to do. The next level up from that is work stealing, where threads that have run out of work look at the work queues of other threads. (See Intel's Threading Building Blocks library.)
If I follow you correctly ....
struct table_entry {
    void * pObject;  // substitute with your object
    sem_t sem;       // init to empty
    int nPenders;    // init to zero
};

struct table_entry * table;

object_lock (void * pObject) {
    goto label;      // yes, it is an evil goto
    do {
        pEntry->nPenders++;
        unlock (mutex);
        sem_wait (&pEntry->sem);
label:
        lock (mutex);
        found = search (table, pObject, &pEntry);
    } while (found);
    add_object_to_table (table, pObject);
    unlock (mutex);
}

object_unlock (void * pObject) {
    lock (mutex);
    pEntry = remove (table, pObject); // assuming it is in the table
    if (pEntry->nPenders != 0) {
        pEntry->nPenders--;
        sem_post (&pEntry->sem);
    }
    unlock (mutex);
}
The above should work, but it does have some potential drawbacks such as ...
A possible bottleneck in the search.
Thread starvation. There is no guarantee that any given thread will get out of the do-while loop in object_lock().
However, depending upon your setup, these potential drawbacks might not matter.
Hope this helps.
We have an interest in a similar model here. A solution we considered is to have a global (or shared) lock, used in the following manner:
A flag that can be atomically set on the object. If you set the flag, you then own the object.
You perform your action, then reset the flag and signal (broadcast) a condition variable.
If the acquire failed, you wait on the condition variable; when it is broadcast, you check the flag again to see if the object is available.
It does appear, though, that we need to lock the mutex each time we change the value of this flag. So there is a lot of locking and unlocking, but you do not need to hold the lock for any long period.
With a "shared" lock you have one lock applying to multiple items. You would use some kind of "hash" function to determine which mutex/condition variable applies to a particular entry.
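For a single entry, that flag-plus-condition-variable scheme might be sketched like this in C++11 (the hash mentioned above would choose which such guard applies to a given item):

```cpp
#include <condition_variable>
#include <mutex>

class ObjectGuard {
    std::mutex m;                // held only while changing the flag
    std::condition_variable cv;
    bool busy;                   // set => some thread owns the object

public:
    ObjectGuard() : busy(false) {}

    void acquire() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !busy; });  // re-check on each broadcast
        busy = true;             // setting the flag means we own the object
    }

    void release() {
        {
            std::lock_guard<std::mutex> lock(m);
            busy = false;        // done: reset the flag...
        }
        cv.notify_all();         // ...and "broadcast" to any waiters
    }
};
```

The mutex is held only for the flag update itself, matching the "lots of locking, but never for long" observation above.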
Answer the following question under the #JohnDibling's post.
did you implement this solution ? I've a similar problem and I would like to know how you solved to release the mutex back to the pool. I mean, how do you know, when you release the mutex, that it can be safely put back in queue if you do not know if another thread is holding it ?
by #LeonardoBernardini
I'm currently trying to solve the same kind of problem. My approach is create your own mutex struct (call it counterMutex) with a counter field and the real resource mutex field. So every time you try to lock the counterMutex, first you increment the counter then lock the underlying mutex. When you're done with it, you decrement the coutner and unlock the mutex, after that check the counter to see if it's zero which means no other thread is trying to acquire the lock . If so put the counterMutex back to the pool. Is there a race condition when manipulating the counter? you may ask. The answer is NO. Remember you have a global mutex to ensure that only one thread can access the coutnerMutex at one time.