I have a std::map<std::string, std::string> that has values added to it at irregular intervals from one thread (frequently, and the inserts need to be very fast), and occasionally has groups of entries removed.
I need from a different thread to dump a snapshot of the map as text to a debug log on command from a user.
Clearly it's not thread safe to just iterate through the map outputting the debug information while it could be updated, so I'm currently taking a read lock (mutex) before dumping the data and a write lock for every insert or delete. This works, but I can't really lock the map for this long: it delays the processing of incoming updates too much.
I don't believe I can lock and unlock the debug-dump thread for each item, as I believe modifying the map from the other thread can invalidate the iterator.
Is there any way I can do this safely without having to take out a read lock on the whole data structure while I write it out, so that new values can still be inserted quickly? I realise I won't be able to get a guaranteed consistent view of the data if values can be added and removed while I'm iterating through it, but as long as it's safe that's understood.
If there is no way to use a map for this, can anyone suggest any other data structure I could use?
edit: I'm hoping for a solution that means I don't need to take out an expensive lock when adding an item.
There are two solutions I can see at the moment:
(Easy, but might still take too long): copy the map (or assign it to another container) while locked, then dump the local copy to the debug log while not locked (see the sketch just after this list)
(Some more work): delegate the updates of the map to another thread via a queue. If the other thread is the one that dumps to the debug log, then you don't need the locks anymore. This way the fast threads are only locked while accessing the queue.
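For option 1, here is a minimal sketch, assuming a pthread-style mutex guards the map; the names shared_map, map_mutex, and dumpSnapshot are illustrative, not from the original code:

#include <map>
#include <string>
#include <pthread.h>

static std::map<std::string, std::string> shared_map;            // the live map
static pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;

void dumpSnapshot()
{
    // Hold the lock only for the copy, not for the slow logging.
    pthread_mutex_lock(&map_mutex);
    std::map<std::string, std::string> snapshot(shared_map);
    pthread_mutex_unlock(&map_mutex);

    // Iterate the private copy with no lock held; inserts proceed meanwhile.
    for (std::map<std::string, std::string>::const_iterator it = snapshot.begin();
         it != snapshot.end(); ++it)
    {
        // write it->first / it->second to the debug log
    }
}

The copy is still O(n) under the lock, but it is a straight in-memory copy with no I/O, so it is typically far cheaper than formatting and writing the log entries while locked.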
I have a legacy multi-threaded application that I'm trying to move to a Linux platform and convert to C++.
I have a fixed size array of integers:
int R[5000];
And I perform a lot of operations like:
R[5] = (R[10] + R[20]) / 50;
R[5]++;
I have one foreground task that mostly reads the values, but on occasion can update one. And then I have a background worker that is updating the values constantly.
I need to make this structure thread safe.
I would rather only update the value if it has actually changed. The worker is constantly collecting data, doing calculations, and storing the data whether it changes or not.
So should I create a custom class MyInt which wraps the structure and includes an array of mutexes to lock for updating/reading each value, and then overload [], =, ++, +=, -=, etc.? Or should I try to implement an atomic integer array?
Any suggestions as to what that would look like? I'd like to try and keep the above notation for doing the updates...but I get that it might not be possible.
Thanks,
WB
The first thing to do is make the program work reliably, and the easiest way to do that is to have a Mutex that is used to control access to the entire array. That is, whenever either thread needs to read or write to anything in the array, it should do:
the_mutex.lock();
// do all the array-reads, calculations, and array-writes it needs to do
the_mutex.unlock();
... then test your program and see if it still runs fast enough for your needs. If so, you're done; that's all you need to do.
If you find that the program isn't fast enough due to contention on the mutex, you can start trying optimizations to make things faster. For example, if you know that your threads' operations will only need to work on local segments of the array at one time, you could create multiple mutexes, and assign different subsets of the array to each mutex (e.g. mutex #1 is used to serialize access to the first 100 array items, mutex #2 for the second 100 array items, etc). That will greatly decrease the chances of one thread having to wait for the other thread to release a mutex before it can continue.
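A rough sketch of that lock-striping idea, using pthreads; the stripe size and all names here are illustrative:

#include <pthread.h>

static int R[5000];
static pthread_mutex_t stripe[50];           // one mutex per 100 array items

void init_stripes()
{
    for (int i = 0; i < 50; ++i)
        pthread_mutex_init(&stripe[i], NULL);
}

int read_value(int i)
{
    pthread_mutex_lock(&stripe[i / 100]);
    int v = R[i];
    pthread_mutex_unlock(&stripe[i / 100]);
    return v;
}

void write_value(int i, int v)
{
    pthread_mutex_lock(&stripe[i / 100]);
    R[i] = v;
    pthread_mutex_unlock(&stripe[i / 100]);
}

Note that an expression like R[5] = (R[10] + R[20]) / 50 happens to touch a single stripe here, but an operation spanning several stripes must lock all of them, always in the same order, or two threads can deadlock.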
If things still aren't fast enough for you, you could then look in to having two different arrays, one for each thread, and occasionally copying from one array to the other. That way each thread could safely access its own private array without any serialization needed. The copying operation would need to be handled carefully, probably using some sort of inter-thread message-passing protocol.
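A minimal sketch of the two-array variant, where only the brief publish step needs a mutex; again the names are made up for illustration:

#include <pthread.h>
#include <cstring>

static int R_work[5000];      // touched only by the background worker
static int R_public[5000];    // read by the foreground task, under the lock
static pthread_mutex_t pub_mutex = PTHREAD_MUTEX_INITIALIZER;

void publish()                // the background worker calls this occasionally
{
    pthread_mutex_lock(&pub_mutex);
    std::memcpy(R_public, R_work, sizeof R_public);   // short, bounded critical section
    pthread_mutex_unlock(&pub_mutex);
}

int foreground_read(int i)    // the foreground task reads the published copy
{
    pthread_mutex_lock(&pub_mutex);
    int v = R_public[i];
    pthread_mutex_unlock(&pub_mutex);
    return v;
}

The occasional foreground write is the part that needs the inter-thread message-passing mentioned above, since that change has to reach the worker's private array.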
Environment: Windows 7.0, C++, multithreading
I have created a new worker thread to receive data on a socket and add it to a static multimap instance.
code snippet:
//remember mymultimap is static data type
static std::multimap<std::string, std::string> mymultimap;
EnterCriticalSection(&m_criticalsection);
mymultimap.insert(std::make_pair("aaa", "bbb"));
LeaveCriticalSection(&m_criticalsection);
At same time my main thread is reading same static multimap :
code snippet:
EnterCriticalSection(&m_criticalsection);
std::multimap<std::string, std::string>::iterator it = mymultimap.begin();
for ( ; it != mymultimap.end(); ++it)
{
    std::string firstName = it->first;
    std::string secondName = it->second;
}
LeaveCriticalSection(&m_criticalsection);
As the main and worker threads are continuously doing reads and writes, it hampers my application's performance.
Also, the multimap instance contains a huge amount of data (more than 10,000 records).
How can I hold the lock on the multimap for a minimal time?
EnterCriticalSection(&m_criticalsection);
///minimal lock time for Map ???
LeaveCriticalSection(&m_criticalsection);
Please help me improve my application's performance.
As it stands your question leaves too much room for discussion: we don't know how the values stored in your multimap are actually used.
If:
the order enforced in that data structure is important,
you need to keep the values in the multimap even after they have been read,
you need to go through all the entries each time you read,
then you are pretty much stuck as to how we can optimize the use of that structure.
On the other hand, if you can relax one of these requirements somehow, then you may have possibilities to optimize things a bit, for instance by using a message queue instead of the map directly for communication between both threads.
Message queues are a standard way to implement efficient communication between threads, and for one-to-one setups there are even lock-free solutions.
Update: thinking about it, sharing that type of structure across threads is not a good idea, whatever use you make of it. It is better to regroup all accesses to the multimap within one single thread, and thus have items generated by other threads passed on to the thread managing it through a queue. This completely decouples the work of generating the items from their storage and use. In your case, the producer thread will spend less time storing the data, which leaves it more time to handle the socket stream.
So, for that solution, you need a queue<std::pair<key, value> >, say a std::queue, handed to both threads at their initialization, or alternatively a static instance like the multimap one. Then simply replace the multimap::insert in the first thread with a queue::push of a make_pair(key, value); symmetrically, in the consumer thread, first pop all the pending pairs off the queue, inserting them into the map as you go, then run whatever processing of the map you need.
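A sketch of that hand-off, reusing the question's CRITICAL_SECTION; the function names and the swap-into-a-local-queue trick are illustrative choices, not the only way to do it:

#include <windows.h>
#include <algorithm>
#include <map>
#include <queue>
#include <string>
#include <utility>

static CRITICAL_SECTION m_criticalsection;   // InitializeCriticalSection() at startup
static std::queue< std::pair<std::string, std::string> > pending;
static std::multimap<std::string, std::string> mymultimap;  // now owned by the main thread

// Worker thread: the lock is held for a single cheap push.
void produce(const std::string &key, const std::string &value)
{
    EnterCriticalSection(&m_criticalsection);
    pending.push(std::make_pair(key, value));
    LeaveCriticalSection(&m_criticalsection);
}

// Main thread: take everything queued so far, then work on the
// multimap with no lock held at all.
void consume()
{
    std::queue< std::pair<std::string, std::string> > local;

    EnterCriticalSection(&m_criticalsection);
    std::swap(local, pending);               // the lock is held very briefly
    LeaveCriticalSection(&m_criticalsection);

    while (!local.empty())
    {
        mymultimap.insert(local.front());
        local.pop();
    }
    // ... iterate mymultimap freely here ...
}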
Note:
Please be aware that if you are using a multimap, you might end up with multiple values for the same key: the call to find will return an iterator, and you might well have to check the next entries of the multimap to make sure you get all the values with the same key.
I'm writing a threaded application that will process a list of resources and may or may not place a resulting item in a container (std::map) for each resource.
The processing of resources takes place in multiple threads.
The result container will be traversed and each item acted upon by a separate thread which takes an item and updates a MySQL database (using the mysqlcppconn API), then removes the item from the container and continues.
For simplicity's sake, here's the overview of the logic:
queueWorker() - thread
getResourcesList() - seeds the global queue
databaseWorker() - thread
commitProcessedResources() - commits results to a database every n seconds
processResources() - thread x <# of processor cores>
processResource()
queueResultItem()
And the pseudo-implementation to show what I'm doing.
/* not the actual structs, but just for simplicity's sake */
struct queue_item_t {
int id;
string hash;
string text;
};
struct result_item_t {
string hash; // hexadecimal sha1 digest
int state;
};
std::map< string, queue_item_t > queue;
std::map< string, result_item_t > results;
bool processResource (queue_item_t *item)
{
    result_item_t result;
    if (some_stuff_that_doesnt_apply_to_all_resources)
    {
        result.hash = item->hash;
        result.state = 1;
        /* PROBLEM IS HERE */
        queueResultItem(result);
        return true;
    }
    return false;
}
void commitProcessedResources ()
{
pthread_mutex_lock(&resultQueueMutex);
// this can take a while
for (std::map< string, result_item_t >::iterator it = results.begin(); it != results.end();)
{
// do mysql stuff that takes a while
results.erase(it++);
}
pthread_mutex_unlock(&resultQueueMutex);
}
void queueResultItem (result_item_t result)
{
pthread_mutex_lock(&resultQueueMutex);
results.insert(make_pair(result.hash, result));
pthread_mutex_unlock(&resultQueueMutex);
}
As indicated in processResource(), the problem is this: while commitProcessedResources() is running and resultQueueMutex is locked, any thread calling queueResultItem() will block trying to lock the same mutex, and will wait until commitProcessedResources() is done, which might take a while.
Since there is, obviously, a limited number of threads running, as soon as all of them are blocked in queueResultItem(), no more work gets done until the mutex is released and usable again.
So, my question is how I best go about implementing this. Is there a specific kind of standard container that can be inserted into and deleted from simultaneously, or does something exist that I just don't know of?
It is not strictly necessary that each queue item have its own unique key as is the case here with the std::map, but I would prefer it, since several resources can produce the same result and I would prefer to send only a unique result to the database, even if it does use INSERT IGNORE to ignore any duplicates.
I'm fairly new to C++ so I've no idea what to look for on Google, unfortunately. :(
You do not have to hold the lock for the queue all the time during processing in commitProcessedResources(). You can instead swap the queue with an empty one:
void commitProcessedResources ()
{
std::map< string, result_item_t > queue2;
pthread_mutex_lock(&resultQueueMutex);
// XXX Do a quick swap.
queue2.swap (results);
pthread_mutex_unlock(&resultQueueMutex);
// this can take a while
for (std::map< string, result_item_t >::iterator it = queue2.begin();
it != queue2.end();)
{
// do mysql stuff that takes a while
// XXX You do not need this.
//results.erase(it++);
}
}
You will need to use synchronization methods (i.e. the mutex) to make this work properly. However, the goal of parallel programming is to minimize the critical section (i.e. the amount of code which is executed while you hold the lock).
That said, if your MySQL queries can be run in parallel without synchronization (i.e. multiple calls won't conflict with each other), take them out of the critical section. This will greatly reduce overhead. For instance, a simple refactor as follows could do the trick:
void commitProcessedResources ()
{
// MOVING THIS LOCK
// this can take a while
pthread_mutex_lock(&resultQueueMutex);
std::map<string, result_item_t>::iterator end = results.end();
std::map<string, result_item_t>::iterator begin = results.begin();
pthread_mutex_unlock(&resultQueueMutex);
for (std::map< string, result_item_t >::iterator it = begin; it != end;)
{
// do mysql stuff that takes a while
pthread_mutex_lock(&resultQueueMutex); // Is this the only place we need it?
// This is a MUCH smaller critical section
results.erase(it++);
pthread_mutex_unlock(&resultQueueMutex); // Unlock or everything will block until end of loop
}
// MOVED UNLOCK
}
This will give you concurrent "real-time" access to the data across multiple threads. That is, as every write finishes, the map is updated and can be read elsewhere with current information.
Up through C++03, the standard didn't define anything about threading or thread safety at all (and since you're using pthreads, I'm guessing that's pretty much what you're using).
As such, it's up to you to do locking on your shared map, to ensure that only one thread tries to access the map at any given time. Without that, you're likely to corrupt its internal data structure, so the map is no longer valid at all.
Alternatively (and I'd generally prefer this), you could have your multiple threads just put their data into a thread-safe queue, and have a single thread that gets data from that queue and puts it into the map. Since the map is then single-threaded, you no longer have to lock it when it's in use.
There are a few reasonable possibilities for dealing with the delay while you flush the map to the disk. Probably the simplest is to have the same thread read from the queue, insert into the map, and periodically flush the map to disk. In this case, the incoming data just sits in the queue while the map is being flushed to disk. This keeps access to the map simple -- since only one thread ever touches it directly, it can use the map without any locking.
Another would be to have two maps. At any given time, the thread that flushes to disk gets one map, and the thread that retrieves from the queue and inserts into the map gets the other. When the flushing thread needs to do its thing, it just swaps the roles of the two. Personally, I think I prefer the first though -- eliminating all the locking around the map has a great deal of appeal, at least to me.
Yet another variant that would maintain that simplicity would be for the queue->map thread to create a map, fill it, and when it's full enough (i.e., after the appropriate length of time) push it onto another queue, then repeat from the start (i.e., create a new map, etc.). The flushing thread retrieves a map from its incoming queue, flushes it to disk, and destroys it. Though this adds a bit of overhead creating and destroying maps, you're not doing it often enough to care a lot. You still keep single-threaded access to any map at any time, and still keep all the database access segregated from everything else.
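A sketch of the first (single-owner) variant, in the question's pthread style; the queue, mutex, and loop names are invented for illustration:

#include <pthread.h>
#include <algorithm>
#include <map>
#include <queue>
#include <string>
#include <utility>

struct result_item_t { std::string hash; int state; };   // as in the question

static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static std::queue< std::pair<std::string, result_item_t> > incoming;

void ownerThreadLoop()
{
    std::map<std::string, result_item_t> owned;   // never shared, never locked
    for (;;)
    {
        std::queue< std::pair<std::string, result_item_t> > batch;

        // Hold the lock only long enough to take whatever the workers queued.
        pthread_mutex_lock(&queue_mutex);
        std::swap(batch, incoming);
        pthread_mutex_unlock(&queue_mutex);

        while (!batch.empty())
        {
            owned.insert(batch.front());          // duplicate hashes are dropped
            batch.pop();
        }

        // Every n seconds: flush 'owned' to the database and erase what was
        // sent. No lock is needed; this thread is the map's only user.
    }
}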
Please consider the following:
I have a queue of objects represented as an array. I process them off the top of the array (at position 1) before calling arrayDeleteAt() to remove each one from the array. I add new queue items at the top of the array using arrayAppend().
This works fine. However, I now wish to re-order the array immediately after adding an item.
I am concerned that if a thread is taking from the queue it will find the queue order has changed between it taking the item at position 1 and it deleting the item at position 1, because in that time an additional item has been added and the queue has been re-sorted. So I need to ensure my queue is thread-safe.
Is there any way of doing this using the cflock tag? Since my add and remove code are in different places in the code, the thread executing one bit of code would need to know that a thread is executing another specific bit of code and halt until that other thread has stopped executing its code.
Or am I better off just raising a flag while the sorting is going on and preventing anything being taken from the array while the sort is in progress?
All this is happening in the APPLICATION scope on a CF 8 Enterprise server.
Thanks in advance for any help.
Ciaran
An exclusive CFLOCK should do what you want. You could just scope-lock APPLICATION, but that might be overly broad. Probably best to do it as a named lock. It won't matter where the different bits of code with the lock are located, as long as they're all using the same name.
After posting my solution to my own problem regarding memory issues, nusi suggested that my solution lacks locking.
The following pseudo code vaguely represents my solution in a very simple way.
std::map<int, MyType1> myMap;
void firstFunctionRunFromThread1()
{
MyType1 mt1;
mt1.Test = "Test 1";
myMap[0] = mt1;
}
void onlyFunctionRunFromThread2()
{
MyType1 &mt1 = myMap[0];
std::cout << mt1.Test << std::endl; // Prints "Test 1"
mt1.Test = "Test 2";
}
void secondFunctionFromThread1()
{
MyType1 mt1 = myMap[0];
std::cout << mt1.Test << std::endl; // Prints "Test 2"
}
I'm not sure at all how to go about implementing locking, and I'm not even sure why I should do it (note the actual solution is much more complex). Could someone please explain how and why I should implement locking in this scenario?
One function (i.e. thread) modifies the map, two read it. Therefore a read could be interrupted by a write or vice versa, in both cases the map will probably be corrupted. You need locks.
Actually, it's not even just locking that is the issue...
If you really want thread two to ALWAYS print "Test 1", then you need a condition variable.
The reason is that there is a race condition. Regardless of whether or not you create thread 1 before thread 2, it is possible for thread 2's code to execute before thread 1's, in which case the map will not yet be initialized. To ensure that no one reads from the map until it has been initialized, you need to use a condition variable that thread 1 signals.
You also should use a lock with the map, as others have mentioned, because you want threads to access the map as though they are the only ones using it, and the map needs to be in a consistent state.
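A minimal pthreads sketch of that lock-plus-condition-variable pattern, reusing the question's myMap and MyType1; the 'initialized' flag and the mutex/condition names are invented:

#include <pthread.h>
#include <iostream>
#include <map>
#include <string>

struct MyType1 { std::string Test; };

std::map<int, MyType1> myMap;
pthread_mutex_t map_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  map_ready = PTHREAD_COND_INITIALIZER;
bool initialized = false;

void firstFunctionRunFromThread1()
{
    pthread_mutex_lock(&map_mutex);
    MyType1 mt1;
    mt1.Test = "Test 1";
    myMap[0] = mt1;
    initialized = true;
    pthread_cond_signal(&map_ready);    // wake the waiting reader
    pthread_mutex_unlock(&map_mutex);
}

void onlyFunctionRunFromThread2()
{
    pthread_mutex_lock(&map_mutex);
    while (!initialized)                // loop guards against spurious wakeups
        pthread_cond_wait(&map_ready, &map_mutex);
    MyType1 &mt1 = myMap[0];
    std::cout << mt1.Test << std::endl; // now guaranteed to print "Test 1"
    mt1.Test = "Test 2";
    pthread_mutex_unlock(&map_mutex);
}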
Here is a conceptual example to help you think about it:
Suppose you have a linked list that two threads are accessing. In thread 1, you ask to remove the first element from the list (at the head of the list). In thread 2, you try to read the second element of the list.
Suppose that the delete method is implemented in the following way: make a temporary ptr to point at the second element in the list, make the head point at null, then make the head the temporary ptr...
What if the following sequence of events occurs:
- T1 removes the head's next ptr to the second element.
- T2 tries to read the second element, BUT there is no second element because the head's next ptr was modified.
- T1 completes removing the head and sets the 2nd element as the head.
The read by T2 failed because T1 didn't use a lock to make the delete from the linked list atomic!
That is a contrived example, and isn't necessarily how you would even implement the delete operation; however, it shows why locking is necessary: it is necessary so that operations performed on data are atomic. You do not want other threads using something that is in an inconsistent state.
Hope this helps.
In general, threads might be running on different CPUs/cores, with different memory caches. They might be running on the same core, with one interrupting ("pre-empting") the other. This has two consequences:
1) You have no way of knowing whether one thread will be interrupted by another in the middle of doing something. So in your example, there's no way to be sure that thread1 won't try to read the string value before thread2 has written it, or even that when thread1 reads it, it is in a "consistent state". If it is not in a consistent state, then using it might do anything.
2) When you write to memory in one thread, there is no telling if or when code running in another thread will see that change. The change might sit in the cache of the writer thread and not get flushed to main memory. It might get flushed to main memory but not make it into the cache of the reader thread. Part of the change might make it through, and part of it not.
In general, without locks (or other synchronization mechanisms such as semaphores) you have no way of saying whether something that happens in thread A will occur "before" or "after" something that happens in thread B. You also have no way of saying whether or when changes made in thread A will be "visible" in thread B.
Correct use of locking ensures that all changes are flushed through the caches, so that code sees memory in the state you think it should see. It also allows you to control whether particular bits of code can run simultaneously and/or interrupt each other.
In this case, looking at your code above, the minimum locking you need is to have a synchronisation primitive which is released/posted by the second thread (the writer) after it has written the string, and acquired/waited on by the first thread (the reader) before using that string. This would then guarantee that the first thread sees any changes made by the second thread.
That's assuming the second thread isn't started until after firstFunctionRunFromThread1 has been called. If that might not be the case, then you need the same deal with thread1 writing and thread2 reading.
The simplest way to actually do this is to have a mutex which "protects" your data. You decide what data you're protecting, and any code which reads or writes the data must be holding the mutex while it does so. So first you lock, then read and/or write the data, then unlock. This ensures consistent state, but on its own it does not ensure that thread2 will get a chance to do anything at all in between thread1's two different functions.
Any kind of message-passing mechanism will also include the necessary memory barriers, so if you send a message from the writer thread to the reader thread, meaning "I've finished writing, you can read now", then that will be true.
There can be more efficient ways of doing certain things, if those prove too slow.
The whole idea is to prevent the program from going into an indeterminate/unsafe state due to multiple threads accessing the same resource(s) and/or updating/modifying the resource so that the subsequent state becomes undefined. Read up on Mutexes and Locking (with examples).
The set of instructions created as a result of compiling your code can be interleaved in any order. This can yield unpredictable and undesired results. For example, if thread1 runs before thread2 is selected to run, your output may look like:
Test 1
Test 1
Worse yet, one thread may get pre-empted in the middle of an assignment, if assignment is not an atomic operation. In this case, let's think of "atomic" as the smallest unit of work which cannot be further split.
The way to create a logically atomic set of instructions -- even if they yield multiple machine-code instructions in reality -- is to use a lock or mutex. Mutex stands for "mutual exclusion", because that's exactly what it does. It ensures exclusive access to certain objects or critical sections of code.
One of the major challenges in dealing with multiprogramming is identifying critical sections. In this case, you have two critical sections: where you assign to myMap, and where you change myMap[0]. Since you don't want to read myMap before writing to it, that is also a critical section.
The simplest answer is: you have to lock whenever you access a shared resource that is not atomic. In your case, myMap is the shared resource, so you have to lock all reading and writing operations on it.
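One common way to make "lock every access" hard to get wrong is a small RAII guard; this is a sketch similar in spirit to boost::mutex::scoped_lock, and the class name is made up:

#include <pthread.h>

class ScopedLock {
public:
    explicit ScopedLock(pthread_mutex_t &m) : m_(m) { pthread_mutex_lock(&m_); }
    ~ScopedLock() { pthread_mutex_unlock(&m_); }    // releases on every exit path
private:
    ScopedLock(const ScopedLock &);                 // non-copyable
    ScopedLock &operator=(const ScopedLock &);
    pthread_mutex_t &m_;
};

// Usage in the example above, assuming a mapMutex guarding myMap:
//   void secondFunctionFromThread1()
//   {
//       ScopedLock guard(mapMutex);
//       MyType1 mt1 = myMap[0];
//       std::cout << mt1.Test << std::endl;
//   }   // mutex released automatically here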