multithread function restart pattern/solution - c++

I have the following scenario:
One thread is iterating over a list data structure in method A of class X. That data structure is a cache. At any time, we can get a call to a method B in class X saying our cache is out of date. In that case, we need to restart method A if we are currently inside it, since iterating over that data structure could cause us to find data that is no longer present. We can count on method B not being called twice at the same time (method A will have time to complete).
Is this possible? I am working with C++. Note that simply locking the cache is not enough. If we lock the cache and we get a call saying the cache is out of date, we need to restart function A right away for proper behaviour.
This wouldn't work correctly, but I will attempt to show what I need:
class X
{
    Method A
    {
        for each // data structure
        {
            // do processing
            // check if our cache is out of date
            if (mRestart)
            {
                while (!mReadyToRestart)
                    ; // wait
                mRestart = false;
                mReadyToRestart = false;
                // Break, and call something that will recall this function.
            }
        }
        return result; // returning means we never needed to restart
    }
    Method B
    {
        mRestart = true;
        // Do processing
        mReadyToRestart = true;
    }
    bool mRestart;        // init to false in constructor
    bool mReadyToRestart; // init to false in constructor
}

You need to use synchronization mechanisms to protect the mRestart and mReadyToRestart members, and the data structures you're working on, from concurrent access.
Depending on your particular needs, OS, and build environment, you could use C++11 standard mutexes and condition variables, lower-level OS primitives, or frameworks (e.g. boost::thread) to realize this in C++.
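For example, here is a minimal C++11 sketch, assuming the cache is a std::vector<int> and with doProcessing() and rebuildCache() standing in for the real work: B interrupts A through an atomic flag, rebuilds the cache once A has bailed out and released the lock inside the condition-variable wait, then signals A to start over.
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <vector>

class X {
public:
    int A() {
        for (;;) {                                   // restart loop
            std::unique_lock<std::mutex> lock(mCacheMutex);
            bool interrupted = false;
            int result = 0;
            for (int item : mCache) {
                result += doProcessing(item);        // placeholder work
                if (mRestart.load(std::memory_order_acquire)) {
                    interrupted = true;
                    break;                           // abandon this pass
                }
            }
            if (!interrupted)
                return result;                       // a clean, complete pass
            // wait() releases the lock, letting B rebuild the cache.
            mReady.wait(lock, [this] { return mReadyToRestart; });
            mReadyToRestart = false;
            mRestart.store(false, std::memory_order_release);
        }
    }

    void B() {
        mRestart.store(true, std::memory_order_release);   // interrupt A
        {
            // A sees the flag, bails out, and releases the lock in wait().
            std::lock_guard<std::mutex> lock(mCacheMutex);
            rebuildCache();                                // placeholder
            mReadyToRestart = true;
        }
        mReady.notify_one();                               // let A start over
    }

private:
    int doProcessing(int item) { return item; }            // placeholder
    void rebuildCache() {}                                 // placeholder
    std::vector<int> mCache;
    std::atomic<bool> mRestart{false};
    bool mReadyToRestart = false;                          // guarded by mCacheMutex
    std::mutex mCacheMutex;
    std::condition_variable mReady;
};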

I don't understand how can optimistic concurrency be implemented in C++11

I'm trying to implement a protected variable that does not use locks in C++11. I have read a little about optimistic concurrency, but I can't understand how it can be implemented, either in C++ or in any other language.
The way I'm trying to implement the optimistic concurrency is by using a 'last modification id'. The process I'm following is:
Take a copy of the last modification id.
Modify the protected value.
Compare the local copy of the modification id with the current one.
If the above comparison is true, commit the changes.
The problem I see is that, after comparing the 'last modification ids' (local copy and current one) and before committing the changes, there is no way to ensure that no other thread has modified the value of the protected variable.
Below is an example of the code. Let's suppose there are many threads executing this code and sharing the variable var.
/**
 * This struct is intended to implement a protected variable,
 * but using optimistic concurrency instead of locks.
 */
struct ProtectedVariable final {
    ProtectedVariable() : var(0), lastModificationId(0) { }

    int getValue() const {
        return var.load();
    }

    void setValue(int val) {
        // This method is not atomic: another thread could change 'var'
        // between this store and the increment of the modification id.
        var.store(val);
        lastModificationId.store(lastModificationId.load() + 1);
    }

    size_t getLastModificationId() const {
        return lastModificationId.load();
    }

private:
    std::atomic<int> var;
    std::atomic<size_t> lastModificationId;
};
ProtectedVariable var;
/**
 * Suppose this method writes a value to some sort of database.
 */
int commitChanges(int val, size_t currModifId) {
    // Now, if nobody has changed the value of 'var', commit it;
    // otherwise the transaction must be retried.
    if (var.getLastModificationId() == currModifId) {
        // Here is one of the problems: after comparing the two ids,
        // another thread could modify the value of 'var', hence I would
        // be performing the commit with a corrupted value.
        var.setValue(val);
        // Again, the same problem as above.
        writeToDatabase(val);
        // Return 'ok' in case everything has gone well.
        return 0;
    } else {
        // Someone has changed the value of 'var' while we were
        // calculating and committing it, so return an error.
        return -1;
    }
}
/**
 * This method is meant to be atomic, but without using locks.
 */
void modifyVar() {
    // Remember the modification id so we can check whether some other
    // thread modifies the value of 'var' before we commit.
    size_t currModifId = var.getLastModificationId();
    // Get a local copy of 'var'.
    int currVal = var.getValue();
    // Perform some operations based on the current value of 'var'.
    int newVal = currVal + 1 * 2 / 3;
    if (commitChanges(newVal, currModifId) != 0) {
        // Someone has changed the value of 'var' while we were
        // calculating and committing it, so retry the transaction.
        modifyVar();
    }
}
I know that the above code is buggy, but I don't understand how to implement something like the above in a correct way, without bugs.
Optimistic concurrency doesn't mean that you don't use locks; it merely means that you don't hold the locks during most of the operation.
The idea is that you split your modification into three parts:
Initialization, like getting the lastModificationId. This part may need locks, but not necessarily.
Actual computation. All expensive or blocking code goes here (including any disk writes or network code). The results are written in such a way that they do not obscure the previous version. The likely way this works is by storing the new values next to the old ones, indexed by a not-yet-committed version.
Atomic commit. This part is locked, and must be short, simple, and non-blocking. The likely way this works is that it just bumps the version number - after confirming that no other version was committed in the meantime. No database writes at this stage.
The main assumption here is that the computation part is much more expensive than the commit part. If your modification is trivial and the computation cheap, then you can just use a lock, which is much simpler.
Some example code structured into these 3 parts could look like this:
struct Data {
    ...
};
...
std::mutex lock;
std::atomic<const Data*> value;            // the protected data
std::atomic<int> current_value_version{0}; // read outside the lock, hence atomic
...
bool modifyProtectedValue() {
    // Initialize.
    int version_on_entry = current_value_version;

    // Compute the new value, using the current value.
    // We don't hold any lock here, so it's fine to make heavy
    // computations or block on I/O.
    Data* new_value = new Data;
    compute_new_value(value.load(), new_value);

    // Commit or fail.
    bool success;
    lock.lock();
    if (current_value_version == version_on_entry) {
        value = new_value;
        current_value_version++;
        success = true;
    } else {
        success = false;
    }
    lock.unlock();

    // Roll back in case of failure.
    if (!success) {
        delete new_value;
    }

    // Inform the caller about success or failure.
    return success;
}

// It's cleaner to keep the retry logic separate.
bool retryModification(int retries = 5) {
    for (int i = 0; i < retries; ++i) {
        if (modifyProtectedValue()) {
            return true;
        }
    }
    return false;
}
This is a very basic approach, and in particular the rollback is trivial. In a real-world example, re-creating the whole Data object (or its counterpart) would likely be infeasible, so the versioning would have to be done somewhere inside, and the rollback could be much more complex. But I hope it shows the general idea.
The key here is acquire-release semantics and test-and-increment. Acquire-release semantics are how you enforce an order of operations. Test-and-increment is how you choose which thread wins in case of a race.
Your problem therefore is the .store(lastModificationId.load() + 1). You need .fetch_add(1): it returns the old value, and if that's not the expected value (from before your read), you lost the race and must retry.
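A minimal sketch of that retry loop, written with compare_exchange_weak instead of fetch_add (it packages the "test against the expected value, reload on failure" step into a single call; the computation here is a stand-in):
#include <atomic>

std::atomic<int> var{0};

// Optimistically update 'var': read it, compute the new value from the
// local copy, and commit only if nobody changed 'var' in the meantime.
// On failure, compare_exchange_weak reloads 'seen' and we recompute.
void modifyVar() {
    int seen = var.load(std::memory_order_acquire);
    int newVal;
    do {
        newVal = seen + 1; // stand-in for the real computation
    } while (!var.compare_exchange_weak(seen, newVal,
                                        std::memory_order_acq_rel,
                                        std::memory_order_acquire));
}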
If I understand your question, you mean to make sure var and lastModificationId are either both changed, or neither is.
Why not use std::atomic<T>, where T is a structure that holds both the int and the size_t?
struct VarWithModificationId {
    int var;
    size_t lastModificationId;
};

class ProtectedVariable {
private:
    std::atomic<VarWithModificationId> protectedVar;
    // Add your public setter/getter methods here.
    // You should be guaranteed that if two threads access protectedVar, they'll
    // each get a 'consistent' view of that variable, but the setter will need
    // to use a lock.
};
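A sketch of how that could look; the accessor names are made up, and note that a struct of this size will usually not be lock-free, in which case std::atomic transparently falls back to an internal lock:
#include <atomic>
#include <cstddef>

struct VarWithModificationId {
    int var;
    std::size_t lastModificationId;
};

class ProtectedVariable {
public:
    VarWithModificationId get() const { return protectedVar.load(); }

    // The value and its modification id change together or not at all:
    // the CAS retry loop republishes both fields as one atomic unit.
    void setValue(int val) {
        VarWithModificationId cur = protectedVar.load();
        VarWithModificationId next;
        do {
            next.var = val;
            next.lastModificationId = cur.lastModificationId + 1;
        } while (!protectedVar.compare_exchange_weak(cur, next));
    }

private:
    std::atomic<VarWithModificationId> protectedVar{VarWithModificationId{0, 0}};
};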
Optimistic concurrency is used in database engines when different users are expected to access the same data only rarely. It could go like this:
The first user reads the data and a timestamp. The user handles the data for some time, then checks that the timestamp in the DB hasn't changed since the data was read; if it hasn't, the user updates the data and the timestamp.
But internally the DB engine uses locks for the update anyway: during that lock it checks whether the timestamp has changed, and if it hasn't, it updates the data. The time for which the data is locked is just smaller than with pessimistic concurrency. So you still need to use some kind of locking.

Add and remove from a list in runtime

I have a simulation program. In the main class of the simulation I am "creating + adding" and "removing + destroying" Agents.
The problem is that once in a while (once every 3-4 times I run the program) the program crashes because I am apparently calling a function of an invalid agent in the main loop. The program works just fine most of the time. There are normally thousands of agents in the list.
I don't know how it is possible that I have invalid Agents in my loop.
It is very difficult to debug the code, because I receive the memory exception inside the Agent::Step function, which is too late: I cannot see how the invalid Agent got into the list and got called.
When I look at the Agent reference inside the Agent::Step function (the exception point), no data in the agent makes sense, not even the initialized data. So it is definitely invalid.
void World::step()
{
    AddDemand();

    // Run over all the agents and check whether they have remaining actions.
    // Call their step function if they have; otherwise remove them from space and memory.
    list<Agent*>::iterator it = agents_.begin();
    while (it != agents_.end())
    {
        if (!(*it)->AllIntentionsFinished())
        {
            (*it)->step();
            it++;
        }
        else
        {
            (*it)->removeYourselfFromSpace(); // removes its reference from the space
            delete (*it);
            agents_.erase(it++);
        }
    }
}

void World::AddDemand()
{
    int demand = demandIdentifier_.getDemandLevel(stepCounter_);
    for (int i = 0; i < demand; i++)
    {
        Agent* tmp = new Agent(*this);
        agents_.push_back(tmp);
    }
}
Agent:
bool Agent::AllIntentionsFinished()
{
    return this->allIntentionsFinished_; // bool flag is true once all work is done
}
1- Is it possible that Visual Studio 2012's loop optimizations (i.e. running loops multi-threaded where possible) create the problem?
2- Any suggestions on debugging the code?
If you're running the code multi-threaded, then you'll need to add code to protect things like adding items to and removing items from the list. You can create a wrapper that adds thread safety for a container fairly easily -- have a mutex that you lock any time you do a potentially modifying operation on the underlying container.
template <class Container>
class thread_safe {
    Container c;
    std::mutex m;
public:
    void push_back(typename Container::value_type const &t) {
        std::lock_guard<std::mutex> l(m);
        c.push_back(t);
    }
    // ...
};
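Usage might then look like this (a sketch, reusing the names from the question):
thread_safe<std::list<Agent*>> agents;   // shared between threads
// ...
agents.push_back(new Agent(world));      // locked internally, safe from any thread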
A few other points:
You can almost certainly clean your code up quite a bit by having the list hold Agents directly, instead of a pointer to an Agent that you have to allocate dynamically.
Your Agent::RemoveYourselfFromSpace looks/sounds a lot like something that should be handled by Agent's destructor.
You can almost certainly do quite a bit more to clean up the code by using some standard algorithms.
For example, it looks to me like your step could be written something like this:
agents.remove_if([](Agent const &a) { return a.AllIntentionsFinished(); });
std::for_each(agents.begin(), agents.end(),
              [](Agent &a) { a.step(); });
...or, you might prefer to continue using an explicit loop, but use something like:
for (Agent & a : agents)
    a.step();
The problem is this:
agents_.erase(it++);
See Add and remove from a list in runtime
I don't see any thread-safe components in the code you showed, so if you are running multiple threads and sharing data between them, then absolutely you could have a threading issue. For instance, you do this:
(*it)->removeYourselfFromSpace(); //removes its reference from the space
delete (*it);
agents_.erase(it++);
This is the worst possible order for an unlocked list. You should remove the element from the list first, and only then tear down and delete the object, as sketched below.
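A sketch of the loop restructured into that order, using the names from the question:
list<Agent*>::iterator it = agents_.begin();
while (it != agents_.end())
{
    if (!(*it)->AllIntentionsFinished())
    {
        (*it)->step();
        ++it;
    }
    else
    {
        Agent* doomed = *it;
        it = agents_.erase(it);            // unlink from the list first...
        doomed->removeYourselfFromSpace();
        delete doomed;                     // ...then destroy the object
    }
}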
But if you are not specifically creating threads which share lists/agents, then threading is probably not your problem.

How to use C++11 <thread> designing a system which pulls data from sources

This question comes from:
C++11 thread doesn't work with virtual member function
As suggested in a comment, my question in the previous post may not be the right one to ask, so here is the original question:
I want to make a capturing system which will query a few sources at a constant/dynamic frequency (it varies by source, say 10 times/sec) and pull data into each source's queue. The sources are not fixed; they may be added/removed at run time.
And there is a monitor which pulls from the queues at a constant frequency and displays the data.
So what is the best design pattern or structure for this problem?
I'm trying to make a list of all the source pullers; each puller holds a thread and a specified pulling function (the pulling function may interact with the puller, say if the source is drained it will ask to stop the pulling on that thread).
Unless the operation where you query a source is blocking (or you have lots of them), you don't need to use threads for this. We could start with a Producer which will work with either synchronous or asynchronous (threaded) dispatch:
template <typename OutputType>
class Producer
{
    std::list<OutputType> output;
protected:
    int poll_interval; // seconds? milliseconds?
    virtual OutputType query() = 0;
public:
    virtual ~Producer() {}

    int next_poll_interval() const { return poll_interval; }
    void poll() { output.push_back(this->query()); }
    std::size_t size() { return output.size(); }
    // whatever accessors you need for the queue go here:
    // pop_front, swap entire list, etc.
};
Now we can derive from this Producer and just implement the query method in each subtype. You can set poll_interval in the constructor and leave it alone, or change it on every call to query. There's your general producer component, with no dependency on the dispatch mechanism.
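For example, a hypothetical subtype (name and behaviour made up, just to show the shape):
#include <cstdlib>

class RandomSource : public Producer<int>
{
protected:
    int query() { return std::rand(); }     // the only thing a subtype must supply
public:
    RandomSource() { poll_interval = 100; } // e.g. poll every 100 ms
};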
template <typename OutputType>
class ThreadDispatcher
{
    Producer<OutputType> *producer;
    bool shutdown;
    std::thread thread;

    static void loop(ThreadDispatcher *self)
    {
        Producer<OutputType> *producer = self->producer;
        while (!self->shutdown)
        {
            producer->poll();
            // some mechanism to pass the produced values back to the owner
            auto delay = // assume millis for the sake of argument
                std::chrono::milliseconds(producer->next_poll_interval());
            std::this_thread::sleep_for(delay);
        }
    }

public:
    explicit ThreadDispatcher(Producer<OutputType> *p)
        : producer(p), shutdown(false), thread(loop, this)
    {
    }

    ~ThreadDispatcher()
    {
        shutdown = true;
        thread.join();
    }

    // again, the accessors you need for reading produced values go here;
    // Producer::output isn't synchronised, so you can't expose it directly
    // to the calling thread
};
This is a quick sketch of a simple dispatcher that would run your producer in a thread, polling it however often you ask it to. Note that passing produced values back to the owner isn't shown, because I don't know how you want to access them.
Also note I haven't synchronized access to the shutdown flag - it should probably be atomic, but it might be implicitly synchronized by whatever you choose to do with the produced values.
With this organization, it'd also be easy to write a synchronous dispatcher to query multiple producers in a single thread, for example from a select/poll loop, or using something like Boost.Asio and a deadline timer per producer.
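A sketch of that synchronous variant, assuming millisecond intervals and a non-empty producer list:
#include <algorithm>
#include <chrono>
#include <thread>
#include <vector>

template <typename OutputType>
void run_synchronously(std::vector<Producer<OutputType>*> &producers)
{
    using clock = std::chrono::steady_clock;
    std::vector<clock::time_point> deadlines(producers.size(), clock::now());
    for (;;) // add your own termination condition
    {
        clock::time_point next = deadlines[0];
        for (std::size_t i = 0; i < producers.size(); ++i)
        {
            if (clock::now() >= deadlines[i])
            {
                producers[i]->poll(); // this producer is due: query it now
                deadlines[i] = clock::now() + std::chrono::milliseconds(
                    producers[i]->next_poll_interval());
            }
            next = std::min(next, deadlines[i]);
        }
        std::this_thread::sleep_until(next); // sleep until the earliest deadline
    }
}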

Is it safe to modify data of pointer in vector from another thread?

Things seem to be working but I'm unsure if this is the best way to go about it.
Basically I have an object which does asynchronous retrieval of data. This object has a vector of pointers which are allocated and de-allocated on the main thread. Using boost::function, a process-results callback is bound to one of the pointers in this vector. When it fires, it will be running on some arbitrary thread and modify the data behind the pointer.
Now I have critical sections around the parts that push into the vector and erase from it, in case the async retrieval object receives more requests, but I'm wondering if I need some kind of guard in the callback that modifies the pointer's data as well.
Hopefully this slimmed down pseudo code makes things more clear:
class CAsyncRetriever
{
    // typedefs of boost functions

    class DataObject
    {
        // methods and members
    };

public:
    // Start a single async retrieve with a completion callback
    void Start(SomeArgs)
    {
        SetupRetrieve(SomeArgs);
        LaunchRetrieves();
    }

protected:
    void SetupRetrieve(SomeArgs)
    {
        // ...
        { // scope for data lock
            boost::lock_guard<boost::mutex> lock(m_dataMutex);
            m_inProgress.push_back(SmartPtr<DataObject>(new DataObject));
            m_callback = boost::bind(&CAsyncRetriever::ProcessResults, this, _1, m_inProgress.back());
        }
        // ...
    }

    void ProcessResults(DataObject* data)
    {
        // CALLED ON ANOTHER THREAD ... IS THIS SAFE?
        data->m_SomeMember.SomeMethod();
        data->m_SomeOtherMember = SomeStuff;
    }

    void Cleanup()
    {
        // ...
        { // scope for data lock
            boost::lock_guard<boost::mutex> lock(m_dataMutex);
            while (!m_inProgress.empty() && m_inProgress.front()->IsComplete())
                m_inProgress.erase(m_inProgress.begin());
        }
        // ...
    }

private:
    std::vector<SmartPtr<DataObject>> m_inProgress;
    boost::mutex m_dataMutex;
    // other members
};
Edit: This is the actual code for the ProcessResults callback (plus comments for your benefit):
void ProcessResults(CRetrieveResults* pRetrieveResults, CRetData* data)
{
    // pRetrieveResults is a delayed binding that the server passes in when
    // invoking the callback in the thread pool.
    // data is a raw pointer to a ref-counted object in the vector of the
    // main thread (the DataObject* in question).

    // If there was an error, set the code on the atomic int in the object.
    data->m_nErrorCode.Store_Release(pRetrieveResults->GetErrorCode());

    // Generic iterator of result bindings for the generic storage class item.
    TPackedDataIterator<GenItem::CBind> dataItr(&pRetrieveResults->m_DataIter);
    // Namespace function which will iterate the results and initialize the
    // generic storage. This is potentially time consuming, depending on the
    // number of results and the number of columns bound in the storage class
    // definition (about 8 seconds for a million equipment items in release).
    GenericStorage::InitializeItems<GenItem>(&data->m_items, dataItr,
                                             pRetrieveResults->m_nTotalResultsFound);

    // Atomic uint32_t that is incremented when kicking off an async retrieve.
    m_nStarted.Decrement(); // this one is done processing

    // boost function completion callback bound to the interface that
    // requested the results.
    data->m_complete(data->m_items);
}
As it stands, it appears that the Cleanup code can destroy an object for which a callback to ProcessResults is in flight. That's going to cause problems when you deref the pointer in the callback.
My suggestion would be to extend the semantics of your m_dataMutex to encompass the callback. Things are more complex if the callback is long-running, or can happen inline within SetupRetrieve (sometimes this does happen, though here you state the callback is on a different thread, in which case you are OK). Currently m_dataMutex is a bit confused about whether it controls access to the vector, to its contents, or to both. With its scope clarified, ProcessResults could then be enhanced to verify the validity of the payload while holding the lock.
No, it isn't safe.
ProcessResults operates on the data structure passed to it through DataObject. It indicates that you have shared state between different threads, and if both threads operate on the data structure concurrently you might have some trouble coming your way.
Updating a pointer should be an atomic operation, but you can use InterlockedExchangePointer (in Windows) to be sure. Not sure what the Linux equivalent would be.
The only consideration then would be if one thread is using an obsolete pointer. Does the other thread delete the object pointed to by the original pointer? If so, you have a definite problem.
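The portable C++11 equivalent would be std::atomic<T*>; a sketch (DataObject here is just a stand-in type):
#include <atomic>

struct DataObject { /* ... */ };

std::atomic<DataObject*> current{nullptr};

// Atomically publish a new object and receive the previous pointer, so
// exactly one thread ends up owning (and may safely delete) the old one.
DataObject* replace(DataObject* fresh)
{
    return current.exchange(fresh, std::memory_order_acq_rel);
}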

How do I use an arbitrary string as a lock in C++?

Let's say I have a multithreaded C++ program that handles requests in the form of a function call to handleRequest(string key). Each call to handleRequest occurs in a separate thread, and there are an arbitrarily large number of possible values for key.
I want the following behavior:
Simultaneous calls to handleRequest(key) are serialized when they have the same value for key.
Global serialization is minimized.
The body of handleRequest might look like this:
void handleRequest(string key) {
    KeyLock lock(key);
    // Handle the request.
}
Question: How would I implement KeyLock to get the required behavior?
A naive implementation might start off like this:
KeyLock::KeyLock(string key) {
    global_lock->Lock();
    internal_lock_ = global_key_map[key];
    if (internal_lock_ == NULL) {
        internal_lock_ = new Lock();
        global_key_map[key] = internal_lock_;
    }
    global_lock->Unlock();
    internal_lock_->Lock();
}

KeyLock::~KeyLock() {
    internal_lock_->Unlock();
    // Remove internal_lock_ from global_key_map iff no other threads are waiting for it.
}
...but that requires a global lock at the beginning and end of each request, and the creation of a separate Lock object for each request. If contention is high between calls to handleRequest, that might not be a problem, but it could impose a lot of overhead if contention is low.
You could do something similar to what you have in your question, but instead of a single global_key_map have several (probably in an array or vector) - which one is used is determined by some simple hash function on the string.
That way instead of a single global lock, you spread that out over several independent ones.
This is a pattern that is often used in memory allocators (I don't know if the pattern has a name - it should). When a request comes in, something determines which pool the allocation will come from (usually the size of the request, but other parameters can factor in as well), then only that pool needs to be locked. If an allocation request comes in from another thread that will use a different pool, there's no lock contention.
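A sketch of that hashed pool of locks (the pool size and hash choice here are arbitrary):
#include <cstddef>
#include <functional>
#include <mutex>
#include <string>

class KeyLockPool {
public:
    // Keys that hash to the same slot share a mutex; that costs some
    // false contention but never affects correctness.
    std::mutex& lockFor(const std::string& key) {
        return locks_[std::hash<std::string>()(key) % kPools];
    }
private:
    static const std::size_t kPools = 16;
    std::mutex locks_[kPools];
};

// Usage inside handleRequest:
//   static KeyLockPool pool;
//   std::lock_guard<std::mutex> guard(pool.lockFor(key));
//   // handle the request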
It will depend on the platform, but the two techniques that I'd try would be:
Use named mutex/synchronization objects, where object name = key.
Use filesystem-based locking, where you try to create a non-shareable temporary file with the key name. If it already exists (= already locked), this will fail and you'll have to poll to retry.
Both techniques will depend on the details of your OS. Experiment and see which works.
Perhaps a std::map<std::string, MutexType> would be what you want, where MutexType is the type of the mutex you want. You will probably have to wrap accesses to the map in another mutex in order to ensure that no other thread is inserting at the same time (and remember to perform the check again after the mutex is locked, to ensure that another thread didn't add the key while you were waiting on the mutex!).
The same principle could apply to any other synchronization method, such as a critical section.
Raise granularity and lock entire key-ranges
This is a variation on Mike B's answer, where instead of having several fluid lock maps you have a single fixed array of locks that apply to key-ranges instead of single keys.
Simplified example: create array of 256 locks at startup, then use first byte of key to determine index of lock to be acquired (i.e. all keys starting with 'k' will be guarded by locks[107]).
To sustain optimal throughput you should analyze distribution of keys and contention rate. The benefits of this approach are zero dynamic allocations and simple cleanup; you also avoid two-step locking. The downside is potential contention peaks if key distribution becomes skewed over time.
After thinking about it, another approach might go something like this:
In handleRequest, create a Callback that does the actual work.
Create a multimap<string, Callback*> global_key_map, protected by a mutex.
If a thread sees that key is already being processed, it adds its Callback* to the global_key_map and returns.
Otherwise, it calls its callback immediately, and then calls the callbacks that have shown up in the meantime for the same key.
Implemented, it might look something like this:
LockAndCall(string key, Callback* callback) {
    global_lock.Lock();
    if (!global_key_map.contains(key)) {
        // Nobody is processing this key yet: mark it in-progress, run our
        // callback, then drain any callbacks that arrive in the meantime.
        iterator iter = global_key_map.insert(key, callback);
        while (true) {
            global_lock.Unlock();
            iter->second->Call();
            global_lock.Lock();
            global_key_map.erase(iter);
            iter = global_key_map.find(key);
            if (iter == global_key_map.end()) {
                global_lock.Unlock();
                return;
            }
        }
    } else {
        // The key is already being processed: queue the callback and return.
        global_key_map.insert(key, callback);
        global_lock.Unlock();
    }
}
This has the advantage of freeing up threads that would otherwise be waiting for a key lock, but apart from that it's pretty much the same as the naive solution I posted in the question.
It could be combined with the answers given by Mike B and Constantin, though.
/**
 * StringLock class for a string-based locking mechanism
 * e.g. usage
 *   StringLock strLock;
 *   strLock.Lock("row1");
 *   strLock.UnLock("row1");
 */
class StringLock {
public:
    /**
     * Constructor
     * Initializes the mutexes
     */
    StringLock() {
        pthread_mutex_init(&mtxGlobal, NULL);
    }

    /**
     * Lock Function
     * The thread will return immediately if the string is not locked;
     * if it is locked, the thread will wait until it gets its turn
     * @param lockString the string to lock
     */
    void Lock(string lockString) {
        pthread_mutex_lock(&mtxGlobal);
        TWaiter *wtr = new TWaiter;
        wtr->evPtr = NULL;
        wtr->threadId = pthread_self();
        if (lockMap.find(lockString) == lockMap.end()) {
            TListIds *listId = new TListIds();
            listId->insert(listId->end(), wtr);
            lockMap[lockString] = listId;
            pthread_mutex_unlock(&mtxGlobal);
        } else {
            wtr->evPtr = new Event(false);
            TListIds *listId = lockMap[lockString];
            listId->insert(listId->end(), wtr);
            pthread_mutex_unlock(&mtxGlobal);
            wtr->evPtr->Wait();
        }
    }

    /**
     * UnLock Function
     * @param lockString the string to unlock
     */
    void UnLock(string lockString) {
        pthread_mutex_lock(&mtxGlobal);
        if (lockMap.find(lockString) != lockMap.end()) {
            TListIds *listID = lockMap[lockString];
            TWaiter *holder = listID->front();
            listID->pop_front();
            delete holder->evPtr; // NULL for the first locker
            delete holder;
            if (!listID->empty()) {
                TWaiter *wtr = listID->front();
                wtr->evPtr->Signal();
            } else {
                lockMap.erase(lockString);
                delete listID;
            }
        }
        pthread_mutex_unlock(&mtxGlobal);
    }

protected:
    struct TWaiter {
        Event *evPtr;
        long threadId;
    };

    StringLock(StringLock &);
    void operator=(StringLock &);

    typedef list<TWaiter*> TListIds;
    typedef map<string, TListIds*> TMapLockWaiters;

private:
    pthread_mutex_t mtxGlobal;
    TMapLockWaiters lockMap;
};