In our application we deal with data that is processed in a worker thread and accessed in a display thread, and we have a mutex that takes care of the critical sections. Nothing special.
Now we have thought about reworking our code, where currently locking is done explicitly by the party holding and handling the data. We thought of a single entity that holds the data and only gives access to it in a guarded fashion.
For this, we have a class called GuardedData. The caller can request such an object and should keep it only for a short time in local scope. As long as the object lives, it keeps the lock. As soon as the object is destroyed, the lock is released. Data access is thus coupled with the locking mechanism without any explicit extra work by the caller, and the name of the class reminds the caller that a guard is present.
template<typename T, typename Lockable>
class GuardedData {
public:
    GuardedData(T &d, Lockable &m) : guard(m), data(d) {}
    T *operator->() { return &data; }   // operator-> must return a pointer

private:
    boost::lock_guard<Lockable> guard;  // taken on construction, released on destruction
    T &data;
};
Again, a very simple concept. The operator-> mimics the semantics of STL iterators for access to the payload.
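For illustration, a typical call site might look like this (a sketch, with Payload standing in for our actual data type and assuming the operator-> above):
#include <boost/thread/lock_guard.hpp>
#include <boost/thread/mutex.hpp>

struct Payload {                    // stand-in for the real data type
    void process() {}
};

boost::mutex payloadMutex;
Payload payload;

void display_thread()
{
    // The guard lives only in this short local scope.
    GuardedData<Payload, boost::mutex> d(payload, payloadMutex);
    d->process();                   // access the payload while the lock is held
}                                   // d is destroyed here and the lock is released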
Now I wonder:
Is this approach well known?
Is there maybe a templated class like this already available, e.g. in the boost libraries?
I am asking because I think it is a fairly generic and usable concept. I could not find anything like it though.
Depending upon how this is used, you are almost guaranteed to end up with deadlocks at some point. If you want to operate on two pieces of data, you end up locking the mutex twice and deadlocking, unless you use a recursive mutex, which may not be desired. Giving each piece of data its own mutex instead also results in deadlock if the lock order is not consistent, and you have no control over that with this scheme without making it really complicated.
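For instance, with a single non-recursive mutex, innocently nesting two guards already self-deadlocks (a sketch reusing the class and the Payload type from the question):
void update_both(Payload &a, Payload &b, boost::mutex &m)
{
    GuardedData<Payload, boost::mutex> ga(a, m); // locks m
    GuardedData<Payload, boost::mutex> gb(b, m); // locks m again: deadlock
}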
Also, how are your GuardedData objects passed around? boost::lock_guard is not copyable - it raises ownership issues on the mutex i.e. where & when it is released.
It's probably easier to copy the parts of the data you need into the reader/writer threads as and when they need them, keeping the critical section short. The writer would similarly commit its changes to the data model in one go.
Essentially your viewer thread gets a snapshot of the data it needs at a given time. This may even fit entirely in a CPU cache sitting near the core that is running the thread and never make it into RAM. The writer thread may modify the underlying data whilst the reader is dealing with it (but that should invalidate the view). However, since the viewer has a copy, it can carry on and present a view of the data as of the moment it was synchronized with the model.
The other option is to give the view a smart pointer to the data (which should be treated as immutable). If the writer wishes to modify the data, it copies it at that point, modifies the copy and, when complete, switches the pointer to the data in the model. This would necessitate blocking all readers/writers whilst processing, unless there is only one writer. The next time the reader requests the data, it gets the fresh copy.
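A minimal C++11 sketch of that pointer-swap scheme, assuming a single writer (Data and all names here are illustrative):
#include <memory>
#include <mutex>

struct Data { int value = 0; };       // stand-in for the real payload

class Model {
public:
    Model() : current(std::make_shared<const Data>()) {}

    // Readers take a cheap snapshot; the shared_ptr keeps that version
    // alive even if the writer publishes a new one meanwhile.
    std::shared_ptr<const Data> snapshot() const {
        std::lock_guard<std::mutex> lk(m);
        return current;
    }

    // Single writer: copy, modify the copy, then swap the pointer in.
    void update(int newValue) {
        auto fresh = std::make_shared<Data>(*snapshot());
        fresh->value = newValue;      // modify the private copy, no lock needed
        std::lock_guard<std::mutex> lk(m);
        current = std::move(fresh);   // readers see the new version next time
    }

private:
    mutable std::mutex m;
    std::shared_ptr<const Data> current;
};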
Well known, I'm not sure. However, I pretty often use a similar mechanism in Qt called QMutexLocker. The distinction (a minor one, imho) is that you bind the data together with the mutex. A very similar mechanism to the one you've described is the norm for thread synchronization in C#.
Your approach is nice for guarding one data item at a time but gets cumbersome if you need to guard more than that. Additionally, it doesn't look like your design would stop me from creating this object in a shared place and accessing the data as often as I please, thinking that it's guarded perfectly fine, but in reality recursive access scenarios are not handled, nor are multi-threaded access scenarios if they occur in the same scope.
There seems to me to be a slight disconnect in the idea. Its use conveys that accessing the data is always made thread-safe because the data is guarded. Often, this isn't enough to ensure thread safety. Order of operations on protected data often matters, so the locking is really scope-oriented, not data-oriented. You could get around this in your model by guarding a dummy object and wrapping your guard object in a temporary scope, but then why not just use one of the existing mutex implementations?
Really, it's not a bad approach, but you need to make sure its intended use is understood.
I'm writing a game engine (for fun), and have a lot of threads running concurrently. I have a class which holds an instance of another class as a private variable, which in turn holds an instance of a different class as a private variable. My question is: which of these classes should I strive to make thread-safe?
Do I make all of them thread-safe and have each of them protect its data with a mutex, or do I make just one of them thread-safe and assume that anybody using my code must understand that the underlying classes aren't inherently thread-safe?
Example:
// C must be defined before B, and B before A, since each is held by value:
class C {
    // data
};

class B {
private:
    C c;
};

class A {
private:
    B b;
};
I understand that I need to keep every class's data from being corrupted by a data race; however, I would like to avoid throwing a ton of mutexes at every single method of every class. I'm not sure what the proper convention is.
You almost certainly don't want to try to make every class thread-safe, since doing so would end up being very inefficient (with lots of unnecessary locking and unlocking of mutexes for no benefit) and also prone to deadlocks (the more mutexes you have to lock at once, the more likely you are to have different threads locking sequences of mutexes in different orders, which is the entry condition for a deadlock, and therefore for your program freezing up on you).
What you want to do instead is figure out which data structures need to be accessed by which thread(s). When designing your data structures, you want to try to design them in such a way that the amount of data shared between threads is as small as possible; if you can reduce it to zero, then you don't need to do any serialization at all! (You probably won't manage that, but if you do a CSP/message-passing design you can get pretty close, in that the only mutexes you ever need to lock are the ones protecting your message-passing queues.)
Keep in mind also that your mutexes are there not just to "protect the data" but also to allow a thread to make a series of changes appear atomic from the viewpoint of the other threads that might access that data. That is, if your thread #1 needs to make changes to objects A, B, and C, and all three of those objects each have their own mutex, which thread #1 locks before modifying the object and then unlocks afterwards, you can still have a race condition, because thread #2 might "see" the update half-completed (i.e. thread #2 might examine the objects after you've updated A but before you've updated B and C). Therefore you usually need to push your mutexes up to a level where they cover all the objects you might need to change in one go; in the ABC example case, that means you might want to have a single mutex that is used to serialize access to A, B, and C.
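A sketch of that "push the mutex up" idea, with one mutex serializing the whole A/B/C update (the types here are placeholders):
#include <mutex>

struct A { int x = 0; };
struct B { int y = 0; };
struct C { int z = 0; };

std::mutex abcMutex;   // one lock covering all three related objects
A a; B b; C c;

void updateAll(int v)
{
    std::lock_guard<std::mutex> lk(abcMutex);
    // All three changes become visible to other threads as one atomic step:
    a.x = v;
    b.y = v + 1;
    c.z = v + 2;
}   // no other thread can observe a half-completed update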
One way to approach it would be to start with just a single global mutex for your entire program: any time any thread needs to read or write any data structure that is accessible to other threads, that is the mutex it locks (and unlocks afterwards). That design probably won't be very efficient (since threads might spend a lot of time waiting for the mutex), but it will definitely not suffer from deadlock problems. Then once you have that working, you could look to see if that single mutex is actually a noticeable performance bottleneck for you. If not, you're done; ship your program :)
OTOH if it is a bottleneck, you can then analyze which of your data structures are logically independent from each other, and split your global mutex into two mutexes: one to serialize access to subset A of the data structures, and another one to serialize access to subset B. (Note that the subsets don't need to be of equal size; subset B might contain just one particular data structure that is critical to performance.) Repeat as necessary until either you're happy with performance, or your program starts to get too complicated or buggy (in which case you might want to dial the mutex granularity back again a bit in order to regain your sanity).
I am wondering why we need both std::promise and std::future. Why did the C++11 standard divide get and set_value into the two separate classes std::future and std::promise?
In the answer to this post, it is mentioned that:
The reason it is separated into these two separate "interfaces" is to
hide the "write/set" functionality from the "consumer/reader".
I don't understand the benefit of hiding here. Wouldn't it be simpler if we had only one class, "future"? For example, promise.set_value could be replaced by future.set_value.
The problem that promise/future exist to solve is to shepherd a value from one thread to another. It may also transfer an exception instead.
So the source thread must have some object that it can talk to, in order to send the desired value to the other thread. Alright... who owns that object? If the source has a pointer to something that the destination thread owns, how does the source know if the destination thread has deleted the object? Maybe the destination thread no longer cares about the value; maybe something changed such that it decided to just drop your thread on the floor and forget about it.
That's entirely legitimate code in some cases.
So now the question becomes: why shouldn't the source own the promise and simply give the destination a pointer/reference to it? Well, there's a problem with that: the promise would be owned by the source thread. Once the source thread terminates, the promise would be destroyed, leaving the destination thread with a reference to a destroyed promise.
Oops.
Therefore, the only viable solution is to have two full-fledged objects: one for the source and one for the destination. These objects share ownership of the value that gets transferred. Of course, that doesn't mean that they couldn't be the same type; you could have something like shared_ptr<promise> or somesuch. After all, promise/future must have some shared storage of some sort internally, correct?
However, consider the interface of promise/future as they currently stand.
promise is non-copyable. You can move it, but you can't copy it. future is also non-copyable, but a future can become a shared_future that is copyable. So you can have multiple destinations, but only one source.
promise can only set the value; it can't even get it back. future can only get the value; it cannot set it. Therefore, you have an asymmetric interface, which is entirely appropriate to this use case. You don't want the destination to be able to set the value and the source to be able to retrieve it. That's backwards code logic.
So that's why you want two objects. You have an asymmetric interface, and that's best handled with two related but separate types and objects.
I would think of a promise/future as an asynchronous queue (that's only intended to hold a single value).
The future is the read end of the queue. The promise is the write end of the queue.
The use of the two is normally distinct: the producer normally just writes to the "queue", and the consumer just reads from it. Although, as you've noted, it's possible for a producer to read the value, there's rarely much reason for it to do that, so optimizing that particular operation is rarely seen as much of a priority.
In the usual scheme of things, the producer produces the value, and puts it into the promise. The consumer gets the value from the future. Each "client" uses one simple interface dedicated exclusively to one simple task, so it's easier to design and document the code, as well as ensuring that (for example) the consumer code doesn't mess with something related to producing the value (or vice versa). Yes, it's possible to do that, but enough extra work that it's fairly unlikely to happen by accident.
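A minimal C++11 illustration of that split, where the producer only ever holds the write end and the consumer only the read end:
#include <future>
#include <iostream>
#include <thread>

void produce(std::promise<int> pr)   // the producer sees only the promise
{
    pr.set_value(42);                // it can set the value, never get it
}

int main()
{
    std::promise<int> p;
    std::future<int> f = p.get_future();          // read end stays with the consumer

    std::thread producer(produce, std::move(p));  // write end moves to the producer

    std::cout << f.get() << '\n';    // blocks until the value (or an exception) arrives
    producer.join();
}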
I am currently running into some design problems regarding concurrent programming in C++, and I was wondering if you could help me out:
Assume that some function func operates on some object obj, and that it is necessary to hold a lock (which might be a member variable of obj) during these operations. Now assume that func calls a subfunction func_2 while it holds the lock, so func_2 operates on an object which is already locked. But what if I also want to call func_2 from somewhere else, without holding the lock? Should func_2 lock obj or should it not? I see three possibilities:
1. I could pass a bool to func_2 indicating whether or not locking is required. This seems to introduce a lot of boilerplate code, though.
2. I could use a recursive lock and just always lock obj in func_2. Recursive locks seem to be problematic, though; see here.
3. I could assume that every caller of func_2 already holds the lock. I would have to document this and perhaps enforce it (at least in debug mode). Is it reasonable for functions to make assumptions regarding which locks are or are not held by the calling thread? More generally, how do I decide, from a design perspective, whether a function should lock obj itself and when it should assume that obj is already locked? (Obviously, a function that assumes certain locks are held can only call functions which make at least equally strong assumptions, but apart from that?)
My question is the following: Which one of these approaches is used in practice and why?
Thanks in advance
hfhc2
1. Passing an indicator of whether to lock or not:
You give the choice of locking to the caller. This is error-prone:
the caller might not make the right choice
the caller needs to know implementation details about your object, thus breaking the principle of encapsulation
the caller needs access to the mutex
if you have several objects, you eventually create the conditions for deadlock
2. Recursive lock:
You already highlighted the issue.
3. Pass locking responsibility to caller:
Among the different alternatives that you propose, this seems the most consistent. Contrary to option 1, you don't give the caller a choice; you pass it complete responsibility for locking. It's part of the contract for using func_2.
You could even assert that a lock is held on the object, to prevent mistakes (although the check would be limited, because you would not necessarily be in a position to verify who owns the lock).
4. Reconsider your design:
If you need to ensure in func_2 that the object is locked, it means that you have a critical section therein that you must protect. Chances are that both functions need to lock because they perform some lower-level operations on obj and need to prevent data races on an unstable state of the object.
I'd strongly advise looking at whether it would be feasible to extract these lower-level routines from both func and func_2 and encapsulate them in simpler primitive functions on obj. This approach could also lead to locking for shorter sequences, thus increasing the opportunity for real concurrency.
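A sketch of what that refactoring, combined with option 3's contract, might look like (all names hypothetical):
#include <mutex>

class Obj {
public:
    void func() {
        std::lock_guard<std::mutex> lk(m);
        step_unlocked();       // shared low-level work, lock already held
        func_2_unlocked();     // no second lock attempt
    }
    void func_2() {            // public variant for callers without the lock
        std::lock_guard<std::mutex> lk(m);
        func_2_unlocked();
    }

private:
    // Contract: m must be held by the calling thread.
    void func_2_unlocked() { step_unlocked(); }
    void step_unlocked()   { ++state; }   // the actual critical section

    std::mutex m;
    int state = 0;
};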
Ok, just as another follow-up: I recently read the API documentation of glib, in particular the section about message-passing queues. I found that most functions operating on these queues come in two variants, named function and function_unlocked. The idea is that if a programmer wants to execute a single operation, like popping from the queue, this can be done using g_async_queue_pop(); the function automatically takes care of locking/unlocking the queue. However, if the programmer wants, for instance, to pop two elements without interruption, the following sequence may be used:
GAsyncQueue *queue = g_async_queue_new();
// ...
g_async_queue_lock(queue);          // hold the queue's lock across both pops
g_async_queue_pop_unlocked(queue);
g_async_queue_pop_unlocked(queue);
g_async_queue_unlock(queue);
This resembles my third approach. Here, too, assumptions are made regarding the state of certain locks; they are required by the API and documented accordingly.
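The same convention is easy to mirror in C++. A sketch of a queue-like wrapper exposing both variants (illustrative only, not a real glib binding; empty-queue handling is omitted):
#include <mutex>
#include <queue>

template <typename T>
class AsyncQueue {
public:
    void lock()   { m.lock(); }
    void unlock() { m.unlock(); }

    // Locked convenience variant, like g_async_queue_pop().
    T pop() {
        std::lock_guard<std::mutex> lk(m);
        return pop_unlocked();
    }

    // Caller must hold the lock, like g_async_queue_pop_unlocked().
    T pop_unlocked() {
        T v = q.front();
        q.pop();
        return v;
    }

private:
    std::mutex m;
    std::queue<T> q;
};

Popping two elements without interruption then mirrors the glib sequence above: lock(), two calls to pop_unlocked(), unlock().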
What is the best way of doing the following in C++? Whilst my current method works, I'm not sure it's the best way to go:
1) I have a master class that has some function in it
2) I have a thread that takes some instructions on a socket and then runs one of the functions in the master class
3) There are a number of threads that access various functions in the master class
I create the master class and then create instances of the thread classes from the master. The constructor of the thread class gets passed the "this" pointer of the master. I can then run functions from the master class inside the threads, i.e. I get a command to do something, which runs a function of the master class from the thread. I have mutexes etc. to prevent race problems.
Am I going about this the wrong way? It kinda seems like the thread classes should inherit from the master class; another approach would be to have no separate thread classes at all and just make them functions of the master class, but that gets ugly.
Sounds good to me. In my servers, it is called 'SCB' (ServerControlBlock) and provides access to services like the IOCP buffer/socket pools, logger, UI access for status/error messages, and anything else that needs to be common to all the handler threads. Works fine and I don't see it as a hack.
I create the SCB, (and ensure in the ctor that all services accessed through it are started and ready for use), before creating the thread pool that uses the SCB - no nasty singletonny stuff.
Rgds,
Martin
Separate thread classes are pretty normal, especially if they have specific functionality. I wouldn't inherit them from the master class, though.
Passing the this pointer to threads is not, in itself, bad. What you do with it can be.
The this pointer is just like any other POD-ish data type: it's just a chunk of bits. The stuff that this points to might be more than PODs, however, and passing what is in effect a pointer to its members can be dangerous for all the usual reasons. Any time you share anything across threads, it introduces potential race conditions and deadlocks. The elementary means to resolve those conflicts is, of course, to introduce synchronization in the form of mutexes, semaphores, etc., but this can have the surprising effect of serializing your application.
Say you have one thread reading data from a socket and storing it to a synchronized command buffer, and another thread which reads from that command buffer. Both threads use the same mutex, which protects the buffer. All is well, right?
Well, maybe not. Your threads could become serialized if you're not very careful with how you lock the buffer. Presumably you created separate threads for the buffer-insert and buffer-remove code so that they could run in parallel. But if you lock the buffer on each insert and each remove, then only one of those operations can execute at a time. As long as you're writing to the buffer, you can't read from it, and vice versa.
You can try to fine-tune the locks so that they are as brief as possible, but so long as you have shared, synchronized data, you will have some degree of serialization.
Another approach is to hand data off to another thread explicitly, and remove as much data sharing as possible. Instead of writing to and reading from a buffer as in the above, for example, your socket code might create some kind of Command object on the heap (e.g. Command* cmd = new Command(...);) and pass that off to the other thread. (One way to do this on Windows is via the QueueUserAPC mechanism.)
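A portable C++11 sketch of such a hand-off, using a small synchronized queue rather than QueueUserAPC (Command is a stand-in type):
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Command { int opcode = 0; };   // stand-in for the real command type

class CommandQueue {
public:
    // Socket thread: hand the command off; ownership transfers with it.
    void post(std::unique_ptr<Command> cmd) {
        {
            std::lock_guard<std::mutex> lk(m);
            q.push(std::move(cmd));
        }                             // release the lock before waking the consumer
        cv.notify_one();
    }

    // Worker thread: block until a command arrives, then own it outright.
    std::unique_ptr<Command> take() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        std::unique_ptr<Command> cmd = std::move(q.front());
        q.pop();
        return cmd;
    }

private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::unique_ptr<Command>> q;
};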
There are pros and cons to both approaches. The synchronization method has the benefit of being somewhat simpler to understand and implement on the surface, but the potential drawback of being much more difficult to debug if you mess something up. The hand-off method can make many of the problems inherent in synchronization impossible (thereby actually making it simpler), but it takes time to allocate memory on the heap.
I am new to the multithreading world and am starting to get into it. I found that threading requires synchronization, and that volatile is no longer a reliable thing. I would like to know whether synchronization objects are cacheable by the compiler or at any other stage.
Platform/languages used: C++, Win32, Windows
In C++, the volatile keyword is used for objects which cannot be cached by CPUs. But today's compilers do not strictly follow this. Is there another way to make synchronization objects non-cacheable (or to ensure that other optimizations are not applied to those objects)?
tl;dr: Are synchronization objects cacheable? If yes, how can you make them non-cacheable?
I'm not sure I follow your question: compiler cache has almost nothing to do with multithreading. The only thing that a compiler cache would do is to increase your compilation speed by caching previous compilations.
Synchronization objects can be "cached" since they're any arbitrary object that you've decided to use for synchronization, but that has little effect on concurrency. The only thing that you need to care about when synchronizing is that when you have multiple threads contending for a resource, they must all synchronize on the same object in order to get read/write access to the resource.
I'm going to take a wild guess, based on your mentioning of volatile, and assume that you're worried a synchronization object may be cached in a thread's local cache and changes to the synchronization object from one thread may not be visible to another thread. This, however, is a flawed idea:
When you call lock() or synchronize() (depending on the language), all you need to care about is that the lock is performed on the same object regardless of the internal state of the object.
Once you've acquired a lock on an object, any resource that you're modifying within that lock scope will be modified by only one thread.
Generally, you should use a synchronization object that will not change (ideally one that is readonly, const or final), and we're only talking about the reference here, not the content of the object itself. Here is an example:
object sync = new object();
string something = "hello";

void ModifySomething()
{
    sync = new object(); // <-- YOU SHOULD NEVER DO THIS!!
    lock (sync)
    {
        something = GenerateRandomString();
    }
}
Now notice that every time a thread calls ModifySomething, the synchronization object will be replaced by a new object and the threads will never synchronize on the same object, so there may be concurrent writes to something.
The question doesn't make much sense without specifying a run-time environment.
In the case of Java, say, a synchronization object (an object used for synchronization) is just like any other object. The object is the target of the synchronization, so volatile (which applies to member variables) is only needed if the variable containing the synchronization object can change. I would avoid a program design that needs such constructs.
Remember (again, in Java) that it is the evaluation of an expression, generally a variable access, that yields the synchronization object to use. This evaluation is no different from any other in this respect.
At the end of the day, however, it is just using the synchronization tools of a particular run-time/environment in a manner in which they are well-defined and well-behaving.
Happy coding.
Java, for instance, guarantees that synchronized(x) { foo }, where x is a particular object, will create a mutually exclusive critical region in which foo is executed. To do this it must do internal work to ensure the book-keeping data is flushed correctly across all processors/caches. However, these details are generally outside the scope of the run-time in terms of using the synchronization construct.
Synchronization objects are necessarily managed by the OS, which also manages threads and caches. Therefore, it's the OS's responsibility to deal with caches. If it somehow knows that a synchronization object is used only on a single CPU (e.g. because it didn't allocate the second CPU to your process), the OS may very well decide to keep the synchronization object in the first CPU's cache. If it needs to be shared across CPUs, then that will happen too.
One practical consequence is that you must always initialize synchronization objects. In C++ that's natural (the constructor takes care of it), but in other languages you must do so explicitly, since the OS has to keep track of the synchronization objects.