What is the difference between event objects and condition variables?
I am asking in context of WIN32 API.
Event objects are kernel-level objects. They can be shared across process boundaries, and are supported on all Windows OS versions. They can be used as their own standalone locks to shared resources, if desired. Since they are kernel objects, the OS has limitations on the number of available events that can be allocated at a time.
Condition Variables are user-level objects. They cannot be shared across process boundaries, and are only supported on Vista/2008 and later. They do not act as their own locks, but require a separate lock object to be associated with them, such as a critical section. Since they are user-mode objects, the number of available condition variables is limited only by available memory. When a Condition Variable is put to sleep, it automatically releases the specified lock object so another thread can acquire it. When the Condition Variable wakes up, it automatically re-acquires the specified lock object.
In terms of functionality, think of a Condition Variable as a logical combination of two objects working together: a keyed event and a lock object. When the Condition Variable is put to sleep, it resets the event, releases the lock, waits for the event to be signaled, and then re-acquires the lock. For instance, if you use a critical section as the lock object, SleepConditionVariableCS() is similar to a sequence of calls to ResetEvent(), LeaveCriticalSection(), WaitForSingleObject(), and EnterCriticalSection(). Whereas if you use a slim reader/writer (SRW) lock as the lock, SleepConditionVariableSRW() is similar to a sequence of calls to ResetEvent(), ReleaseSRWLock...(), WaitForSingleObject(), and AcquireSRWLock...().
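To make that release/re-acquire hand-off concrete, here is a minimal sketch of the usual pattern with a critical-section-based condition variable; the names (g_cs, g_cv, g_ready, Consumer, Producer) are invented for illustration:

#include <windows.h>

// Illustrative globals; InitializeCriticalSection(&g_cs) and
// InitializeConditionVariable(&g_cv) must be called once before use.
CRITICAL_SECTION   g_cs;
CONDITION_VARIABLE g_cv;
bool               g_ready = false;

void Consumer()
{
    EnterCriticalSection(&g_cs);
    while (!g_ready)   // loop guards against spurious wake-ups
    {
        // Atomically releases g_cs, sleeps until woken, then re-acquires g_cs.
        SleepConditionVariableCS(&g_cv, &g_cs, INFINITE);
    }
    // ... use the shared state while still holding g_cs ...
    LeaveCriticalSection(&g_cs);
}

void Producer()
{
    EnterCriticalSection(&g_cs);
    g_ready = true;
    LeaveCriticalSection(&g_cs);
    WakeConditionVariable(&g_cv);   // or WakeAllConditionVariable(&g_cv)
}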
They are very similar, but event objects work across process boundaries, whereas condition variables do not. From the MSDN documentation on condition variables:
Condition variables are user-mode objects that cannot be shared across processes.
From the MSDN documentation on event objects:
Threads in other processes can open a handle to an existing event object by specifying its name in a call to the OpenEvent function.
The most significant difference is that an Event object is a kernel object and can be shared across processes, as long as it is still alive when processes/threads try to acquire it. A Condition Variable, by contrast, is a user-mode object which is lightweight (it is only the size of a pointer and has nothing additional to release after use) and has better performance.
Typically, a condition variable is used along with a lock, since we need to keep the shared data properly synchronized. A Condition Variable can be thought of as a keyed event, a mechanism that was improved in Vista.
Joe Duffy has a blog post, http://joeduffyblog.com/2006/11/28/windows-keyed-events-critical-sections-and-new-vista-synchronization-features/, that explains this in more detail.
Related
I'm trying to implement some sort of waiting on many CONDITION_VARIABLE.
The answers here (and in many other places on the internet) imply that WaitForMultipleObjects and the like are valid options when dealing with the Windows API, but it appears that this is not the case.
First of all, nowhere in the MSDN documentation is it written that a Windows condition variable is a valid argument for the WaitFor... functions.
Second, it appears that WaitFor... only accepts a HANDLE as argument, which is basically a handle to a kernel object, but PCONDITION_VARIABLE is not really a HANDLE.
Finally, trying to use a condition variable (both as a PCONDITION_VARIABLE and via the undocumented CONDITION_VARIABLE::Ptr member) makes the functions return error code 6 (invalid handle).
For example:

#include <windows.h>
#include <iostream>

int main() {
    CONDITION_VARIABLE cv;
    InitializeConditionVariable(&cv);
    auto res = WaitForSingleObject(cv.Ptr, INFINITE); // returns immediately with WAIT_FAILED
    if (res != WAIT_OBJECT_0) {
        auto ec = GetLastError();
        std::cout << ec << "\n"; // prints 6 (ERROR_INVALID_HANDLE)
    }
}
so, can you really wait on a condition variable or it's just an urban legend?
I don't think so and it doesn't make any sense.
First of all, the WaitForXxx functions operate (mostly) on dispatcher objects - a subset of kernel objects, including timers, events, mutexes, semaphores, threads, and processes (and a few internal object types like KGATEs and KQUEUEs, but not access tokens or file mapping objects), that have a DISPATCHER_HEADER. They certainly won't work on user-mode constructs that the kernel is unaware of.
Second, note that when you sleep ("wait") on a condition variable you have to specify whether it is a critical-section-based condition variable or an SRW-lock-based condition variable by using the correct function - either SleepConditionVariableCS or SleepConditionVariableSRW. So again, Windows (not only the kernel) has no idea what kind of condition variable you're passing it, but it needs this information to operate correctly. Since you don't provide this information to WaitForXxx, it follows that they cannot be used with condition variables.
The simple answer to your question is no. You cannot use the WaitForXxx functions with the condition variables provided by the Windows synchronization APIs. From the linked documentation:
Condition variables are synchronization primitives that enable threads to wait until a particular condition occurs. Condition variables are user-mode objects that cannot be shared across processes.
The WaitForXxx functions accept parameters of the generic HANDLE type, which represents a handle to a kernel object. Condition variables are user-mode objects, not kernel objects, so you cannot use them with these functions, since they work only with kernel objects.
Moreover, the documentation for these functions is pretty explicit about which types of objects they can wait on, and condition variables are not on that list. For instance, WaitForMultipleObjects says:
The WaitForMultipleObjects function can specify handles of any of the following object types in the lpHandles array:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
They all have the same list, so no confusion there.
Technically speaking (and we're diving into undocumented implementation details here, so you shouldn't rely on this as gospel), the Win32 WaitForSingleObject and WaitForMultipleObjects functions are built upon the KeWaitForSingleObject and KeWaitForMultipleObjects functions provided by the kernel subsystem. You can divide the objects supported by the kernel into three basic categories: dispatcher objects, I/O objects/data structures, and everything else. The first category, dispatcher objects, are the lowest level objects and they are all represented using the same DISPATCHER_HEADER data structure in their bodies. Dispatcher objects are the only types of objects that are "waitable". It is this DISPATCHER_HEADER structure that makes an object waitable, by definition. If the object is represented using this data structure, then it can be passed to the kernel synchronization functions. Thus, the same rules would apply to the Win32 functions.
This entire question seems to be based around a single statement that Managu makes in his answer: "Windows has WaitForMultipleObjects as aJ posted, which could be a solution if you're willing to restrict your code to Windows synchronization primitives." Perhaps he doesn't consider condition variables (as they are implemented by Windows) to be synchronization primitives, or perhaps he is just wrong. aJ's answer, to which he refers, is pretty clear about stating that WaitForMultipleObjects is used "to wait for multiple kernel objects," and we have already established that condition variables are not kernel objects. Either way, I don't see any evidence for an "urban legend" that you can do this.
Obviously you cannot use the WaitForXxx family of functions with boost::condition_variable, or std::condition_variable, or anything else. I'm sure you already knew that, but your question has confused some people because it links to a question that refers to the Boost implementation.
It is not especially clear to me why you would need to wait on multiple condition variables simultaneously. I guess you could write your own implementation of condition variables, based on the classic Win32 synchronization primitives, such as mutexes, which you can then wait on with WaitForMultipleObjects. You can probably find examples of such code online, since condition variables did not become part of the operating system until Vista. For example, this article discusses strategies for implementing condition variables in Windows as they are defined by the POSIX Pthreads specification. You could also look into using Event Objects.
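As a concrete illustration of the event-object route, here is a hedged sketch (the event handles and the signaling pattern are invented for the example) that returns as soon as any one of several events fires:

#include <windows.h>
#include <iostream>

int main()
{
    // Two auto-reset events standing in for two "conditions" we want to wait on.
    HANDLE events[2];
    events[0] = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    events[1] = CreateEventW(nullptr, FALSE, FALSE, nullptr);

    // Normally another thread would signal one of these; done here so the
    // example terminates.
    SetEvent(events[1]);

    // bWaitAll = FALSE: return as soon as ANY of the handles is signaled.
    DWORD res = WaitForMultipleObjects(2, events, FALSE, INFINITE);
    if (res < WAIT_OBJECT_0 + 2)
        std::cout << "event " << (res - WAIT_OBJECT_0) << " was signaled\n";

    CloseHandle(events[0]);
    CloseHandle(events[1]);
    return 0;
}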
I am new to threading. Correct me if I am wrong: a mutex locks access to a shared data structure so that it cannot be used by other threads until it is unlocked. So, let's consider that there are two or more shared data structures. Should I make different mutex objects for different data structures? If not, how will std::mutex know which object it should lock? And what if I have to lock more than one object at the same time?
There are several points in your question that can be made more precise. Perhaps clearing this will solve things for you.
To begin with, a mutex, by itself, does not lock access to anything. It is basically something that your code can lock and unlock, and some "magic" ensures that only one thread can lock it at a time.
If, by convention, you decide that any code accessing some data structure foo will first begin by locking a mutex foo_mutex, then it will have the effect of protecting this data structure.
So, having said that, regarding your questions:
It depends on whether the two data structures need to be accessed together or not (e.g., can updating one without the other leave the system in an inconsistent state). If so, you should lock them with a single mutex. If not, you can improve parallelism by using two.
The mutex does not lock anything. It is you who decide by convention whether you can access 1, 2, or a million data structures, while holding it.
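For example, here is a minimal sketch, assuming two hypothetical structures (inventory and orders), each paired with its own std::mutex by convention, and std::scoped_lock used when both must be held at once:

#include <mutex>
#include <vector>

// Hypothetical shared structures, each protected by its own mutex by convention.
std::vector<int> inventory;  std::mutex inventory_mutex;
std::vector<int> orders;     std::mutex orders_mutex;

void add_item(int id)
{
    std::lock_guard<std::mutex> lock(inventory_mutex);  // touches inventory only
    inventory.push_back(id);
}

void move_last_item_to_orders()
{
    // Both structures must stay consistent, so take both locks.
    // std::scoped_lock (C++17) acquires them deadlock-free; pre-C++17,
    // std::lock(inventory_mutex, orders_mutex) does the same job.
    std::scoped_lock lock(inventory_mutex, orders_mutex);
    if (!inventory.empty())
    {
        orders.push_back(inventory.back());
        inventory.pop_back();
    }
}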
If you always need to access both structures together, then they can be considered a single resource, so only a single lock is needed.
If you sometimes, even just once, need to access one of the structures independently, then they can no longer be considered a single resource and you might need two locks. Of course, a single lock could still be sufficient, but then that lock would cover both resources at once, prohibiting other threads from accessing either structure.
A mutex does not "know" anything other than itself; the lock is performed on the mutex itself.
If there are two objects (or pieces of code) that need synchronized access (but can be accessed at the same time) then you have the liberty to use just one mutex for both or one for each. If you use one mutex they will not be accessed at the same time from two different threads.
If it can never happen that one object must be accessed while the other is being accessed, then you can use two mutexes, one for each. But if one object must sometimes be accessed while the thread already holds the other mutex, then care must be taken that the code can never reach a deadlock, where two threads each hold one mutex and both wait for the other mutex to be released.
On a platform that only has events[1], mutexes, and semaphores[2], can I create a fair "wait on multiple events" implementation that returns when any of the events[3] is signaled/set? I'm assuming the existing primitives are fair.
[1] Event is a "flag" that has 4 ops: Set(), Clear(), Wait(), and WaitAndClear(). If you Wait() on an unset event, you block until someone Set()'s it. WaitAndClear() is what it sounds like, but atomic. All waiters are awoken.
[2] I do not believe the system supports semaphores values going negative.
[3] I say "events", but it could be a new object type that uses any of those primitives.
For Windows, WaitForMultipleObjects with the third parameter (bWaitAll) set to FALSE should work (it also includes a timeout option). I've also seen a similar wait function implemented for a small in-house kernel used in an x86 (80186) embedded device. For an in-house kernel, if the maximum number of threads is fixed, then each event, semaphore, ..., can have an array of task control block addresses for any threads pending on that object. Another option is to make a rule that only one thread can wait on any given event, semaphore, ... (each object holds either null or the address of a single pending task control block), and in the case where multiple threads need to be triggered, multiple events or semaphores would be used.
You need one of the following:
A non-blocking event tester
A ready-made primitive, e.g. WaitForMultipleObjects
One thread per waited object, plus some overhead (see the sketch below)
If you can't have one of those, I don't think it's doable.
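Here is a minimal sketch of the "one thread per waited object" option, using Win32 events and std::thread as stand-ins for the platform's primitives; the names and the shutdown strategy are illustrative only:

#include <windows.h>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int N = 3;
    std::vector<HANDLE> sources;
    for (int i = 0; i < N; ++i)
        sources.push_back(CreateEventW(nullptr, TRUE, FALSE, nullptr)); // manual-reset

    HANDLE anySignaled = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    std::atomic<int> winner(-1);

    // One helper thread per waited object: each blocks on its own event and
    // forwards the wake-up to the shared "any" event.
    std::vector<std::thread> helpers;
    for (int i = 0; i < N; ++i)
        helpers.emplace_back([&, i] {
            WaitForSingleObject(sources[i], INFINITE);
            int expected = -1;
            winner.compare_exchange_strong(expected, i); // first signaler wins
            SetEvent(anySignaled);
        });

    SetEvent(sources[1]);                         // simulate one source firing
    WaitForSingleObject(anySignaled, INFINITE);
    std::printf("source %d was signaled first\n", winner.load());

    for (int i = 0; i < N; ++i) SetEvent(sources[i]); // release the other helpers
    for (auto& t : helpers) t.join();
    for (HANDLE h : sources) CloseHandle(h);
    CloseHandle(anySignaled);
    return 0;
}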
When double-buffering data that's due to be shared between threads, I've used a system where one thread writes to one buffer while the other thread reads from the other buffer, and then the buffers are swapped. The trouble is, how am I going to implement the pointer swap? Do I need to use a critical section? There's no Interlocked function available that will actually swap two values. I can't have thread one reading from buffer one and then suddenly find itself reading from buffer two in the middle of a read; that would be an appcrash, even if the other thread didn't then begin writing to it.
I'm using native C++ on Windows in Visual Studio Ultimate 2010 RC.
Using critical sections is the accepted way of doing it. Just share a CRITICAL_SECTION object between all your threads and call EnterCriticalSection and LeaveCriticalSection on that object around your pointer manipulation/buffer reading/writing code. Try to finish your critical sections as soon as possible, leaving as much code outside the critical sections as possible.
Even if you use the double interlocked exchange trick, you still need a critical section or something to synchronize your threads, so might as well use it for this purpose too.
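For illustration, a hedged sketch of the critical-section approach: one shared CRITICAL_SECTION guards both the pointer swap and every buffer access (all names are invented for the example):

#include <windows.h>

// Hypothetical double-buffer state.
static CRITICAL_SECTION g_bufferLock;   // InitializeCriticalSection once at startup
static int  g_bufferA[256];
static int  g_bufferB[256];
static int* g_readBuffer  = g_bufferA;
static int* g_writeBuffer = g_bufferB;

void SwapBuffers()
{
    EnterCriticalSection(&g_bufferLock);
    int* tmp      = g_readBuffer;
    g_readBuffer  = g_writeBuffer;
    g_writeBuffer = tmp;
    LeaveCriticalSection(&g_bufferLock);
}

void WriteSample(int index, int value)
{
    // Hold the lock across the buffer access so a swap cannot happen mid-write.
    EnterCriticalSection(&g_bufferLock);
    g_writeBuffer[index] = value;
    LeaveCriticalSection(&g_bufferLock);
}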
This sounds like a reader-writer-mutex type problem to me.
[ ... but I mostly do embedded development so this may make no sense for a Windows OS.
Actually, in an embedded OS with a priority-based scheduler, you can do this without any synchronization mechanism at all, if you guarantee that the swap is atomic and only allow the lower-priority thread to swap the buffers. ]
Suppose you have two buffers, B1 and B2, and you have two threads, T1 and T2. It's OK if T1 is using B1 while T2 is using B2. By "using" I mean reading and/or writing the buffer. Then at some time, the buffers need to swap so that T1 is using B2 and T2 is using B1. The thing you have to be careful of is that the swap is done while neither thread is accessing its buffer.
Suppose you used just one simple mutex. T1 could acquire the mutex and use B1. If T2 wanted to use B2, it would have to wait for the mutex. When T1 completed, T2 would unblock and do its work with B2. If either thread (or some third-party thread) wanted to swap the buffers, it would also have to take the mutex. Thus, using just one mutex serializes access to the buffers -- not so good.
It might work better if you use a reader-writer mutex instead. T1 could acquire a read-lock on the mutex and use B1. T2 could also acquire a read-lock on the mutex and use B2. When one of those threads (or a third-party thread) decides it's time to swap the buffers, it would have to take a write-lock on the mutex. It won't be able to acquire the write-lock until there are no more read-locks. At that point, it can swap the buffer pointers, knowing that nobody is using either buffer because when there is a write-lock on the mutex, all attempts to read-lock will block.
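A minimal sketch of that reader-writer idea using a Win32 slim reader/writer lock (SRWLOCK); the buffer names and layout are assumptions for the example:

#include <windows.h>

static SRWLOCK g_swapLock = SRWLOCK_INIT;
static int  g_bufA[256], g_bufB[256];
static int* g_forThread1 = g_bufA;
static int* g_forThread2 = g_bufB;

void UseOwnBuffer_Thread1()
{
    AcquireSRWLockShared(&g_swapLock);     // many threads may hold this at once
    // ... read/write g_forThread1 freely; a swap cannot happen while we hold it ...
    ReleaseSRWLockShared(&g_swapLock);
}

void SwapBuffers()
{
    AcquireSRWLockExclusive(&g_swapLock);  // waits until no shared holders remain
    int* tmp     = g_forThread1;
    g_forThread1 = g_forThread2;
    g_forThread2 = tmp;
    ReleaseSRWLockExclusive(&g_swapLock);
}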
You have to build your own function to swap the pointers, using a semaphore or critical section to control it. The same protection needs to be added to all users of the pointers, since any code that reads a pointer while it is in the middle of being modified is bad.
One way to manage this is to have all the pointer manipulation logic work under the protection of the lock.
Why can't you use InterlockedExchangePointer?
Edit: OK, I get what you are saying now. InterlockedExchangePointer (IEP) doesn't actually swap two live pointers with each other, since it only takes a single value by reference.
See, I did originally design the threads so that they would be fully asynchronous and wouldn't require any synchronization in their regular operation. But since I'm performing operations on a per-object basis in a thread pool, if a given object is unreadable because it's currently being synced, I can just do another one while I'm waiting. In a sense, I can both wait and operate at the same time, since I have plenty of threads to go around.
Create two critical sections, one for each of the threads.
While rendering, hold the render crit section. The other thread can still do what it likes with the other crit section, though. Use TryEnterCriticalSection, and if it's already held, return false and add the object to a list to be re-rendered later. This should allow us to keep rendering even if a given object is currently being updated (see the sketch below).
While updating, hold both crit sections.
While doing game logic, hold the game logic crit section. If it's already held, that's no problem, because we have more threads than actual processors. So if this thread is blocked, then another thread will just use the CPU time and this doesn't need to be managed.
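A rough sketch of the TryEnterCriticalSection part of that scheme, with hypothetical GameObject and Render names:

#include <windows.h>
#include <vector>

struct GameObject
{
    CRITICAL_SECTION lock;   // initialized elsewhere with InitializeCriticalSection
    // ... render/game state ...
};

void Render(GameObject& obj);    // assumed to exist elsewhere

bool TryRender(GameObject& obj, std::vector<GameObject*>& retryList)
{
    if (!TryEnterCriticalSection(&obj.lock))
    {
        retryList.push_back(&obj);   // busy: queue it to be re-rendered later
        return false;
    }
    Render(obj);                     // safe: no update can run on this object now
    LeaveCriticalSection(&obj.lock);
    return true;
}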
You haven't mentioned what your Windows platform limitations are, but if you don't need compatibility with older versions than Windows Server 2003, or Vista on the client side, you can use the InterlockedExchange64() function to exchange a 64 bit value. By packing two 32-bit pointers into a 64-bit pair structure, you can effectively swap two pointers.
There are the usual Interlocked* variations on that: InterlockedExchangeAcquire64(), InterlockedCompareExchange64(), etc...
If you need to run on, say, XP, I'd go for a critical section. When the chance of contention is low, they perform quite well.
Do I need a mutex if I have only one reader and one writer? The reader takes the next command (food.front()) from the queue and executes a task based on the command. After the command is executed, it pops off the command. The writer to the queue pushes commands onto the queue (food.push()).
Do I need a mutex? My reader (consumer) only executes if food.size() > 0. I am using a reader thread and a sender thread.
A mutex is used in multi-threaded environments. I don't see mention of threads in your question, so I don't see a need for a mutex.
However, if we assume by reader and writer you mean you have two threads, you need to protect mutual data with a mutex (or other multi-threaded protection scheme.)
What happens when the queue has items, and the reader thread pops something off while the writer thread puts something on? Disaster! With a mutex, you'll be sure only one thread is operating on the queue at a time.
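As a minimal sketch of the mutex-protected approach (the Command type and function names mirror the question and are otherwise invented):

#include <mutex>
#include <optional>
#include <queue>

struct Command { int id; };

std::queue<Command> food;
std::mutex          food_mutex;   // one mutex guards every access to the queue

void writer_push(Command c)                 // producer thread
{
    std::lock_guard<std::mutex> lock(food_mutex);
    food.push(c);
}

std::optional<Command> reader_pop()         // consumer thread
{
    std::lock_guard<std::mutex> lock(food_mutex);
    if (food.empty())
        return std::nullopt;                // size check and pop under one lock
    Command c = food.front();
    food.pop();
    return c;
}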
Another method is a lock-free thread-safe queue. It would use atomic operations to ensure the data isn't manipulated incorrectly.
What happens if the reader sees the size is greater than zero, but the structure isn't yet completely updated?
That can be avoided by coding the updates very carefully, but the way to make the code resistant to future changes is to use a mutex.
Assuming the "writer" and the "reader" are in separate threads:
Most probably yes: you could have a "metastable" state between the "writing" event and a "reading" event where the pointers to the structures are not consistent.
Of course this depends on the implementation: if an atomic operation is used to update the pointers, you might be good without a mutex.
It depends entirely on the implementation: if you have two different threads accessing the same variables, you will need a mutex. Otherwise you may, for example, end up with an inconsistent count.
Say in write you do ++count and in read you do --count, and say the current value is 2. Note that these statements do not need to be atomic: the ++count may consist of reading the variable count, incrementing it, and then writing it back again. Now suppose a write and a read are executed simultaneously, and the first part of the write executes (i.e. it loads the value 2). Then the whole read executes, decrementing the count, but the other thread still has the value 2 loaded, which it increments and subsequently writes back to the variable. Now you have just lost a read action.
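A small demonstration of that lost update; note this is deliberately a data race (undefined behavior in C++), shown only to illustrate the problem. Making count a std::atomic<int>, or guarding it with a mutex, removes the race:

#include <iostream>
#include <thread>

int count = 0;   // plain int: ++/-- are not atomic

int main()
{
    std::thread writer([] { for (int i = 0; i < 1000000; ++i) ++count; });
    std::thread reader([] { for (int i = 0; i < 1000000; ++i) --count; });
    writer.join();
    reader.join();
    std::cout << "expected 0, got " << count << "\n"; // typically non-zero
    return 0;
}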
Your question depends on two conditions:
there are only two threads: one producer and one consumer
the structure is designed to be lock-free
If both are satisfied, you can drop the lock; otherwise you need a lock to protect the queue structure.
If you drop the lock, you must remember to update the head or tail pointer only as the final step (see the sketch below).
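For illustration, here is a sketch of the kind of lock-free, single-producer/single-consumer ring buffer this assumes; the publishing index (tail for push, head for pop) is updated last. The class name and layout are invented for the example:

#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscQueue
{
    T buffer_[N];
    std::atomic<std::size_t> head_{0};   // only the consumer writes this
    std::atomic<std::size_t> tail_{0};   // only the producer writes this

public:
    bool push(const T& value)            // producer thread only
    {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        std::size_t next = (tail + 1) % N;
        if (next == head_.load(std::memory_order_acquire))
            return false;                // full
        buffer_[tail] = value;
        tail_.store(next, std::memory_order_release);   // publish last
        return true;
    }

    bool pop(T& value)                   // consumer thread only
    {
        std::size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return false;                // empty
        value = buffer_[head];
        head_.store((head + 1) % N, std::memory_order_release); // publish last
        return true;
    }
};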