// locks a critical section, and unlocks it automatically
// when the lock goes out of scope
CAutoLock(CCritSec * plock)
The above is from wxutil.h. Does it lock access across different processes, or only between different threads in the same process?
Just across threads. From the doc of CAutoLock:
The CAutoLock constructor locks the critical section, ...
and CCritSec:
The CCritSec class provides a thread lock.
More explicitly, from the description of Critical Section Objects:
A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process.
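If you actually need to lock across processes on Windows, you would use a named mutex instead of a critical section. A minimal sketch (the mutex name here is made up):

#include <windows.h>

int main() {
    // Create (or open, if it already exists) a mutex visible to other processes.
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, L"MyAppSharedMutex");
    if (hMutex == nullptr) return 1;

    WaitForSingleObject(hMutex, INFINITE); // blocks across process boundaries
    // ... touch the cross-process shared resource here ...
    ReleaseMutex(hMutex);

    CloseHandle(hMutex);
    return 0;
}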
I have been looking to implement a solution for the readers-writers problem using the threading/synchronization constructs introduced since C++11.
I ran into this question; the most-voted answer has this code:
Reader Thread
// --- read code
rw_mtx.lock(); // will block if there is a write in progress
read_count += 1; // announce intention to read
rw_mtx.unlock();
cell_value = data_base[cell_number];
rw_mtx.lock();
read_count -= 1; // announce that the read is finished
if (read_count == 0) rw_write_q.notify_one();
rw_mtx.unlock();
Writer Thread
// --- write code
std::unique_lock<std::mutex> rw_lock(rw_mtx);
write_count += 1;
rw_write_q.wait(rw_lock, []{return read_count == 0;});
data_base[cell_number] = cell_value;
write_count -= 1;
if (write_count > 0) rw_write_q.notify_one();
In the writer thread, before writing to data_base[cell_number], shouldn't there be a memory barrier/fence to synchronize access to that shared memory? (Same for the reader thread.)
If you agree with the above (yay!), how can this be achieved? Looking to improve my understanding here.
Thanks for your help!
Memory barriers in high level programming languages are considered intrinsic to the behavior of the locks provided by the language. From Wikipedia's Memory Barrier page:
Multithreaded programs usually use synchronization primitives provided by a high-level programming environment, such as Java and .NET Framework, or an application programming interface (API) such as POSIX Threads or Windows API. ... In such environments explicit use of memory barriers is not generally necessary.
If you dig into the source code of pthread_mutex_lock, for example, you will see reliance on futex and atomic exchange functions, which would use a memory barrier.
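In C++ terms, std::mutex::lock() is an acquire operation and unlock() is a release operation, so any plain write made while holding the lock is guaranteed to be visible to the next thread that locks the same mutex; no explicit std::atomic_thread_fence is needed. A minimal sketch:

#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;   // plain int: no std::atomic, no explicit fences

void writer() {
    std::lock_guard<std::mutex> lock(m); // lock(): acquire
    shared_value = 42;                   // ordinary store, protected by the mutex
}                                        // unlock(): release

void reader() {
    std::lock_guard<std::mutex> lock(m);
    int v = shared_value;                // sees 42 if writer ran first
    (void)v;
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}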
Your comments seem to indicate that you do not understand why the code sample you pulled from the answer implements a readers-writer lock.
As mentioned in the answer you cited, the code you show has a fairness issue, in that a waiting writer may get starved out by a constant stream of readers. However, if we ignore that issue for now, let us first agree that:
1. No writer will enter the critical section if there is already at least one reader in the critical section.
This is because the condition variable waits for the reader count to reach zero. The way a condition variable works is that it atomically releases the mutex and blocks while the condition is false; when it is signaled, it wakes up and re-acquires the mutex. Upon acquiring the mutex, it re-tests the condition, and only continues to hold the mutex if the condition is now true.
2. When there are no readers, only one writer will enter the critical section.
The writer's critical section covers both the reader-count state of the readers-writer lock and the section of code that requires write protection.
Since the mutex is held when the condition is true, and the lock provides exclusive access, there is only one writer in the critical section which will be writing a new value into the array.
Upon completing the critical section, a new signal is raised on the condition variable to wake up any other waiters and the governing mutex is then released. Since the mutex provides exclusive access, upon release, only one thread will be allowed to acquire the mutex, whether it is a pending writer or reader.
3. Multiple readers may enter the read critical section.
The critical section for a reader is treated differently than the writer. For the reader, we assume the state of the array is synchronized with the most recent write when the lock is acquired. But, reads will not alter the state of the array. So, the acquired mutex critical section is the reader count. Meanwhile, the readers-writer lock (implemented using the mutex and condition variable) critical section includes the part of the code that requires read access to the array.
So, on entering a critical section, the acquired mutex is used to increment the reader count. Once the reader count is non-zero, the readers-writer lock is now being held by a reader and will cause writers to wait on the condition variable. Once the reader count is incremented, the acquired mutex can be released.
Since the mutex was released, a different reader thread can now acquire the mutex and also increment the reader count, and then release the mutex. This allows multiple readers to enter the read critical section. The readers-writer lock remains held for reading since the reader count is positive. The mutex is released to allow other readers to enter.
Upon completing the read critical section, the mutex is acquired to decrement the reader count. If the count is zero, a signal on the condition variable is raised to wake up any pending writers. Then, the mutex is released.
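Putting those three points together, here is a minimal, self-contained sketch of the same mutex-plus-condition-variable scheme (names mirror the cited code, and it deliberately keeps the fairness caveat mentioned above):

#include <condition_variable>
#include <mutex>
#include <vector>

std::mutex rw_mtx;
std::condition_variable rw_write_q;
int read_count = 0;
std::vector<int> data_base(100);

int read_cell(int cell_number) {
    {
        std::lock_guard<std::mutex> lk(rw_mtx);
        ++read_count;                        // announce intention to read
    }
    int cell_value = data_base[cell_number]; // the read happens outside the mutex
    {
        std::lock_guard<std::mutex> lk(rw_mtx);
        if (--read_count == 0)               // last reader out...
            rw_write_q.notify_one();         // ...wakes one pending writer
    }
    return cell_value;
}

void write_cell(int cell_number, int cell_value) {
    std::unique_lock<std::mutex> lk(rw_mtx);
    rw_write_q.wait(lk, []{ return read_count == 0; }); // wait out the readers
    data_base[cell_number] = cell_value;     // mutex is held: exclusive write
    rw_write_q.notify_one();                 // hand off to another waiting writer
}                                            // unique_lock releases the mutex here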
If you looked towards the end of the answer you cited, you would have noticed a mention that C++17 has introduced a much nicer way to implement readers-writer locks. That would be with a shared_mutex.
class DataBase {
    // ...
    mutable std::shared_mutex rwlock_;
    std::vector<ElementType> data_base_;
    // ...
    ElementType reader (int n) const {
        std::shared_lock lock(rwlock_);
        return data_base_[n];
    }
    // ...
    void writer (int n, ElementType v) {
        std::unique_lock lock(rwlock_);
        data_base_[n] = v;
    }
    // ...
};
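For completeness, usage might look like this (assuming ElementType is, say, int, and that the elided parts construct data_base_ with enough elements):

DataBase db;
std::thread w([&db]{ db.writer(3, 42); });    // exclusive: unique_lock
std::thread r1([&db]{ (void)db.reader(3); }); // shared: may overlap with r2
std::thread r2([&db]{ (void)db.reader(3); });
w.join(); r1.join(); r2.join();

Note that rwlock_ is declared mutable precisely so that reader(), a const member function, can still lock it.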
Let's say we have a process with two threads. One thread does some work on some shared resource and periodically takes out a scoped lock on a boost::interprocess::mutex. The other thread causes a fork/exec, at some random time.
Thread 1
void takeLockDoWork() {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_only, "xxx");
    interprocess_sharable_mutex *mutex =
        segment.find<interprocess_sharable_mutex>("mymutex").first;
    scoped_lock<interprocess_sharable_mutex> lock(*mutex);
    // access or do work on a shared resource here
    // lock automatically unlocks when scope is left
}
Let's say Thread 2 forks right after the scoped_lock is taken out. Presumably the child process has the same lock state as the parent.
What happens? Will there now be a race condition with the parent process?
As long as you don't fork from a thread that is holding an interprocess_sharable_mutex or access memory that was being protected by a mutex, you're okay.
The mutex exists in shared memory, meaning that even though you forked, the mutex state wasn't duplicated; it exists in one place, accessible by both processes.
Because forking carries only the forking thread into the child, the thread that owns the mutex exists only in the parent, so there is no conflicting owner in the child and no problem. Even if you tried to acquire the mutex after forking, you would still be okay; the child would simply block until the parent releases it.
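To illustrate that last point, here is a sketch (POSIX-only, reusing the segment and mutex names from the question): the parent takes the lock, forks, and the child simply blocks until the parent releases.

#include <sys/wait.h>
#include <unistd.h>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

int main() {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_or_create, "xxx", 65536);
    auto *mutex =
        segment.find_or_construct<interprocess_sharable_mutex>("mymutex")();

    scoped_lock<interprocess_sharable_mutex> lock(*mutex); // parent holds it

    if (fork() == 0) {
        // Child: the lock state lives in shared memory, so this blocks...
        scoped_lock<interprocess_sharable_mutex> child_lock(*mutex);
        return 0;       // ...and proceeds only after the parent releases
    }

    sleep(1);
    lock.unlock();      // child can now acquire the mutex
    wait(nullptr);
    return 0;
}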
I've been trying to learn how to multithread and came up with the following understanding. I was wondering if I'm correct or far off and, if I'm incorrect in any way, if someone could give me advice.
To create a thread, first you need to utilize a library such as <thread> or any alternative (I'm using boost's multithreading library to get cross-platform capabilities). Afterwards, you can create a thread by declaring it as such (for std::thread)
std::thread thread (foo);
Now, you can use thread.join() or thread.detach(). The former waits until the thread finishes before continuing, while the latter lets the thread run alongside whatever else you plan to do.
If you want to protect something, say a vector std::vector<double> data, from threads accessing simultaneously, you would use a mutex.
Mutexes would be declared as global variables so that the thread functions can access them (OR, if you're making a class that will be multithreaded, the mutex can be declared as a private/public member of the class). Afterwards, you can lock and unlock a thread using a mutex.
Let's take a quick look at this example pseudo code:
std::mutex mtx;
std::vector<double> data;
void threadFunction() {
    // Do stuff
    // ...
    // Want to access a global variable
    mtx.lock();
    data.push_back(3.23);
    mtx.unlock();
    // Continue
}
In this code, when the mutex locks down on the thread, it only locks the lines of code between it and mtx.unlock(). Thus, other threads will still continue on their merry way until they try accessing data (note, we would likely use the mutex in the other threads as well). Then they would stop, wait to use data, lock it, push_back, unlock it, and continue. Check here for a good description of mutexes.
That's about it on my understanding of multithreading. So, am I horribly wrong or accurate?
Your comments refer to "locking the whole thread". You don't lock a thread; you lock a mutex.
When you lock a mutex, the current thread takes ownership of the mutex. Conceptually, you can think of it as the thread placing its mark on the mutex (storing its thread id in the mutex data structure). If any other thread comes along and attempts to acquire the same mutex instance, it sees that the mutex is already "claimed" by somebody else and waits until the first thread has released the mutex. When the owning thread later releases the mutex, one of the threads that are waiting for it can wake up, acquire the mutex for itself, and carry on.
In your code example, there is a potential risk that the mutex might not be released once it is acquired. If the call to data.push_back(xxx) throws an exception (out of memory?), then execution will never reach mtx.unlock() and the mutex will remain locked forever. All subsequent threads that attempt to acquire that mutex will drop into a permanent wait state. They'll never wake up because the thread that owns the mutex is toast.
For this reason, acquiring and releasing critical resources like mutexes should be done in a manner that will guarantee they will be released regardless of how execution leaves the current scope. In other languages, this would mean putting the mtx.unlock() in the finally section of a try..finally block:
mtx.lock();
try
{
    // do stuff
}
finally
{
    mtx.unlock();
}
C++ doesn't have try..finally statements. Instead, C++ leverages its language rules for automatic disposal of locally defined variables. You construct an object in a local variable, the object acquires a mutex lock in its constructor. When execution leaves the current function scope, C++ will make sure that the object is disposed, and the object releases the lock when it is disposed. That's the RAII others have mentioned. RAII just makes use of the existing implicit try..finally block that wraps every C++ function body.
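Applied to the earlier example, the RAII version is simply this (std::lock_guard is the standard object whose constructor locks and whose destructor unlocks):

#include <mutex>
#include <vector>

std::mutex mtx;
std::vector<double> data;

void threadFunction() {
    // Do stuff
    // ...
    {
        std::lock_guard<std::mutex> lock(mtx); // constructor calls mtx.lock()
        data.push_back(3.23);                  // even if this throws...
    }                                          // ...the destructor always unlocks
    // Continue
}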
What is the difference between the above two?
This question came to my mind because I found that
Monitors and locks provide mutual exclusion
Semaphores and condition variables provide synchronization
Is this true?
Also while searching I found this article
Any clarifications please.
Mutual exclusion means that only a single thread should be able to access the shared resource at any given point in time. This avoids race conditions between threads acquiring the resource. Monitors and locks provide the functionality to do so.
Synchronization means that you synchronize/order the access of multiple threads to the shared resource.
Consider the example:
Say you have two threads, Thread 1 and Thread 2. They execute in parallel, but before Thread 1 can execute, say, a statement A in its sequence, Thread 2 must have executed a statement B in its sequence. What you need here is synchronization, and a semaphore provides that: you put a semaphore wait before statement A in Thread 1, and you post to the semaphore after statement B in Thread 2.
This ensures the synchronization you need.
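In C++20, that ordering is a few lines with std::binary_semaphore (a sketch; statement A and statement B are placeholders):

#include <semaphore>
#include <thread>

std::binary_semaphore sem(0);   // starts at 0: acquire() blocks until release()

void thread1() {
    sem.acquire();  // waits until Thread 2 has executed statement B
    // statement A
}

void thread2() {
    // statement B
    sem.release();  // lets Thread 1 proceed to statement A
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join();
    t2.join();
}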
The best way to understand the difference is with the help of an example. Below is a program that solves the classical producer-consumer problem via semaphores. To provide mutual exclusion, we generally use a binary semaphore or mutex, and to provide synchronization, we use a counting semaphore.
BufferSize = 3;

semaphore mutex = 1;          // used for mutual exclusion
semaphore empty = BufferSize; // used for synchronization
semaphore full = 0;           // used for synchronization

Producer()
{
    int widget;

    while (TRUE) {            // loop forever
        make_new(widget);     // create a new widget to put in the buffer
        down(&empty);         // decrement the empty semaphore
        down(&mutex);         // enter critical section
        put_item(widget);     // put widget in buffer
        up(&mutex);           // leave critical section
        up(&full);            // increment the full semaphore
    }
}

Consumer()
{
    int widget;

    while (TRUE) {            // loop forever
        down(&full);          // decrement the full semaphore
        down(&mutex);         // enter critical section
        remove_item(widget);  // take a widget from the buffer
        up(&mutex);           // leave critical section
        consume_item(widget); // consume the item
    }
}
In the above code, the mutex variable provides mutual exclusion (it allows only one thread at a time into the critical section), whereas the full and empty variables are used for synchronization (to arbitrate access to the shared resource among the various threads).
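For reference, the same scheme translates almost line for line into C++20, with down/up becoming acquire/release (a sketch; the bounded loops are only there so the example terminates):

#include <mutex>
#include <queue>
#include <semaphore>
#include <thread>

constexpr int BufferSize = 3;

std::mutex mtx;                                              // mutual exclusion
std::counting_semaphore<BufferSize> empty_slots(BufferSize); // synchronization
std::counting_semaphore<BufferSize> full_slots(0);           // synchronization
std::queue<int> buffer;

void producer() {
    for (int widget = 0; widget < 10; ++widget) {
        empty_slots.acquire();                   // down(&empty)
        {
            std::lock_guard<std::mutex> lk(mtx); // down(&mutex) ... up(&mutex)
            buffer.push(widget);                 // put_item
        }
        full_slots.release();                    // up(&full)
    }
}

void consumer() {
    for (int i = 0; i < 10; ++i) {
        full_slots.acquire();                    // down(&full)
        int widget;
        {
            std::lock_guard<std::mutex> lk(mtx);
            widget = buffer.front();             // remove_item
            buffer.pop();
        }
        (void)widget;                            // consume_item
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}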
Let's say I have a class with the function
class foo
{
    // ...
    void bar() {
        OutputDebugString(........);
        // more code
    }
};
Is it possible to print, using OutputDebugString, the ID of the current thread that is executing the function (or whether it's the main thread)?
I have a large application I'm debugging and have found a deadlock situation, and I would like to check which threads are involved in the deadlock, since it could possibly be the same thread locking its own critical section.
Have a look at the GetCurrentThread function.
Use GetCurrentThreadId().
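For example, inside bar() (a sketch; OutputDebugStringA is the narrow-character variant, used here so we can format into a plain char buffer):

#include <windows.h>
#include <cstdio>

void bar() {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "bar() running on thread %lu\n",
                  static_cast<unsigned long>(GetCurrentThreadId()));
    OutputDebugStringA(buf);  // shows up in the debugger's output window
}

If you log the main thread's id once at startup, you can then tell whether bar() is being called from the main thread or from a worker.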
Note that a thread cannot deadlock itself on a critical section. Once a thread has obtained the lock on the critical section, it can freely re-enter that same lock as much as it wants (same thing with a mutex). Just make sure to unlock the critical section once for each successful lock (re)entry so that OTHER threads do not become deadlocked.