This is a small part of my code:
CRITICAL_SECTION _cs;
InitializeCriticalSection(&_cs);
void lock() {
    if (_initialized) {
        EnterCriticalSection(&_cs);
        EnterCriticalSection(&_cs);
        _locked = true;
    }
}
(I wrote "EnterCriticalSection" twice because I noticed that a single call to this line doesn't work.)
As I understand it, this must cause a deadlock, but it doesn't. Why?
No, the same thread can enter it as often as it wants.
CRITICAL_SECTION is used to restrict access between multiple different threads.
EnterCriticalSection allows for recursive calls from the same thread. From the documentation:
After a thread has ownership of a critical section, it can make
additional calls to EnterCriticalSection or TryEnterCriticalSection
without blocking its execution. This prevents a thread from
deadlocking itself while waiting for a critical section that it
already owns. The thread enters the critical section each time
EnterCriticalSection and TryEnterCriticalSection succeed. A thread
must call LeaveCriticalSection once for each time that it entered the
critical section.
I have 3 threads, resumed at the same time, calling the same function with different arguments. How can I force a thread to leave the critical section and pass it to another thread?
When I run the code below, the while loop is called many times until another thread enters the Critical Section (and it also loops many times).
DWORD WINAPI ClientThread(LPVOID lpParam)
{
    // thread logic
    while (true)
    {
        EnterCriticalSection(&critical);
        // thread logic
        LeaveCriticalSection(&critical);
        Sleep(0);
    }
    // thread logic
    return 0;
}
In other words, how can I prevent a thread from instantly reentering a section again?
You can't ask a thread directly to leave the critical section. The thread will leave it when it has finished executing.
So the only way would be to prevent it from entering the critical section, or to "ask" it to finish early, e.g. by continuously checking an atomic_flag inside the section and stopping the thread's operation once the flag has been set.
If you want to prevent a thread from re-entering a section directly after it has left, you can yield it; this reschedules the execution of threads.
If you want an exact ordering of threads (A->B->C->D->A->B ...), you need to write a custom scheduler or a custom "fair mutex" that detects other waiting threads.
Edit:
Such a function would be BOOL SwitchToThread(); see its documentation.
As mentioned in another answer, you need a fair mutex, and a ticket lock is one way to implement it.
There's another way, based on a binary semaphore, and it is actually close to what CRITICAL_SECTION used to be. Like this:
#include <windows.h>
#include <atomic>
#include <stdexcept>

class old_cs
{
public:
    old_cs()
    {
        // Auto-reset event, initially non-signaled: acts as a binary semaphore.
        // (An initially signaled event would let the first waiter slip past a
        // thread that already owns the lock.)
        event = CreateEvent(NULL, /* bManualReset = */ FALSE, /* bInitialState = */ FALSE, NULL);
        if (event == NULL) throw std::runtime_error("out of resources");
    }
    ~old_cs()
    {
        CloseHandle(event);
    }
    void lock()
    {
        // First locker (count was 0) enters immediately; later lockers wait.
        if (count.fetch_add(1, std::memory_order_acquire) > 0)
            WaitForSingleObject(event, INFINITE);
    }
    void unlock()
    {
        // If anyone is still waiting (count was > 1), wake exactly one.
        if (count.fetch_sub(1, std::memory_order_release) > 1)
            SetEvent(event);
    }
    old_cs(const old_cs&) = delete;
    old_cs(old_cs&&) = delete;
    old_cs& operator=(const old_cs&) = delete;
    old_cs& operator=(old_cs&&) = delete;
private:
    HANDLE event;
    std::atomic<std::size_t> count{0};
};
You may find the following in Critical Section Objects documentation:
Starting with Windows Server 2003 with Service Pack 1 (SP1), threads
waiting on a critical section do not acquire the critical section on a
first-come, first-serve basis. This change increases performance
significantly for most code. However, some applications depend on
first-in, first-out (FIFO) ordering and may perform poorly or not at
all on current versions of Windows (for example, applications that
have been using critical sections as a rate-limiter). To ensure that
your code continues to work correctly, you may need to add an
additional level of synchronization. For example, suppose you have a
producer thread and a consumer thread that are using a critical
section object to synchronize their work. Create two event objects,
one for each thread to use to signal that it is ready for the other
thread to proceed. The consumer thread will wait for the producer to
signal its event before entering the critical section, and the
producer thread will wait for the consumer thread to signal its event
before entering the critical section. After each thread leaves the
critical section, it signals its event to release the other thread.
So the algorithm in this post is a simplified version of what Critical Section used to be in Windows XP and earlier.
The above algorithm is not a complete critical section: it lacks recursion support, spinning, and handling of low-resource situations.
It also relies on the fairness of Windows events.
C++ Concurrency in Action implements an interruptible thread in Chapter 9.2, "Interrupting threads". Listing 9.10 is below:
void interruptible_wait(std::condition_variable& cv,
                        std::unique_lock<std::mutex>& lk)
{
    interruption_point();
    this_thread_interrupt_flag.set_condition_variable(cv);
    cv.wait(lk);
    this_thread_interrupt_flag.clear_condition_variable();
    interruption_point();
}
According to the book, this function introduces the problem below:
If the thread is interrupted after the initial call to interruption_point(), but before the call to wait(), then it doesn’t matter whether the condition variable has been associated with the interrupt flag, because the thread isn’t waiting and so can’t be woken by a notify on the condition variable. You need to ensure that the thread can’t be notified between the last check for interruption and the call to wait().
The first question is: why do we need to ensure that? This function seems to run correctly even if the thread is interrupted after the initial call to interruption_point() and before the call to wait(). Could anyone tell me how this function can fail? Is it because cv.wait(lk) will never be notified in this situation?
The second question is how Listing 9.11 solves the problem the book mentions just by replacing cv.wait() with cv.wait_for():
void interruptible_wait(std::condition_variable& cv,
                        std::unique_lock<std::mutex>& lk)
{
    interruption_point();
    this_thread_interrupt_flag.set_condition_variable(cv);
    interrupt_flag::clear_cv_on_destruct guard;
    interruption_point();
    cv.wait_for(lk, std::chrono::milliseconds(1));
    interruption_point();
}
If the other thread calls notify() before this thread gets to wait(), this thread won't receive that notification, and will wait forever for another one.
wait_for doesn't wait forever.
I'm new to the boost library, and it's such an amazing library! Also, I am new to mutexes, so forgive me if I am making a newbie mistake.
Anyway, I have two functions called FunctionOne and FunctionTwo. FunctionOne and FunctionTwo are called asynchronously by a different thread. So here's what happens: In FunctionOne, I lock a global mutex at the beginning of the function and unlock the global mutex at the end of the function. Same thing for FunctionTwo.
Now here's the problem: at times, FunctionOne and FunctionTwo are called less than a few milliseconds apart (not always though). So, FunctionOne begins to execute and half-way through FunctionTwo executes. When FunctionTwo locks the mutex, the entire thread that FunctionOne and FunctionTwo are on is stopped, so FunctionOne is stuck half-way through and the thread waits on itself in FunctionTwo forever. So, to summarize:
Function 1 locks mutex and begins executing code.
Function 2 is called a few ms later and locks the mutex, freezing the thread func 1 and 2 are on.
Now func 1 is stuck half-way through and the thread is frozen, so func 1 never finishes and the mutex is locked forever, waiting for func 1 to finish.
What does one do in such situations? Here is my code:
boost::mutex g_Mutex;
lua_State* L;

// Function 1 is called from some other thread
void FunctionOne()
{
    g_Mutex.lock();
    lua_performcalc(L);
    g_Mutex.unlock();
}

// Function 2 is called from some other thread a few ms later, freezing the thread
// and Function 1 never finishes
void FunctionTwo()
{
    g_Mutex.lock();
    lua_performothercalc(L);
    g_Mutex.unlock();
}
Are these functions intended to be re-entrant, such that FunctionOne will call itself or FunctionTwo while holding the mutex? Or vice versa, with FunctionTwo locking the mutex and then calling FunctionOne/FunctionTwo while the mutex is locked?
If not, then you should not be calling these two functions from the same thread. If you intend FunctionTwo to block until FunctionOne has completed then it is a mistake to have it called on the same thread. That would happen if lua_performcalc ends up calling FunctionTwo. That'd be the only way they could be called on the same thread.
If so, then you need a recursive_mutex. A regular mutex can only be locked once; locking it again from the same thread is an error. A recursive mutex can be locked multiple times by a single thread and is locked until the thread calls unlock an equal number of times.
In either case, you should avoid calling lock and unlock explicitly. If an exception is thrown the mutex won't get unlocked. It's better to use RAII-style locking, like so:
{
    boost::recursive_mutex::scoped_lock lock(mutex);
    // ...critical section code...
    // mutex is unlocked when 'lock' goes out of scope
}
Your description is incorrect: a mutex cannot be locked twice by the same thread without deadlocking, so you have a different problem.
Check for reentrance while the mutex is locked.
Check for exceptions.
To avoid problems with exceptions, you should use boost::mutex::scoped_lock (RAII).
I am currently trying to create a very simple thread pool using std::thread.
In order to maintain threads 'alive' after their given task is done, I associate a std::mutex with each one of them.
The principle is somewhat like this:
// Thread loop
while (1)
{
    m_oMutex->lock();
    m_oMutex->unlock();
    m_bAvailable = false;
    m_oTask();
    m_bAvailable = true;
}

// ThreadPool function which gives a task to a thread
void runTask(boost::function<void ()> oTask)
{
    [...]
    m_oThreads[i]->setTask(oTask);
    m_oMutexes[i]->unlock(); // same mutex as thread's m_oMutex
    m_oMutexes[i]->lock();
}
To find the i, the ThreadPool searches for a thread object with m_bAvailable set to true. It unlocks the corresponding mutex so the thread can lock it and execute its task. The thread unlocks the mutex immediately so that the ThreadPool can lock it again, halting the thread once its task is done.
But the question is, will locks be made in the order the threads ask them? In other words, if a thread does a lock on a mutex, then the ThreadPool unlocks it and locks it again, am I sure that the lock will be given to the thread first? If not, is there a way to ensure it?
No, you cannot guarantee that your thread loop will ever acquire the lock with your example as-is. Use a condition variable to signal to the thread loop that it should wake and take the lock. See std::condition_variable::wait().
More on this topic in general can be found here http://en.wikipedia.org/wiki/Condition_variable. If you were using the pthread library, the equivalent call would be pthread_cond_wait in your "Thread loop" and pthread_cond_signal in your runTask function.
Let's say I have a class with the function
class foo
{
...
void bar() {
OutputDebugString(........);
        // more code
    }
};
Is it possible to print the ID of the current thread (or whether it is the main thread) that is executing the function, using OutputDebugString?
I have a large application I'm debugging and have found a deadlock situation. I would like to check which threads are involved in the deadlock, since it could possibly be the same thread locking its own critical section.
Have a look at the GetCurrentThread function.
Use GetCurrentThreadId().
Note that a thread cannot deadlock itself on a critical section. Once a thread has obtained the lock to the critical section, it can freely re-enter that same lock as much as it wants (same with a mutex). Just make sure to unlock the critical section once for each successful lock (re-)entry so that OTHER threads do not become deadlocked.