Get ID of executing Process/Thread in C++ Builder - c++

Let's say I have a class with the function
class foo
{
...
void bar() {
OutputDebugString(........);
// more code
}
};
Is it possible to print the ID of the current thread (or whether it's the main application thread) that is executing the function, using OutputDebugString?
I have a large application I'm debugging and have found a deadlock situation; I would like to check which threads are involved in the deadlock, since it could possibly be the same thread locking its own critical section.

Have a look at the GetCurrentThread function (note that it returns a pseudo-handle, not an ID; for a numeric ID, see GetCurrentThreadId).

Use GetCurrentThreadId().
Note that a thread cannot deadlock itself on a critical section. Once a thread has obtained the lock on the critical section, it can freely re-enter that same lock as often as it wants (the same goes for a mutex). Just make sure to unlock the critical section once for each successful lock (re-)entry so that OTHER threads do not become deadlocked.
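For example, a minimal sketch (g_mainThreadId is an assumed global captured at program startup; GetCurrentThreadId and OutputDebugStringA are documented Win32 calls):

```cpp
#include <windows.h>
#include <cstdio>

// Assumed: captured once at startup, before any worker threads are created.
static const DWORD g_mainThreadId = GetCurrentThreadId();

void bar() {
    char buf[64];
    DWORD tid = GetCurrentThreadId();  // ID of the thread executing bar()
    std::snprintf(buf, sizeof(buf), "bar() on thread %lu%s\n",
                  tid, tid == g_mainThreadId ? " (main thread)" : "");
    OutputDebugStringA(buf);  // visible in the debugger's output window
}
```

The string shows up in the IDE's event log or in a tool such as DebugView, so comparing the printed IDs of the threads stuck in EnterCriticalSection tells you who is involved in the deadlock.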

Related

How to pass Critical Section to another thread?

I have 3 threads, resumed at the same time, calling the same function with different arguments. How can I force a thread to leave the critical section and pass it to another thread?
When I run the code below, the while loop iterates many times before another thread enters the critical section (and then that thread also loops many times).
DWORD WINAPI ClientThread(LPVOID lpParam)
{
// thread logic
while(true)
{
EnterCriticalSection(&critical);
// thread logic
LeaveCriticalSection(&critical);
Sleep(0);
}
// thread logic
return 0;
}
In other words, how can I prevent a thread from instantly reentering a section again?
You can't directly ask a thread to leave the critical section. The thread will leave it when it has finished executing the protected code.
So the only way would be to prevent it from entering the critical section, or to "ask" it to finish early, e.g. by continuously checking an atomic_flag inside the section and stopping the thread's operation once the flag has been set.
If you want to prevent a thread from re-entering a section directly after it has left it, you can yield; this will reschedule the execution of threads.
If you want an exact ordering of threads (A->B->C->D->A->B ...) you need to write a custom scheduler, or a custom "fair_mutex" that detects other waiting threads.
Edit:
Such a function would be BOOL SwitchToThread() (doc).
As mentioned in another answer, you need a fair mutex, and a ticket lock is one way to implement it.
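A minimal ticket-lock sketch (illustrative, not production-ready: it spins and has no recursion support). Each thread takes a ticket and is served in strict FIFO order, which gives the fairness the critical section lacks:

```cpp
#include <atomic>
#include <cstddef>
#include <thread>

class ticket_lock {
public:
    void lock() {
        // Take the next ticket number.
        const std::size_t my = next.fetch_add(1, std::memory_order_relaxed);
        // Spin (yielding) until our ticket is being served.
        while (serving.load(std::memory_order_acquire) != my)
            std::this_thread::yield();
    }
    void unlock() {
        // Hand the lock to the holder of the next ticket.
        serving.fetch_add(1, std::memory_order_release);
    }
private:
    std::atomic<std::size_t> next{0};
    std::atomic<std::size_t> serving{0};
};
```

Because waiters are ordered by ticket number, a thread that releases the lock and immediately tries to re-acquire it goes to the back of the queue, which is exactly the anti-starvation behavior asked about.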
There's another way, based on binary semaphore, and it is actually close to what Critical Section used to be. Like this:
class old_cs
{
public:
old_cs()
{
event = CreateEvent(NULL, /* bManualReset = */ FALSE, /* bInitialState =*/ TRUE, NULL);
if (event == NULL) throw std::runtime_error("out of resources");
}
~old_cs()
{
CloseHandle(event);
}
void lock()
{
if (count.fetch_add(1, std::memory_order_acquire) > 0)
WaitForSingleObject(event, INFINITE);
}
void unlock()
{
if (count.fetch_sub(1, std::memory_order_release) > 1)
SetEvent(event);
}
old_cs(const old_cs&) = delete;
old_cs(old_cs&&) = delete;
old_cs& operator=(const old_cs&) = delete;
old_cs& operator=(old_cs&&) = delete;
private:
HANDLE event;
std::atomic<std::size_t> count = 0;
};
You may find the following in Critical Section Objects documentation:
Starting with Windows Server 2003 with Service Pack 1 (SP1), threads
waiting on a critical section do not acquire the critical section on a
first-come, first-serve basis. This change increases performance
significantly for most code. However, some applications depend on
first-in, first-out (FIFO) ordering and may perform poorly or not at
all on current versions of Windows (for example, applications that
have been using critical sections as a rate-limiter). To ensure that
your code continues to work correctly, you may need to add an
additional level of synchronization. For example, suppose you have a
producer thread and a consumer thread that are using a critical
section object to synchronize their work. Create two event objects,
one for each thread to use to signal that it is ready for the other
thread to proceed. The consumer thread will wait for the producer to
signal its event before entering the critical section, and the
producer thread will wait for the consumer thread to signal its event
before entering the critical section. After each thread leaves the
critical section, it signals its event to release the other thread.
So the algorithm in this post is a simplified version of what Critical Section used to be in Windows XP and earlier.
The above algorithm is not a complete critical section: it lacks recursion support, spinning, and handling of low-resource situations.
It also relies on the fairness of Windows events.

Critical section containing another critical section?

Is it permissible to nest critical section like this below?:
void somefunction()
{
EnterCriticalSection(&g_List);
...
EnterCriticalSection(&g_Variable);
...
LeaveCriticalSection(&g_Variable);
...
LeaveCriticalSection(&g_List);
}
Yes, this is acceptable. It is the norm for any slightly complicated program to have many layers of nesting in places.
The one thing you need to be aware of is that you must always take locks in the same order.
If you don't do this, you risk deadlocks in scenarios like
Thread A runs code like:
EnterCriticalSection(&g_List);
EnterCriticalSection(&g_Variable);
...
LeaveCriticalSection(&g_Variable);
LeaveCriticalSection(&g_List);
but thread B runs
EnterCriticalSection(&g_Variable);
EnterCriticalSection(&g_List);
...
LeaveCriticalSection(&g_List);
LeaveCriticalSection(&g_Variable);
This risks a deadlock: thread A locks g_List and then blocks waiting on g_Variable, while thread B has locked g_Variable and is blocked waiting on g_List.

Multithreading Clarification

I've been trying to learn how to multithread and came up with the following understanding. I was wondering if I'm correct or far off and, if I'm incorrect in any way, if someone could give me advice.
To create a thread, first you need to utilize a library such as <thread> or any alternative (I'm using boost's multithreading library to get cross-platform capabilities). Afterwards, you can create a thread by declaring it as such (for std::thread)
std::thread thread (foo);
Now, you can use thread.join() or thread.detach(). The former will wait until the thread finishes and then continue, while the latter will run the thread alongside whatever you plan to do next.
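A minimal sketch of the join case (the events string is just to make the ordering visible):

```cpp
#include <string>
#include <thread>

std::string events;  // records the order in which things happen

void foo() { events += "worker running\n"; }

void run_joined() {
    std::thread t(foo);  // foo starts executing on a new thread right away
    t.join();            // block here until foo has finished
    events += "main continues\n";
    // t.detach() would instead let foo run on its own; a detached thread
    // can no longer be joined, so the program must outlive it.
}
```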
If you want to protect something, say a vector std::vector<double> data, from threads accessing simultaneously, you would use a mutex.
Mutexes would be declared as global variables so that the thread functions can access them (or, if you're making a class that will be multithreaded, the mutex can be declared as a private/public member of the class). Afterwards, you can lock and unlock a thread using a mutex.
Let's take a quick look at this example pseudo code:
std::mutex mtx;
std::vector<double> data;
void threadFunction(){
// Do stuff
// ...
// Want to access a global variable
mtx.lock();
data.push_back(3.23);
mtx.unlock();
// Continue
}
In this code, when the mutex locks down on the thread, it only locks the lines of code between it and mtx.unlock(). Thus, other threads will still continue on their merry way until they try accessing data (note, we would likely use a mutex in the other threads as well). Then they would stop, wait to use data, lock it, push_back, unlock it, and continue. Check here for a good description of mutexes.
That's about it on my understanding of multithreading. So, am I horribly wrong or accurate?
Your comments refer to "locking the whole thread". You can't lock part of a thread.
When you lock a mutex, the current thread takes ownership of the mutex. Conceptually, you can think of it as the thread places its mark on the mutex (stores its threadid in the mutex data structure). If any other thread comes along and attempts to acquire the same mutex instance, it sees that the mutex is already "claimed" by somebody else and it waits until the first thread has released the mutex. When the owning thread later releases the mutex, one of the threads that is waiting for the mutex can wake up, acquire the mutex for themselves, and carry on.
In your code example, there is a potential risk that the mutex might not be released once it is acquired. If the call to data.push_back(xxx) throws an exception (out of memory?), then execution will never reach mtx.unlock() and the mutex will remain locked forever. All subsequent threads that attempt to acquire that mutex will drop into a permanent wait state. They'll never wake up because the thread that owns the mutex is toast.
For this reason, acquiring and releasing critical resources like mutexes should be done in a manner that will guarantee they will be released regardless of how execution leaves the current scope. In other languages, this would mean putting the mtx.unlock() in the finally section of a try..finally block:
mtx.lock();
try
{
// do stuff
}
finally
{
mtx.unlock();
}
C++ doesn't have try..finally statements. Instead, C++ leverages its language rules for automatic disposal of locally defined variables. You construct an object in a local variable, the object acquires a mutex lock in its constructor. When execution leaves the current function scope, C++ will make sure that the object is disposed, and the object releases the lock when it is disposed. That's the RAII others have mentioned. RAII just makes use of the existing implicit try..finally block that wraps every C++ function body.
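Applied to the earlier pseudo code, the RAII version would use std::lock_guard, the standard wrapper that does exactly this:

```cpp
#include <mutex>
#include <vector>

std::mutex mtx;
std::vector<double> data;

void threadFunction() {
    // lock_guard locks mtx in its constructor and unlocks it in its
    // destructor, even if push_back throws an exception.
    std::lock_guard<std::mutex> guard(mtx);
    data.push_back(3.23);
}  // guard is destroyed here; the mutex is released
```

There is no explicit unlock call to forget, and no code path (early return, exception) that can leave the mutex locked.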

EnterCriticalSection doesn't lock

This is a small part of my code:
CRITICAL_SECTION _cs;
InitializeCriticalSection(&_cs);
void lock() {
if (_initialized){
EnterCriticalSection(&_cs);
EnterCriticalSection(&_cs);
_locked = true;
}
}
(I wrote "EnterCriticalSection" twice, because I noticed that this line doesn't work.)
As I understand it, this must cause a deadlock, but it doesn't. Why?
No, the same thread can enter it as often as it wants.
CRITICAL_SECTION is used to restrict access between multiple different threads.
EnterCriticalSection allows for recursive calls from the same thread. From the documentation:
After a thread has ownership of a critical section, it can make
additional calls to EnterCriticalSection or TryEnterCriticalSection
without blocking its execution. This prevents a thread from
deadlocking itself while waiting for a critical section that it
already owns. The thread enters the critical section each time
EnterCriticalSection and TryEnterCriticalSection succeed. A thread
must call LeaveCriticalSection once for each time that it entered the
critical section.
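As an aside, the standard library's std::recursive_mutex has the same re-entrancy behavior, so the rule is easy to demonstrate portably (a sketch; the depth counter is just for illustration, and on Windows the same pattern holds with EnterCriticalSection/LeaveCriticalSection):

```cpp
#include <mutex>

std::recursive_mutex m;
int depth = 0;  // illustrative: counts how deep we are inside the lock

void inner() {
    // Same thread re-acquires the lock it already owns: no blocking.
    std::lock_guard<std::recursive_mutex> lk(m);
    ++depth;
}

void outer() {
    std::lock_guard<std::recursive_mutex> lk(m);
    ++depth;
    inner();  // would deadlock here with a plain (non-recursive) std::mutex
}
```

Each lock_guard releases the mutex once on destruction, mirroring the "one LeaveCriticalSection per EnterCriticalSection" requirement.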

C++ context switch and mutex problem

Ok.. here is some background on the issue. I have some 'critical' code that i'm trying to protect with a mutex. It goes something like this
Mutex.Lock()
// critical code
// some file IO
Mutex.Unlock().
Now the issue is that my program seems to be 'stuck' because of this. Let me explain with an example.
Thread_1 comes in, reaches Mutex.Lock(), and starts executing the critical code. In the critical code it needs to do some file IO. At this point, I believe a context switch happens and Thread_2 comes in and blocks on Mutex.Lock() (since Thread_1 holds the lock). All seems fine, but in my case the program hangs here. The only thing I can think of is that somehow Thread_2 keeps blocking forever and execution never switches back to Thread_1??
More info: using pthread_mutex_init and pthread_mutex_lock on linux.
As others have mentioned, you probably have a deadlock.
Sidenote:
You'll want to make sure that there aren't any uncaught exceptions thrown in the critical block of code. Otherwise the lock will never be released. You can use an RAII lock to overcome this issue:
class SingleLock {
public:
SingleLock(Mutex &m) : m(m) { m.Lock(); }
~SingleLock() { m.Unlock(); }
private:
Mutex &m; // a reference, so we lock/unlock the caller's mutex rather than a copy
};
...
{
SingleLock lock(mutex);
// critical code // some file IO
}
...
This sounds like a deadlock where Thread_1 is in the mutex and waiting on Thread_2 to release something, while Thread_2 is waiting to enter the mutex and so can't release whatever it is that Thread_1 needs.
edit: swapped thread names to more closely match the scenario in the question, added 'in the mutex'
The best solution for something like this is to use a debugger (gdb?). It's even better to use an IDE with a debugger (Eclipse?) to make debugging easier and more visual.
That way you will see the location at which every thread is waiting.
What I expect is that Thread_1 locks the mutex to enter the critical section and then gets stuck in the IO (maybe a bad read or an infinite loop), while Thread_2 is simply waiting for the mutex to be unlocked.
It doesn't seem that this is a deadlock, because a deadlock can't happen with a single mutex!
The context switch is irrelevant so long as there's just one lock. The other thread can't do anything to affect the first one; it will just wait on the lock until it gets it. So the problem is with the first thread somehow. Debuggers are pretty much worthless for multithreading, but deadlocks are usually easy to resolve; as someone pointed out, the first thread is probably stuck in an infinite loop somehow.
Does the file I/O need to be part of the critical section? If Thread_1 is doing a blocking read, and Thread_2 is what is supposed to be writing to that file (or pipe or similar), then Thread_1 will never return to release the mutex. You should evaluate your critical sections to determine what actually needs to be protected by the mutex. It's considered good practice to keep your critical sections as small as possible.
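A sketch of that practice (the names pending and drain_pending are illustrative, not from the question): grab the shared data while holding the lock, then do the slow file IO with no lock held:

```cpp
#include <mutex>
#include <string>
#include <vector>

std::mutex mtx;
std::vector<std::string> pending;  // shared state written by other threads

// Keep the critical section small: only the swap happens under the lock.
std::vector<std::string> drain_pending() {
    std::lock_guard<std::mutex> lk(mtx);
    std::vector<std::string> local;
    local.swap(pending);  // O(1) pointer swap while holding the lock
    return local;
}

void flush_to_file() {
    auto lines = drain_pending();
    // ... write `lines` to the file here, with no lock held, so a slow
    // or blocking write can never stall the threads producing data ...
}
```

This keeps the mutex hold time to microseconds regardless of how slow the IO is, which removes the hang described in the question.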