Basically I need a replacement for condition variables and SleepConditionVariableCS, because they are only supported on Vista and up. (For C++.)
Some have suggested using a semaphore; I also found CreateEvent.
Basically, I need to have one thread waiting on WaitForSingleObject until one or more other threads tell it there is something to do.
In which context should I use a semaphore vs. a Win32 event?
Thanks
In your case I'd use an event myself. Signal the event when you want the thread to get going. Job done :)
Edit: The difference between semaphores and events comes down to the internal count. If there are multiple ReleaseSemaphore calls, then multiple WaitForSingleObject calls will be released, one per release. Events are boolean by nature: if two different places signal the event at the same time, a single wait gets released and the event goes back to unsignalled (depending on whether you use automatic or manual reset). If you need the object to be signalled from multiple places simultaneously, and the waiting thread to run once per signal, this event behaviour can swallow signals and lead to a deadlock.
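To make that concrete, here is a minimal Win32 sketch (error handling omitted) showing that an auto-reset event coalesces back-to-back signals while a semaphore counts every release:

#include <windows.h>
#include <cstdio>

int main()
{
    // Auto-reset event: boolean state, so back-to-back signals are coalesced.
    HANDLE ev = CreateEvent(NULL, FALSE /*auto-reset*/, FALSE /*non-signalled*/, NULL);
    SetEvent(ev);
    SetEvent(ev);  // second signal is absorbed; the event is simply "signalled"
    printf("event wait 1: %lu\n", WaitForSingleObject(ev, 0));  // WAIT_OBJECT_0
    printf("event wait 2: %lu\n", WaitForSingleObject(ev, 0));  // WAIT_TIMEOUT

    // Semaphore: every release bumps the count, so every release wakes one waiter.
    HANDLE sem = CreateSemaphore(NULL, 0 /*initial count*/, 100 /*max count*/, NULL);
    ReleaseSemaphore(sem, 1, NULL);
    ReleaseSemaphore(sem, 1, NULL);
    printf("sem wait 1: %lu\n", WaitForSingleObject(sem, 0));   // WAIT_OBJECT_0
    printf("sem wait 2: %lu\n", WaitForSingleObject(sem, 0));   // WAIT_OBJECT_0

    CloseHandle(ev);
    CloseHandle(sem);
    return 0;
}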
Replacing condition variables on Windows is extremely difficult and error-prone in the general case. Either:
Use someone else's implementation (e.g., Boost.Thread).
Rethink the problem you are trying to solve and see if Win32 can do it. Based on your description, an Event might suffice, but if the waiter needs to be triggered by some conditional expression that the other threads will set up, and not just a signal, you're better off going back to option 1.
Use boost::condition_variable if at all possible. I've been down this road before (see msg on microsoft.public.win32.programmer.kernel) and the Win32 Event API does not suffice; there are problems using events.
Related
In my project, for some reason, I am creating my thread in a suspended state and resuming it later, after some state change. When resuming the thread, it sometimes works and sometimes does not. What is the proper way of handling the error when the resume doesn't work? Should I retry resuming the thread, wait for some time, or is there some other appropriate handling mechanism? Please guide me on the best way of handling these scenarios. I am using the ACE thread library here.
May I suggest a message queue? A simple implementation would just be a std::vector of updates, where each value in the vector represents an update such as loading an asset (whose update record would include a success/error code and filename) or letting another thread know that a key has been pressed/released.
EDIT:
As Damon said, you also need a mutex so only one thread at a time is editing the message queue.
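As a minimal sketch of that idea (the Update fields below are made up for illustration; adapt them to whatever your threads actually exchange):

#include <mutex>
#include <string>
#include <vector>

// Hypothetical update record; the real fields depend on your engine
// (asset loads, key events, ...).
struct Update {
    int type;            // e.g. 0 = asset loaded, 1 = key pressed
    int code;            // success/error code or key code
    std::string payload; // e.g. filename
};

class MessageQueue {
public:
    void push(const Update& u) {
        std::lock_guard<std::mutex> lock(m_);
        pending_.push_back(u);
    }

    // Drains all pending updates so the consumer can process them outside the lock.
    std::vector<Update> drain() {
        std::lock_guard<std::mutex> lock(m_);
        std::vector<Update> out;
        out.swap(pending_);
        return out;
    }

private:
    std::mutex m_;
    std::vector<Update> pending_;
};

Producer threads call push(); the consumer calls drain() once per frame/iteration, so the lock is only held for the swap, never while the updates are being processed.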
So, the situation is this. I've got a C++ library that is doing some interprocess communication, with a wait() function that blocks and waits for an incoming message. The difficulty is that I need a timed wait, which will return with a status value if no message is received in a specified amount of time.
The most elegant solution is probably to rewrite the library to add a timed wait to its API, but for the sake of this question I'll assume it's not feasible. (In actuality, it looks difficult, so I want to know what the other option is.)
Here's how I'd do this with a busy wait loop, in pseudocode:
while (message == false && current_time - start_time < timeout)
{
    if (Listener.new_message()) message = true;
}
I don't want a busy wait that eats processor cycles, though. And I also don't want to just add a sleep() call in the loop to reduce processor load, as that means a slower response. I want something that does this with proper blocking and wake-ups. If the better solution involves threading (which seems likely), we're already using boost::thread, so I'd prefer to use that.
I'm posting this question because this seems like the sort of situation that would have a clear "best practices" right answer, since it's a pretty common pattern. What's the right way to do it?
Edit to add: A large part of my concern here is that this is in a spot in the program that's both performance-critical and critical to avoid race conditions or memory leaks. Thus, while "use two threads and a timer" is helpful advice, I'm still left trying to figure out how to actually implement that in a safe and correct way, and I can easily see myself making newbie mistakes in the code that I don't even know I've made. Thus, some actual example code would be really appreciated!
Also, I have a concern about the multiple-threads solution: If I use the "put the blocking call in a second thread and do a timed-wait on that thread" method, what happens to that second thread if the blocked call never returns? I know that the timed-wait in the first thread will return and I'll see that no answer has happened and go on with things, but have I then "leaked" a thread that will sit around in a blocked state forever? Is there any way to avoid that? (Is there any way to avoid that and avoid leaking the second thread's memory?) A complete solution to what I need would need to avoid having leaks if the blocking call doesn't return.
You could use sigaction(2) and alarm(2), which are both POSIX. You set a callback action for the timeout using sigaction, then you set a timer using alarm, then make your blocking call. The blocking call will be interrupted if it does not complete within your chosen timeout (in seconds; if you need finer granularity you can use setitimer(2)).
Note that signals in C are somewhat hairy, and there are fairly onerous restrictions on what you can do in your signal handler.
This page is useful and fairly concise:
http://www.gnu.org/s/libc/manual/html_node/Setting-an-Alarm.html
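Roughly, the approach looks like the sketch below, where read() on stdin stands in for the library's blocking wait. This only helps if the blocking call really is interrupted and returns with errno == EINTR, which is why SA_RESTART is deliberately not set:

#include <signal.h>
#include <unistd.h>
#include <cstdio>
#include <cerrno>

static void on_alarm(int) { /* empty: its only job is to interrupt the blocking call */ }

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;           // deliberately NOT SA_RESTART, so the call gets interrupted
    sigaction(SIGALRM, &sa, NULL);

    alarm(5);                  // deliver SIGALRM in 5 seconds

    char buf[256];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   // stand-in for the blocking wait()

    alarm(0);                  // cancel the pending alarm if the call returned in time

    if (n < 0 && errno == EINTR)
        printf("timed out\n");
    else
        printf("got %zd bytes\n", n);
    return 0;
}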
What you want is something like select(2), depending on the OS you are targeting.
It sounds like you need a 'monitor', capable of signaling the availability of a resource to threads, typically via a shared mutex. In Boost.Thread a condition_variable could do the job.
You might want to look at timed locks: your blocking method can acquire the lock before starting to wait and release it as soon as the data is available. You can then try to acquire the lock (with a timeout) in your timed wait method.
Encapsulate the blocking call in a separate thread. Have an intermediate message buffer in that thread that is guarded by a condition variable (as said before). Make your main thread timed-wait on that condition variable. Receive the intermediately stored message if the condition is met.
So basically put a new layer capable of timed-wait between the API and your application. Adapter pattern.
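A rough sketch of that adapter layer using std::thread and std::condition_variable (the boost::thread equivalents are nearly identical); library_wait_for_message() is a made-up stand-in for the library's blocking wait():

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Stand-in for the library's blocking wait(); replace with the real call.
static std::string library_wait_for_message()
{
    std::this_thread::sleep_for(std::chrono::seconds(2));
    return "hello";
}

class TimedReceiver
{
public:
    TimedReceiver() : worker_([this] { pump(); }) { worker_.detach(); }

    // Returns true and fills msg if a message arrived within the timeout.
    bool timed_wait(std::string& msg, std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock(m_);
        if (!cv_.wait_for(lock, timeout, [this] { return !buffer_.empty(); }))
            return false;                         // timed out, caller moves on
        msg = buffer_.front();
        buffer_.pop();
        return true;
    }

private:
    void pump()
    {
        for (;;)
        {
            std::string msg = library_wait_for_message();   // may block indefinitely
            {
                std::lock_guard<std::mutex> lock(m_);
                buffer_.push(msg);
            }
            cv_.notify_one();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> buffer_;
    std::thread worker_;            // detached: if the blocking call never returns,
                                    // this thread is indeed "leaked", as discussed below
};

The caller then uses timed_wait(msg, std::chrono::milliseconds(500)) and gets either a message or a timeout status.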
Regarding
what happens to that second thread if the blocked call never returns?
I believe there is nothing you can do to recover cleanly without cooperation from the called function (or library). 'Cleanly' means cleaning up all resources owned by that thread, including memory, other threads, locks, files, locks on files, sockets, GPU resources... Un-cleanly, you can indeed kill the runaway thread.
The Windows and Solaris thread APIs both allow a thread to be created in a "suspended" state. The thread only actually starts when it is later "resumed". I'm used to POSIX threads which don't have this concept, and I'm struggling to understand the motivation for it. Can anyone suggest why it would be useful to create a "suspended" thread?
Here's a simple illustrative example. WinAPI allows me to do this:
HANDLE t = CreateThread(NULL, 0, func, NULL, CREATE_SUSPENDED, NULL);
// A. Thread not running, so do... something here?
ResumeThread(t);
// B. Thread running, so do something else.
The (simpler) POSIX equivalent appears to be:
// A. Thread not running, so do... something here?
pthread_create(&t, NULL, func, NULL);
// B. Thread running, so do something else.
Does anyone have any real-world examples where they've been able to do something at point A (between CreateThread & ResumeThread) which would have been difficult on POSIX?
To preallocate resources and later start the thread almost immediately.
You have a mechanism that reuses a thread (resumes it), but you don't actually have a thread to reuse, so you must create one.
It can be useful to create a thread in a suspended state in many instances (I find): you may wish to get the handle to the thread and set some of its properties before allowing it to start using the resources you're setting up for it.
Starting it suspended is much safer than starting it and then suspending it - with the latter you have no idea how far it has got or what it is doing.
Another example might be for when you want to use a thread pool - you create the necessary threads up front, suspended, and then when a request comes in, pick one of the threads, set the thread information for the task, and then set it as schedulable.
I dare say there are ways around not having CREATE_SUSPENDED, but it certainly has its uses.
There are some example of uses in 'Windows via C/C++' (Richter/Nasarre) if you want lots of detail!
There is an implicit race condition in CreateThread: you cannot obtain the thread ID until after the thread has started running. It is entirely unpredictable when the call returns; for all you know, the thread might have already completed. If the thread causes any interaction in the rest of the process that requires the TID, then you've got a problem.
It is not an unsolvable problem if the API doesn't support starting the thread suspended: simply have the thread block on a mutex right away, and release that mutex after the creation call returns.
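For illustration, a minimal pthreads sketch of that gate; the worker's ID is stored in worker_tid, and any registration work happens before the gate is opened:

#include <pthread.h>
#include <cstdio>

static pthread_mutex_t gate = PTHREAD_MUTEX_INITIALIZER;
static pthread_t worker_tid;                 // published before the worker runs any real code

static void* worker(void*)
{
    pthread_mutex_lock(&gate);               // blocks until the creator opens the gate
    pthread_mutex_unlock(&gate);
    printf("worker starts; its tid has already been published\n");
    return nullptr;
}

int main()
{
    pthread_mutex_lock(&gate);               // hold the gate closed before creating the thread
    pthread_create(&worker_tid, nullptr, worker, nullptr);
    // ... record worker_tid, register it wherever the process needs it ...
    pthread_mutex_unlock(&gate);             // now let the worker proceed
    pthread_join(worker_tid, nullptr);
    return 0;
}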
However, there's another use for CREATE_SUSPENDED in the Windows API that is very difficult to deal with if API support is lacking. The CreateProcess() call also accepts this flag; it suspends the startup thread of the process. The mechanism is identical: the process gets loaded and you'll get a PID, but no code runs until you release the startup thread. That's very useful; I've used this feature to set up a process guard that detects process failure and creates a minidump. The CREATE_SUSPENDED flag allowed me to detect and deal with initialization failures, which are normally very hard to troubleshoot.
You might want to start a thread with some other (usually lower) priority or with a specific affinity mask. If you spawn it as usual it can run with undesired priority/affinity for some time. So you start it suspended, change the parameters you want, then resume the thread.
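A sketch of that pattern with the Win32 API (error handling omitted; the priority and affinity values are only examples):

#include <windows.h>
#include <cstdio>

static DWORD WINAPI worker(LPVOID)
{
    printf("worker running with adjusted priority/affinity\n");
    return 0;
}

int main()
{
    HANDLE t = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
    if (!t) return 1;

    // The thread exists but has not executed a single instruction yet,
    // so these settings are in place before it ever runs.
    SetThreadPriority(t, THREAD_PRIORITY_BELOW_NORMAL);
    SetThreadAffinityMask(t, 1);      // pin to CPU 0 (illustrative mask)

    ResumeThread(t);
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    return 0;
}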
The threads we use are able to exchange messages, and we have arbitrarily configurable priority-inherited message queues (described in the config file) that connect those threads. Until every queue has been constructed and connected to every thread, we cannot allow the threads to execute, since they will start sending messages off to nowhere and expect responses. Until every thread was constructed, we cannot construct the queues since they need to attach to something. So, no thread can be allowed to do work until the very last one was configured. We use boost.threads, and the first thing they do is wait on a boost::barrier.
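A stripped-down sketch of that start-up rendezvous with boost::barrier; worker_count and the lambda body are placeholders, and the real code would construct and connect the queues where the comment sits:

#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <iostream>

int main()
{
    const unsigned int worker_count = 4;
    // +1 so the main (configuration) thread is part of the rendezvous.
    boost::barrier start_barrier(worker_count + 1);

    boost::thread_group workers;
    for (unsigned int i = 0; i < worker_count; ++i) {
        workers.create_thread([&start_barrier, i] {
            start_barrier.wait();      // first thing each worker does: wait to be released
            std::cout << "worker " << i << " starts processing messages\n";
        });
    }

    // ... construct and connect all the message queues here ...

    start_barrier.wait();              // configuration done: release every worker at once
    workers.join_all();
    return 0;
}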
I stumbled on a similar problem once upon a time. The reasons for a suspended initial state are covered in the other answers.
My solution with pthreads was to use a mutex and pthread_cond_wait, but I don't know if it is a good solution or whether it can cover all the possible needs. Moreover, I don't know if the thread can really be considered suspended (at the time, I took "blocked" in the manual as a synonym, but that is probably not so).
I have several thread pools and I want my application to handle a cancel operation.
To do this I implemented a shared operation controller object which I poll at various spots in each thread pool worker function that is called.
Is this a good model, or is there a better way to do it?
I just worry about having all of these operationController.checkState() calls littered throughout the code.
Yes it's a good approach. Herb Sutter has a nice article comparing it with the alternatives (which are worse).
With any kind of asynchronous cancellation you're going to have to periodically poll some sort of flag. There's a fundamental issue of having to keep things in a consistent state. If you just kill a thread in the middle of whatever it's doing, bad things will happen sooner or later.
Depending on what you are actually doing, you may be able to just ignore the result of the operation instead of cancelling it. You let the operation continue on, but just don't wait for it to complete and never check the result.
If you actually need to stop the operation, then you're going to have to poll at appropriate points, and do whatever cleanup is necessary.
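For what it's worth, a bare-bones version of that polling approach, with a std::atomic<bool> standing in for the shared operation controller:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical stand-in for the shared operation controller: just an atomic flag.
std::atomic<bool> cancel_requested{false};

void worker(int id)
{
    for (int step = 0; step < 1000; ++step) {
        if (cancel_requested.load(std::memory_order_relaxed)) {
            // ... release any resources held by this unit of work ...
            std::cout << "worker " << id << " cancelled at step " << step << "\n";
            return;
        }
        // ... one bounded unit of work ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    cancel_requested = true;          // the cancel operation: workers notice at their next check

    for (auto& t : pool) t.join();
    return 0;
}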
It's a good way to do it.
Another possible way to do it is, if there's some other subroutine[s] which the threads call regularly anyway, to check within that subroutine and throw an exception (to be caught at the top of the thread), assuming that "cancel" may be considered exceptional and assuming that the code being executed by the thread is exception-safe.
I wouldn't do it that way, checking a shared object.
I would most likely provide each thread object with a way to cancel execution inside its own thread, be it an event, a thread-safe state variable, or whatever.
The problem with the shared operation controller is that, from my point of view, the logic is reversed. Why are you calling it a "controller" when it doesn't control anything?
For me, the operation controller should receive a cancellation order and then, in turn, select the appropriate threads and signal them to stop. That would be a correct "chain of command", if you know what I mean. The way you do it, you introduce an unnatural behaviour in the thread: it doesn't "obey" orders to stop; instead it checks each time whether its "superior" has "written the order somewhere". Somehow it just doesn't feel right.
In addition, what if in the future you want only some of the threads to stop? What if you want to include some advanced logic so that threads only stop given a certain condition? Then you'll have to rewrite the code in each and every thread to handle that condition.
So I would provide a way for each thread to handle signals sent to it, for example by using a Command pattern with a FIFO structure.
(By the way, I realize they're thread pool workers, not actual thread classes, but still: I think each worker should be signalled to stop individually, not the other way around.)
In similar situations I have used a manual-reset (non-auto-reset) event that all threads can look at. It is quite similar to polling, except that if your threads block at times, they can sleep on the "stop" event as well. (Easier on Windows.)
/L
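A sketch of that manual-reset "stop" event on Windows: workers poll it with a zero timeout, and a worker that has to block anyway can simply wait on the event with a real timeout instead:

#include <windows.h>
#include <cstdio>

static HANDLE g_stopEvent;   // manual-reset: stays signalled until explicitly reset

static DWORD WINAPI worker(LPVOID)
{
    for (;;) {
        // Poll: a zero timeout just asks "is the stop event signalled yet?"
        if (WaitForSingleObject(g_stopEvent, 0) == WAIT_OBJECT_0)
            break;
        // A thread that has to block anyway can instead sleep on the event
        // with a real timeout, e.g. WaitForSingleObject(g_stopEvent, 100).
        Sleep(10);           // ... stands in for one unit of work ...
    }
    return 0;
}

int main()
{
    g_stopEvent = CreateEvent(NULL, TRUE /*manual reset*/, FALSE /*non-signalled*/, NULL);
    HANDLE t = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    Sleep(100);
    SetEvent(g_stopEvent);   // every thread that looks at the event sees "stop"

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    CloseHandle(g_stopEvent);
    return 0;
}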
I have a general Question about inter-thread communication.
Right now I am using a bunch of C++ threads (~15).
They all busy-wait (poll) each other to get data to process, but it is hard to keep CPU usage low while giving good performance and avoiding too many context switches.
So I am looking at condition variables and signals. I think I understand the general concept of having one thread go into .Wait(), waiting for another thread to call .Signal().
Question #1) My problem might be conceptual, but if the thread waiting for a signal gets SUSPENDED while waiting, it is not able to perform any action on its own. Is there any way to let it wake up by itself to perform some actions?
Question #2) In addition, my classes are used to pass data in both directions. But if the middle class is waiting for a signal from another class, it is unable to send a signal to that class. Such as:
 _________                       _________                       _________
| Class A |---newData Signal--->| Class B |---newData Signal--->| Class C |
|         |                     |(WAITING)|<---newData Signal---|         |
 ---------                       ---------                       ---------
So if Class B is in .Wait() waiting for a .Signal() from C, it is unable to process the new signal from A.
Is it possible for both A and C to send the same "newData" signal to B to wake it up? Would it be possible to differentiate the signals from A and C?
I am coding this in C++ using the ACE framework and might switch to Boost. But I guess this is generic enough that I could apply the answer to any OS (hopefully).
Thanks
If you want your parent thread to do work while a child thread is running, you can wait for a signal with a timeout. Every time the timeout expires you do some work and wait again.
Question #1) On most implementations you can limit the maximum wait time and so say: wait for 2 seconds, then do something and wait again.
Question #2) On most implementations you can wait for more than one signal at once. You can say: wake up if signal A or B is triggered.
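For example, with a condition variable shared by B and both producers, a single timed wait covers "woken by A", "woken by C", and "timeout, do my own work". A sketch (the flag names are made up):

#include <chrono>
#include <condition_variable>
#include <mutex>

struct Inbox {
    std::mutex m;
    std::condition_variable cv;
    bool data_from_a = false;
    bool data_from_c = false;
};

// Thread B: wake up when A or C signals, or after 2 seconds to do periodic work.
void thread_b(Inbox& inbox)
{
    for (;;) {
        std::unique_lock<std::mutex> lock(inbox.m);
        bool signalled = inbox.cv.wait_for(lock, std::chrono::seconds(2),
            [&] { return inbox.data_from_a || inbox.data_from_c; });

        if (!signalled) {
            // Timeout: no producer signalled; do B's own periodic work here.
            continue;
        }
        if (inbox.data_from_a) { inbox.data_from_a = false; /* ... handle A's data ... */ }
        if (inbox.data_from_c) { inbox.data_from_c = false; /* ... handle C's data ... */ }
    }
}

// Producers (A and C) set their own flag and notify the same condition variable.
void signal_from_a(Inbox& inbox)
{
    { std::lock_guard<std::mutex> lock(inbox.m); inbox.data_from_a = true; }
    inbox.cv.notify_one();
}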
The answers that you seek are very complex and the space on this wiki isn't really big enough to address them all :(
What you need to do is to find yourself some good web sites that offer explanations of how threading works. Most of what you are after can be done with the correct designs, but you need a much better understanding of the concepts first.
In order to get your communications to work out you need to send signals to the right places and wait on the correct events.
The simplest way to do this which will get you something that works better than polling is to use a single condition variable that all of your threads share. When this condition is signalled they will all wake up and look for some work to do.
This is not efficient, but it is simple, it will work for you, and it is better than polling. Once you have this working you can try to introduce some new condition variables and split which threads wait on which ones -- when doing this you will make many mistakes and experience many deadlocks and starvations. Persevere and you will start to understand how this all works.
Good luck.
Although you can use condition variables for this, the problem description suggests the use of a message queue instead. Then, thread A and thread C can inject messages into B's queue, and B processes them accordingly. (Of course, to differentiate between the two threads, you should arrange for A and C to send different messages.)
I don't know what support ACE has for message queues; however, in (say) the Java concurrency framework, you can build your own poor man's message queue using ConcurrentLinkedQueue. :-)
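Since the question is C++/ACE, a poor man's version of such a queue in standard C++ could look like the sketch below; messages carry a sender tag so B can tell A's messages from C's:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Minimal blocking message queue; the Message layout is made up for illustration.
struct Message {
    enum Sender { FromA, FromC } sender;
    std::string payload;
};

class BlockingQueue {
public:
    void post(const Message& msg) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(msg);
        }
        cv_.notify_one();
    }

    Message receive() {                       // blocks until a message is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Message msg = q_.front();
        q_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Message> q_;
};

Thread B then loops on receive() and branches on msg.sender, while A and C both call post() on the same queue.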
Assuming you have some way to do critical section locking (e.g. java synchronized) or a thread safe queue, you could use a run queue.
For each thread, modify/override the sleep implementation so that when the thread waits, it adds itself to the end of the run-queue.
Assuming that only one thread should be running at a time, the last thing the currently running thread should do before it goes to sleep/waits is wake the thread at the head of the list.
If you need more complex execution/scheduling of the threads, the next step is to create a scheduler thread that walks the queue, adjusting the order of threads in the queue, checking to see which thread has all the resources it needs to run, etc.