I have a main process that uses a single-threaded library, and I can only call the library functions from the main process. I have a thread, spawned by the parent process, that puts the info it receives from the network into a queue.
I need to be able to tell the main process that something is on the queue, so it can access the queue and process the objects. The thread cannot process those objects itself, because the library can only be called by one process.
I guess I need to use pipes and signals. I have also read on various newsgroups that I need to use the 'self-pipe' trick.
How should this scenario be implemented?
A more specific case of the following post:
How can unix pipes be used between main process and thread?
Why not use a simple FIFO (named pipe)? The main process will automatically block until it can read something.
If it shouldn't block, it should be possible to poll instead, but that may waste CPU. There probably exists an efficient library for this purpose.
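Since the question describes a worker thread inside the same process, a plain pipe(2) can play the FIFO's role. Below is a minimal sketch under that assumption; work_queue, queue_mutex and the receive step are illustrative placeholders, not names from the question.

// Minimal sketch: the network thread pushes onto a shared queue and writes one
// byte to a pipe; the main loop blocks on the pipe, then drains the queue and
// calls the single-threaded library from the main thread only.
#include <unistd.h>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

static int notify_pipe[2];                  // [0] read end, [1] write end
static std::mutex queue_mutex;
static std::queue<std::string> work_queue;  // filled by the network thread

void network_thread()
{
    for (;;) {
        std::string msg = /* ...receive from the network... */ "";
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            work_queue.push(msg);
        }
        char token = 1;
        write(notify_pipe[1], &token, 1);   // wake the main loop
    }
}

int main()
{
    pipe(notify_pipe);
    std::thread receiver(network_thread);

    for (;;) {
        char token;
        read(notify_pipe[0], &token, 1);    // blocks until something was queued
        std::lock_guard<std::mutex> lock(queue_mutex);
        while (!work_queue.empty()) {
            // process work_queue.front() with the library, from this thread only
            work_queue.pop();
        }
    }
}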
I wouldn't recommend using signals because they are easy to get wrong. If you want to use them anyway, the easiest way I've found is:
Mask all signals in every thread,
A special thread handles signals with sigwait(). It may have to wake up another thread which will handle the signal, e.g. using condition variables.
The advantage is that you no longer have to worry about which functions are safe to call from a signal handler.
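A minimal sketch of that arrangement (POSIX), assuming SIGUSR1 and SIGTERM are the signals of interest:

// Block the signals everywhere, then handle them synchronously in one thread.
#include <pthread.h>
#include <signal.h>
#include <cstdio>

void* signal_thread(void*)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigaddset(&set, SIGTERM);

    for (;;) {
        int sig = 0;
        if (sigwait(&set, &sig) == 0) {
            // Runs in normal thread context, so any function may be called here,
            // e.g. locking a mutex and notifying a condition variable.
            std::printf("got signal %d\n", sig);
        }
    }
    return nullptr;
}

int main()
{
    // Block the signals in the main thread *before* creating any other thread,
    // so every thread inherits the mask and only the dedicated thread sees them.
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigaddset(&set, SIGTERM);
    pthread_sigmask(SIG_BLOCK, &set, nullptr);

    pthread_t tid;
    pthread_create(&tid, nullptr, signal_thread, nullptr);
    pthread_join(tid, nullptr);
}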
The "optimal" solution depends quite a bit on your concrete setup. Do you have one process with a main thread and a child thread or do you have one parent process and a child process? Which OS and which thread library do you use?
The reason for the last question is that the current C++03 standard has no notion of a 'thread'. This means in particular that whatever solution your OS and your thread library offer are platform specific. The most portable solutions will only hide these specifics from you in their implementation.
In particular, C++ has no notion of threads in its memory model, nor does it have a notion of atomic operations, synchronization, ordered memory accesses, race conditions etc.
Chances are, however, that whatever library you are using already provides a solution for your problem on your platform.
I highly suggest you use a thread-safe queue such as this one (article and source code). I have personally used it and it's very simple to use. The API consists of simple methods such as push(), try_pop(), wait_and_pop() and empty().
Note that it is based on Boost.Thread.
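For reference, here is a minimal sketch of that interface built on standard mutex/condition-variable primitives; it is not the linked code, just the same idea.

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class concurrent_queue {
public:
    void push(T value)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one();                 // wake one waiting consumer
    }

    bool try_pop(T& value)                  // non-blocking
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        value = std::move(queue_.front());
        queue_.pop();
        return true;
    }

    void wait_and_pop(T& value)             // blocks until an item is available
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        value = std::move(queue_.front());
        queue_.pop();
    }

    bool empty() const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }

private:
    mutable std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<T> queue_;
};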
Related
I'm working on a multi-thread scheduling assignment, which involves adding threads to a variety of queues and selecting the appropriate one to execute.
From what I can tell, pthread_cond_signal(&condition) is completely asynchronous; the signal is simply posted, and the first thread waiting on it with the appropriate pthread_cond_wait() will consume it.
However, say I have a vector of thread ids that have been pushed as each thread is created, i.e.:
threadIDVector1[0] = 3061099328
threadIDVector1[1] = 3077884736
...
threadIDVector2[0] = 3294747394
threadIDVector2[1] = 3384567393
...
etc.
And I want to send a signal specifically to the thread whose id matches the appropriate element of a vector. I.e. the algorithm would be:
While (at least one threadVector is non-empty):
Look at the first element in each vector
Select the appropriate one to signal by some criteria
Send a signal to ONLY that thread
Complete the thread and remove from threadIDVectorX
Is there some way to execute the above, or some accepted standard for achieving the same result?
There is no way to direct a condition-variable signal at a specific thread, nor to know which of the many waiting threads the OS will wake. It is entirely non-deterministic.
You could use the "multiple condition variable" solution as proposed in the comments. But my preferred solution to something like this is a pipe or socket pair. Have the thread doing the waking write something (like a single byte) to the pipe for the corresponding thread to signal it.
This has a lot of benefits in my book. First, it allows bidirectional communication. Your pseudocode loop at the end of your question seems to also want to remove a finished thread from the list, so you need to know when that thread is done. You could have another CV, or you could have the completing thread write a single byte back to the manager object before exiting. Much easier, I feel.
It also allows you to choose between blocking or nonblocking I/O, or to use synchronous multiplexing with select(2) or epoll(2). If you were not exiting from the worker threads, but instead wanted to reuse them, the notifying thread would need to know when they're ready for more work. Again, a CV would be fine here, but the file-descriptor approach allows the notifier to wait for all of the worker threads in a single select(2) call.
The last thing is that I find file descriptors simpler. pthreads are pretty complicated, and multithreading is already hard enough to get right. I find that file descriptors are easier to manage and reason about in a multithreaded context, making it easier to avoid locking problems and crashes.
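A rough sketch of that file-descriptor approach (POSIX); the Worker struct and the choice of which worker to wake are purely illustrative.

#include <unistd.h>
#include <sys/select.h>
#include <thread>
#include <vector>

struct Worker {
    int wake_fd[2];   // manager writes, worker reads
    int done_fd[2];   // worker writes, manager reads
    std::thread thread;
};

void worker_main(Worker* w)
{
    char token;
    read(w->wake_fd[0], &token, 1);    // block until this thread is chosen
    /* ...do the work... */
    write(w->done_fd[1], &token, 1);   // tell the manager we are finished
}

int main()
{
    std::vector<Worker> workers(4);
    for (auto& w : workers) {
        pipe(w.wake_fd);
        pipe(w.done_fd);
        w.thread = std::thread(worker_main, &w);
    }

    // Wake exactly one chosen worker (index 2 here, purely for illustration).
    char token = 1;
    write(workers[2].wake_fd[1], &token, 1);

    // Wait for a completion notification from any worker with one select() call.
    fd_set read_fds;
    FD_ZERO(&read_fds);
    int max_fd = 0;
    for (auto& w : workers) {
        FD_SET(w.done_fd[0], &read_fds);
        if (w.done_fd[0] > max_fd) max_fd = w.done_fd[0];
    }
    select(max_fd + 1, &read_fds, nullptr, nullptr, nullptr);

    // FD_ISSET(w.done_fd[0], &read_fds) now tells us which worker finished.
    // For the sketch, wake the remaining workers too so everything exits cleanly.
    for (auto& w : workers)
        write(w.wake_fd[1], &token, 1);
    for (auto& w : workers)
        w.thread.join();
}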
I am trying to use the multithreading features in the C++11 standard library and have the following situation envisioned.
I have a parent class which maintains a queue of threads. So something like:
std::queue<MyMTObject *> _my_threads;
The class MyMTObject contains the std::thread object.
The queue has a fixed size of 5 and the class initially starts with the queue being full.
As I have jobs to process I launch threads and I remove them from the queue. What I would like is to get a notification when the job is finished along with the pointer to the MyMTObject, so that I can reinsert them into the queue and make them available again.
I have basically 2 questions:
1: Is this a sound idea? I know I have not given specifics, but broadly speaking. I will, of course, control all access to the queue with a mutex.
2: Is there a way to implement this notification mechanism without using external libraries like Qt or Boost?
For duplicates, I did look on the site but could not find anything that was suitable to manage a collection of threads.
I'm not sure if I need to mention this, but std::thread objects can't be re-used. Generally, the only reason you keep a std::thread reference is to std::thread::join the thread. If you don't plan to join the thread later (e.g. dispatch to threads and wait for completion), it's generally advised to std::thread::detach it.
If you're trying to keep threads for a thread pool, it's probably easier to have each thread block on the std::queue and pull objects from the queue to work on. This is relatively easy to implement using a std::mutex and a std::condition_variable. It generally gives good throughput, but to get finer control over scheduling you can do things like keep a separate std::queue for each thread.
Detaching the threads and creating a work queue also has the added benefit that it avoids repeatedly asking the operating system to create new threads, which adds overhead and increases overall resource usage.
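A rough sketch of such a pool, with a fixed number of threads all blocking on one protected queue; the class and method names are illustrative.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t count = 5)
    {
        for (std::size_t i = 0; i < count; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~ThreadPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cond_.notify_all();
        for (auto& t : workers_)
            t.join();
    }

    void submit(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cond_.notify_one();
    }

private:
    void run()
    {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cond_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty())
                    return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // the thread is "available again" as soon as this returns
        }
    }

    std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> workers_;
    bool stopping_ = false;
};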
You could try to deploy some version of the Reactor pattern, I think. You could start one additional control thread that cleans up after these workers. Then you create a ThreadSafeQueue that is used to communicate events from the worker threads to the control thread. This queue should be implemented in such a way that the control thread can block on it and wait for any activity on the other end (some thread terminates and calls queue.push, for example).
All in all I think it's quite an elegant solution. It does add the overhead of an additional thread, but this thread will be mostly sleeping, waking up only once in a while to clean up after a worker.
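A small sketch of that control-thread idea, using an inline event queue instead of a separate ThreadSafeQueue class; all names are illustrative.

// Workers push their own index into a protected queue just before returning;
// the control thread sleeps on a condition variable and joins finished workers.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main()
{
    const std::size_t worker_count = 4;
    std::mutex mutex;
    std::condition_variable cond;
    std::queue<std::size_t> finished;          // "worker i has terminated" events
    std::vector<std::thread> workers;

    for (std::size_t i = 0; i < worker_count; ++i)
        workers.emplace_back([i, &mutex, &cond, &finished] {
            /* ...do the work... */
            {
                std::lock_guard<std::mutex> lock(mutex);
                finished.push(i);              // queue.push from the worker
            }
            cond.notify_one();
        });

    // Control thread: mostly sleeping, wakes only to clean up after a worker.
    std::thread control([&] {
        for (std::size_t reaped = 0; reaped < worker_count; ++reaped) {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [&] { return !finished.empty(); });
            std::size_t idx = finished.front();
            finished.pop();
            lock.unlock();
            workers[idx].join();               // the worker has already returned
        }
    });

    control.join();
}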
There is no elegant way to do this in POSIX, and the C++ threading model is little more than a thin wrapper over POSIX.
You can join a specific thread (one at a time), or you can wait on futures - again, one future at a time.
The best you can do to avoid looping is to employ a condition variable and make all threads signal on it (as well as indicate which one just exited by setting some sort of per-thread flag) just before they are about to exit. The 'reaper' would notice the signal and check the flags.
The issue is that this solution requires thread cooperation. But I don't know of anything better.
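A minimal sketch of that scheme, with per-thread "finished" flags and a reaper that waits on the condition variable; the variable names are illustrative.

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    const std::size_t count = 4;
    std::mutex mutex;
    std::condition_variable cond;
    std::vector<bool> finished(count, false);  // per-thread "I am done" flags
    std::vector<std::thread> workers;

    for (std::size_t i = 0; i < count; ++i)
        workers.emplace_back([&, i] {
            /* ...do the work... */
            {
                std::lock_guard<std::mutex> lock(mutex);
                finished[i] = true;            // set the per-thread flag...
            }
            cond.notify_one();                 // ...and signal, just before exiting
        });

    // The 'reaper': wait on the condition variable, check the flags, join.
    std::vector<bool> joined(count, false);
    std::size_t reaped = 0;
    while (reaped < count) {
        std::vector<std::size_t> ready;
        {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [&] {
                for (std::size_t i = 0; i < count; ++i)
                    if (finished[i] && !joined[i]) return true;
                return false;
            });
            for (std::size_t i = 0; i < count; ++i)
                if (finished[i] && !joined[i]) {
                    joined[i] = true;
                    ready.push_back(i);
                }
        }
        for (std::size_t i : ready) {
            workers[i].join();                 // the worker has already signalled
            ++reaped;
        }
    }
}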
I am implementing a function in a library which takes a while (up to a minute). It initializes a device. Now, generally, any long-running function should run in its own thread and report to the main thread when it completes, but I am not sure, since this function is in a library.
My dilemma is this: even if I implement this in a separate thread, another thread in the application has to wait on it. If so, why not let the application run this function in that thread anyway?
I could pass a queue or mailbox to the library function, but I would prefer a simpler mechanism, where the library can be used from VB, VC, C# or other Windows platforms.
Alternatively, I could pass the HWND of a window, and the library function can post a message to it when it completes instead of signaling an event. That seems like the most practical approach if I have to implement the function in its own thread. Is this reasonable?
Currently my function prototype is:
void InitDevice(HANDLE hWait)
When initialization is complete, I signal hWait. This works fine, but I am not convinced I should use a thread anyway when another, secondary thread will have to wait on InitDevice. Should I pass an HWND instead? That way the message will be posted to the primary thread and it will make better sense with multithreading.
In general, when I write library code, I normally try to stay away from creating threads unless it's really necessary. By creating a thread, you're forcing a particular threading model on the application. Perhaps they wish to use it from a very simplistic command-line tool where a single thread is fine. Or they could use it from a GUI tool where things must be multi-threaded.
So, instead, just give the library user the understanding that the function is a long-running blocking call, provide some callback mechanism to monitor progress, and finally provide a way to immediately halt the operation, which a multi-threaded application could use.
What you do want to be able to claim is that the library is thread-safe. Use mutexes to protect data items if there are other functions that can be called to affect the operation of the blocking function.
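As an illustration of that shape, here is a hypothetical API sketch; none of these names come from the actual library in the question.

#include <atomic>
#include <cstdio>

// Called periodically from inside the blocking call with a 0-100 progress value.
typedef void (*InitProgressCallback)(int percentComplete, void* userData);

// Blocking call: returns true on success, false if it failed or was cancelled.
// The caller may set *cancelFlag from another thread to abort early.
bool InitDeviceBlocking(InitProgressCallback onProgress,
                        void* userData,
                        std::atomic<bool>* cancelFlag)
{
    for (int step = 0; step <= 100; step += 10) {
        if (cancelFlag && cancelFlag->load())
            return false;                    // caller asked us to stop
        /* ...do one slice of the initialization... */
        if (onProgress)
            onProgress(step, userData);      // report progress; no thread created
    }
    return true;
}

int main()
{
    std::atomic<bool> cancel(false);
    InitDeviceBlocking(
        [](int pct, void*) { std::printf("init %d%%\n", pct); },
        nullptr, &cancel);
}

A GUI application can call this from a worker thread it owns and post a window message from the callback; a command-line tool can simply call it directly.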
In our application, there is heavy use of Win32 HANDLEs, via CreateEvent and SetEvent/ResetEvent, to implement synchronization.
A colleague of mine has asked me if accessing the HANDLEs for events is thread-safe.
I could not answer, since HANDLEs to GDI objects, for example, are not thread-safe...
But since events are aimed at multithreaded synchronization, I could not imagine that they aren't thread-safe.
Could you confirm this?
All handles you obtain from functions in Kernel32 are thread-safe, unless the MSDN Library article for the function explicitly mentions that they are not. There's an easy way to tell from your code: such a handle is closed with CloseHandle().
What you do with the handle may not necessarily be thread-safe. Windows won't help when you call SetEvent() twice but WaitForSingleObject() only once, which might be a threading race in your program, depending on how you use the event.
Depends on the type of handle.
A synchronization handle (like one created by CreateEvent) is by definition thread safe.
A file handle, when written to by multiple threads simultaneously, not so much.
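A small sketch of the typical (and safe) cross-thread use of an event handle; the same HANDLE value is used from both threads.

#include <windows.h>
#include <cstdio>

static HANDLE g_event;

DWORD WINAPI Worker(LPVOID)
{
    /* ...do some work... */
    SetEvent(g_event);                        // signal from the worker thread
    return 0;
}

int main()
{
    // Auto-reset event, initially non-signalled.
    g_event = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    HANDLE thread = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);

    WaitForSingleObject(g_event, INFINITE);   // wait in the main thread
    std::printf("worker signalled the event\n");

    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    CloseHandle(g_event);
}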
I have read in multiple places that Boost.Signals is not thread-safe, but I haven't found many more details about it. That bare statement doesn't really say much. Most applications nowadays have threads; even if they try to be single-threaded, some of their libraries may use threads (for example libsdl).
I guess the implementation has no problems as long as other threads are not accessing the slot, so it is at least thread-safe in that sense.
But what exactly works and what would not work? Would it work to use it from multiple threads as long as I don't ever access it at the same time? I.e. if I build my own mutexes around the slot?
Or am I forced to use the slot only in that thread where I created it? Or where I used it for the first time?
I don't think it's too clear either, and one of the library reviewers said here:
I also didn't like the fact that the word 'thread' was mentioned only three times. Boost.Signals2 wants to be a 'thread-safe signals' library. Therefore some more details, and especially more examples concerning that area, should be given to the user.
One way of figuring it out is to go to the source and see what they're using _mutex / lock() to protect. Then just imagine what would happen if those calls weren't there. :)
From what I can gather, it's ensuring simple things like "if one thread is doing connects or disconnects, that won't cause a different thread which is iterating through the slots attached to those signals to crash". Kind of like how using a thread-safe version of the C runtime library assures that if two threads make valid calls to printf at the same time then there won't be a crash. (Not to say the output you'll get will make any sense—you're still responsible for the higher order semantics.)
It doesn't seem to be like Qt, in which the thread a certain slot's code gets run on is based on the target slot's "thread affinity" (which means emitting a signal can trigger slots on many different threads to run in parallel.) But I guess not supporting that is why the boost::signal "combiners" can do things like this.
One problem I see is that one thread can connect or disconnect while another thread is signalling.
You can easily wrap your signal and connect calls with mutexes. However, it is non-trivial to wrap the connections (connect returns connection objects which you can use to disconnect).
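A sketch of such a wrapper, assuming the original Boost.Signals API (boost/signal.hpp). It serialises connect() and emission, but, as noted, the returned connection objects are still used outside the lock, which is the non-trivial part.

#include <boost/signal.hpp>
#include <mutex>

class guarded_signal {
public:
    boost::signals::connection
    connect(const boost::signal<void(int)>::slot_type& slot)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return signal_.connect(slot);
    }

    void operator()(int value)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        signal_(value);   // no connect/disconnect can run concurrently with emission
    }

private:
    std::mutex mutex_;
    boost::signal<void(int)> signal_;
};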