Let's say that I have a switch statement in my thread function that evaluates for triggered events. Each case is a different event. Is it better to put the call to ResetEvent at the end of the case, or at the beginning? It seems to me that it should go at the end, so that the event cannot be triggered again until the thread has finished processing the previous event. If it is placed at the beginning, the event could be triggered again while being processed.
Yes, I think that is the way to go. Create a manual-reset event (the second parameter of the CreateEvent API) so that the event is not automatically reset when a waiting thread is released.
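For illustration, here is a minimal sketch of that arrangement with two manual-reset events; DoWorkForEvent0/1() are placeholders, and note that, as pointed out below, a set that arrives while a case is still running will be lost when ResetEvent is finally called:

    #include <windows.h>

    // Two manual-reset events, reset at the END of each case, after the work
    // for that event is done.
    HANDLE g_events[2] = {
        CreateEvent(nullptr, TRUE, FALSE, nullptr),   // manual-reset, non-signaled
        CreateEvent(nullptr, TRUE, FALSE, nullptr)
    };

    DWORD WINAPI ThreadFunc(LPVOID /*param*/)
    {
        for (;;) {
            DWORD wait = WaitForMultipleObjects(2, g_events, FALSE, INFINITE);
            switch (wait) {
            case WAIT_OBJECT_0 + 0:
                // DoWorkForEvent0();            // hypothetical work
                ResetEvent(g_events[0]);         // only now can this event fire again
                break;
            case WAIT_OBJECT_0 + 1:
                // DoWorkForEvent1();
                ResetEvent(g_events[1]);
                break;
            default:
                return 1;                        // WAIT_FAILED or abandoned handle
            }
        }
    }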
If you handle incoming traffic using a single Event object (implying you have no inbound queue), you will miss events. Is this really what you want?
If you want to catch all events, a full-blown producer-consumer queue would be a better bet. A reference implementation for Boost.Thread is here.
One problem that comes up time and again with multi-threaded code is how to transfer data from one thread to another. For example, one common way to parallelize a serial algorithm is to split it into independent chunks and make a pipeline: each stage in the pipeline can be run on a separate thread, and each stage adds the data to the input queue for the next stage when it's done. For this to work properly, the input queue needs to be written so that data can safely be added by one thread and removed by another thread without corrupting the data structure.
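A minimal sketch of such a queue, using std::mutex and std::condition_variable (the linked reference implementation uses the Boost equivalents); the ConcurrentQueue name and its methods are just illustrative:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Minimal producer-consumer queue sketch: one thread pushes, another pops.
    // The condition variable lets the consumer sleep until data arrives instead
    // of spinning or missing events.
    template <typename T>
    class ConcurrentQueue {
    public:
        void push(T value) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(value));
            }
            cond_.notify_one();           // wake one waiting consumer
        }

        // Blocks until an item is available, then removes and returns it.
        T wait_and_pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            T value = std::move(queue_.front());
            queue_.pop();
            return value;
        }

    private:
        std::queue<T> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;
    };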
I'm creating an async gRPC server in C++. One of the methods streams data from the server to clients - it's used to send data updates to clients. The frequency of the data updates isn't predictable. They could be nearly continuous or as infrequent as once per hour. The model used in the gRPC example with the "CallData" class and the CREATE/PROCESS/FINISH states doesn't seem like it would work very well for that. I've seen an example that shows how to create a 'polling' loop that sleeps for some time and then wakes up to check for new data, but that doesn't seem very efficient.
Is there another way to do this? If I use the "CallData" method can it block in the 'PROCESS' state until there's data (which probably wouldn't be my first choice)? Or better, can I structure my code so I can notify a gRPC handler when data is available?
Any ideas or examples would be appreciated.
In a server-side streaming example, you probably need more states, because you need to track whether there is currently a write already in progress. I would add two states, one called WRITE_PENDING that is used when a write is in progress, and another called WRITABLE that is used when a new message can be sent immediately. When a new message is produced, if you are in state WRITABLE, you can send immediately and go into state WRITE_PENDING, but if you are in state WRITE_PENDING, then the newly produced message needs to go into a queue to be sent after the current write finishes. When a write finishes, if the queue is non-empty, you can grab the next message from the queue and immediately start a write for it; otherwise, you can just go into state WRITABLE and wait for another message to be produced.
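To make that bookkeeping concrete, here is a rough, self-contained sketch of the state machine; StartWriteOnCompletionQueue() is a hypothetical placeholder for issuing the asynchronous Write on your stream and tagging it so that OnWriteFinished() runs when the completion queue reports it done:

    #include <queue>
    #include <string>

    // Sketch of the write-side state machine described above. Not tied to a
    // real gRPC service; the Write call itself is stubbed out.
    class StreamWriter {
    public:
        // Called whenever the application produces a new message for this client.
        void OnNewMessage(const std::string& msg) {
            if (state_ == State::WRITABLE) {
                state_ = State::WRITE_PENDING;
                StartWriteOnCompletionQueue(msg);   // send immediately
            } else {
                pending_.push(msg);                 // a write is in flight; queue it
            }
        }

        // Called when the completion queue reports that the previous write finished.
        void OnWriteFinished() {
            if (!pending_.empty()) {
                std::string next = pending_.front();
                pending_.pop();
                StartWriteOnCompletionQueue(next);  // stay in WRITE_PENDING
            } else {
                state_ = State::WRITABLE;           // nothing queued; wait for more data
            }
        }

    private:
        enum class State { WRITABLE, WRITE_PENDING };

        void StartWriteOnCompletionQueue(const std::string& /*msg*/) {
            // Placeholder: in real code this would issue the asynchronous Write
            // with a tag that routes back to OnWriteFinished().
        }

        State state_ = State::WRITABLE;
        std::queue<std::string> pending_;
    };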
There should be no need to block here, and you probably don't want to do that anyway, because it would tie up a thread that should otherwise be polling the completion queue. If all of your threads wind up blocked that way, you will be blind to new events (such as new calls coming in).
An alternative here would be to use the C++ sync API, which is much easier to use. In that case, you can simply write straight-line blocking code. But the cost is that it creates one thread on the server for each in-progress call, so it may not be feasible, depending on the amount of traffic you're handling.
I hope this information is helpful!
I have a program that has a thread that generates Expose messages using XSendEvent. A second thread receives the Expose messages along with other messages (mainly input handling). The problem is that the sending thread sends the Expose messages at a constant rate (~60Hz), but the receiving thread may be rendering slower than that. The X11 queue will get bogged down with extra Expose messages, and any input handling messages will start to fall way behind all those extra Expose messages.
In Windows, this is not a problem because Windows will automatically coalesce all WM_PAINT messages into a single message. Is there any way to do this in X11, or some other way to solve this problem?
You can very easily coalesce any kind of event yourself with XCheckTypedEvent() and friends.
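For instance, a rough sketch (assuming the Display* and Window are already available): after pulling one Expose event off the queue, swallow any further Expose events already queued for that window, then redraw once.

    #include <X11/Xlib.h>

    // Coalesce queued Expose events for one window: discard any further Expose
    // events that are already pending, so only a single redraw is performed.
    void handle_expose(Display* display, Window window)
    {
        XEvent dummy;
        // Remove every Expose event currently queued for this window.
        while (XCheckTypedWindowEvent(display, window, Expose, &dummy)) {
            // intentionally empty: we only care that they are consumed
        }
        // redraw_window(display, window);   // hypothetical redraw routine
    }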
I was able to solve this problem as follows:
Block the rendering thread using XPeekEvent.
When an event comes in, read all events into a new queue data structure using a combination of XPending and XNextEvent, but only copy the first Expose message.
Then run the event processing loop over the new queue data structure (sketched below).
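A rough sketch of steps 2 and 3, with process_event() as a hypothetical handler and the display assumed to be open:

    #include <X11/Xlib.h>
    #include <vector>

    // Drain the X11 queue into a local vector, keeping only the first Expose
    // event and dropping the rest, then process the local queue.
    void pump_events(Display* display)
    {
        std::vector<XEvent> local_queue;
        bool seen_expose = false;

        while (XPending(display) > 0) {
            XEvent event;
            XNextEvent(display, &event);
            if (event.type == Expose) {
                if (seen_expose)
                    continue;            // coalesce: keep only the first Expose
                seen_expose = true;
            }
            local_queue.push_back(event);
        }

        for (const XEvent& event : local_queue) {
            (void)event;                 // process_event(event); input, redraw, etc.
        }
    }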
This fixed the problem for me, but I think a solution that uses XCheckTypedEvent (per n.m.'s answer here) is probably more elegant.
A few things you can do:
If you are doing a complete redraw for each event, only act on Expose events with a count of 0; events with a non-zero count just mean more rectangles for the same exposure are still to come.
If you generate Expose events for only part of the window, each Expose event will have less work to do.
Since the sender runs at a constant rate, you could also process only every nth event, or record the time of the last event you handled and ignore events received within a given interval.
I've been reading the libuv book; however, the section on check and prepare watchers is incomplete, so the only info I found was in uv.h:
/*
* uv_prepare_t is a subclass of uv_handle_t.
*
* Every active prepare handle gets its callback called exactly once per loop
* iteration, just before the system blocks to wait for completed i/o.
*/
and
/*
* uv_check_t is a subclass of uv_handle_t.
*
* Every active check handle gets its callback called exactly once per loop
* iteration, just after the system returns from blocking.
*/
I was wondering if there's any special usage of libuv's check and prepare watchers.
I'm writing a native node.js binding to a C++ library that needs to handle events fired from different threads, so naturally the callbacks should be called from the main thread. I tried using uv_async_t; however, libuv does not guarantee that the callback will be invoked once for every uv_async_send, so this does not work for me.
That's why I decided to go with my own thread-safe event queue, which I want to check periodically. So I was wondering whether using a check or prepare watcher would be OK for this purpose.
Actually, my current solution does use a uv_async_t watcher - every time I receive an event, I put it in the queue and call uv_async_send - so when the callback is finally invoked, I handle all events currently in the queue.
My concern with this approach is that many events might queue up before the callback is triggered and might get invalidated in the meantime (by invalidated, I mean it has become pointless to handle them at that point).
So I want to be able to check the event queue as frequently as possible - which check/prepare watchers can provide - but maybe it's overkill to do it (and lock a mutex) on every event loop iteration?
And, more importantly, maybe they are supposed to serve some more special purpose than just guaranteeing once-per-loop-iteration callback invocation?
Thanks
You could use a prepare handle to check your queue for events, and an async handle just to wake up the loop.
If you use only a prepare handle, you could end up in a situation where the loop is blocked for I/O and nobody processes the queue until polling finishes. The async handle "wakes up" the loop, and the next time prepare handles run, you process the queue.
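A minimal sketch of that combination (libuv 1.x callback signatures, a std::mutex-protected queue, and handle_event() as a placeholder for whatever actually dispatches to your JS callbacks; the main() is only there to make the sketch self-contained):

    #include <uv.h>
    #include <mutex>
    #include <queue>
    #include <string>

    // Shared state: worker threads push events, the loop thread drains them.
    static std::mutex g_mutex;
    static std::queue<std::string> g_events;   // placeholder event type

    static uv_prepare_t g_prepare;
    static uv_async_t g_async;

    // Runs once per loop iteration, just before the loop blocks for I/O.
    static void on_prepare(uv_prepare_t* /*handle*/) {
        std::queue<std::string> local;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            std::swap(local, g_events);        // grab everything, release the lock
        }
        while (!local.empty()) {
            // handle_event(local.front());    // hypothetical: invoke JS callbacks here
            local.pop();
        }
    }

    // The async callback does nothing itself; uv_async_send() is only used to
    // wake the loop so the prepare callback runs promptly.
    static void on_async(uv_async_t* /*handle*/) {}

    // Called from any thread when a new event is produced.
    void post_event(const std::string& ev) {
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_events.push(ev);
        }
        uv_async_send(&g_async);               // safe to call from any thread
    }

    int main() {
        uv_loop_t* loop = uv_default_loop();
        uv_prepare_init(loop, &g_prepare);
        uv_prepare_start(&g_prepare, on_prepare);
        uv_async_init(loop, &g_async, on_async);
        return uv_run(loop, UV_RUN_DEFAULT);
    }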
I am using SDL for the view parts of my game project, and I want to handle key press events without interrupting the main thread. So I decided to run an infinite loop in another view thread to catch any events and inform the main thread. However, I am not sure this is the best approach, since it may add extra load and decrease system performance. Is there a better way to do this kind of thing?
Thanks.
Don't bother with another thread. What's the point?
What does your main thread do? I imagine something like this:
Update Logic
Render
Goto 1
If you receive input after (or during) the update cycle then you have to wait till the next update cycle before you'll see the effects. The same is true during rendering. You might as well just check for input before the update cycle and do it all singlethreaded.
Input
Update Logic
Render
Goto 1
Multithreading gains nothing here and just increases complexity.
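A minimal single-threaded loop along those lines (SDL 1.2-era calls, since that matches the rest of the discussion; update() and render() are placeholders):

    #include <SDL.h>

    // Single-threaded game loop: poll all pending input first, then update,
    // then render.
    int main(int /*argc*/, char** /*argv*/)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);   // SDL 1.2-style setup

        bool running = true;
        while (running) {
            // 1. Input: drain everything that arrived since last frame.
            SDL_Event event;
            while (SDL_PollEvent(&event)) {
                if (event.type == SDL_QUIT)
                    running = false;
                else if (event.type == SDL_KEYDOWN) {
                    // handle_key(event.key.keysym.sym);  // hypothetical
                }
            }
            // 2. Update logic.
            // update();
            // 3. Render.
            // render();
        }

        SDL_Quit();
        return 0;
    }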
For some added reading, check out Christer Ericson's blog post about input latency (he's the director of technology for the team that makes God of War).
And I want to handle key press events without interrupting the main thread.
SDL is not inherently an interrupt or event driven framework. IO occurs by reading events off of the event queue by calling SDL_WaitEvent or SDL_PollEvent. This must occur in the "main" thread, the one that called SDL_SetVideoMode.
That's not to say you cannot use multiple threads, and there's good justification for doing so, for instance, it can simplify network communication if it doesn't have to rely on the SDL event loop. If you want the simulation to occur in a separate thread, then it can pass information back and forth through synchronized shared objects. In particular, you can always put events into the SDL event queue safely from any thread.
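As a sketch of that last point, relying on the answer's observation that the event queue can be pushed to from any thread (again SDL 1.2-era calls; the worker function and the meaning of the user-event fields are made up for the example):

    #include <SDL.h>
    #include <SDL_thread.h>

    // A worker thread posts a user event to the main SDL event queue when it
    // finishes some work; the payload can travel in the user event's pointers.
    static int worker(void* /*unused*/)
    {
        // ... do some background work, e.g. network or simulation ...
        SDL_Event done;
        done.type = SDL_USEREVENT;
        done.user.code = 1;              // hypothetical "work finished" code
        done.user.data1 = nullptr;       // could point to a result object
        done.user.data2 = nullptr;
        SDL_PushEvent(&done);            // main thread sees this in SDL_PollEvent
        return 0;
    }

    int main(int /*argc*/, char** /*argv*/)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
        SDL_Thread* thread = SDL_CreateThread(worker, nullptr);

        bool running = true;
        while (running) {
            SDL_Event event;
            while (SDL_PollEvent(&event)) {
                if (event.type == SDL_QUIT)
                    running = false;
                else if (event.type == SDL_USEREVENT) {
                    // react to the worker's notification (event.user.code/data1)
                }
            }
            // update and render as usual
        }

        SDL_WaitThread(thread, nullptr);
        SDL_Quit();
        return 0;
    }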
I'm programming an online game for two reasons: one, to familiarize myself with server/client requests in a realtime environment (as opposed to something like a typical web browser, which is not realtime), and two, to actually get my hands wet in that area so I can go on to design one properly.
Anywho, I'm doing this in C++, and I've been using Winsock to handle my basic, basic network tests. I obviously want to use a frame limiter and have 3D going and all of that at some point, and my main issue is that when I do a send() or recv(), the program kindly idles there and waits for a response. That would lead to maybe 8 fps on even the best internet connection.
So the obvious solution to me is to take the networking code out of the main process and start it up in its own thread. Ideally, I would call a "send" in my main process which would pass the networking thread a pointer to the message, and then periodically (every frame) check to see if the networking thread had received the reply, or timed out, or what have you. In a perfect world, I would actually have 2 or more networking threads running simultaneously, so that I could say run a chat window and do a background download of a piece of armor and still allow the player to run around all at once.
The bulk of my problem is that this is a new thing to me. I understand the concept of threading, but I can see some serious issues, like what happens if two threads try to read/write the same memory address at the same time, etc. I know that there are already methods in place to handle this sort of thing, so I'm looking for suggestions on the best way to implement something like this. Basically, I need thread A to be able to start a process in thread B by sending it a chunk of data, poll thread B's status, and then receive the reply, also as a chunk of data, ideally without any major crashing going on. ^_^ I'll worry about what that data actually contains and how to handle dropped packets, etc. later; I just need to get that happening first.
Thanks for any help/advice.
PS: I just thought about this, and it may make the question simpler. Is there a way to use the Windows event handling system to my advantage? Like, would it be possible to have thread A initialize data somewhere, then trigger an event in thread B to have it pick up the data, and vice versa for thread B to tell thread A it was done? That would probably solve a lot of my problems, since I don't really need both threads to be able to work on the data at the same time; it's more of a baton pass, really. I just don't know if this is possible between two different threads. (I know one thread can create its own messages for the event handler.)
The easiest thing
for you to do would be to simply invoke the Windows API QueueUserWorkItem. All you have to specify is the function the thread will execute and the input passed to it. A thread pool will be created for you automatically and the jobs executed in it. New threads will be created as and when required.
http://msdn.microsoft.com/en-us/library/ms684957(VS.85).aspx
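A bare-bones usage sketch (the SendJob function and its context argument are placeholders; error handling is minimal):

    #include <windows.h>
    #include <cstdio>

    // Work item executed on a thread-pool thread. The signature required by
    // QueueUserWorkItem is DWORD WINAPI fn(PVOID).
    static DWORD WINAPI SendJob(PVOID context)
    {
        const char* message = static_cast<const char*>(context);
        // ... do the blocking network send here ...
        std::printf("worker handling: %s\n", message);
        return 0;
    }

    int main()
    {
        static const char* job = "hello, server";   // must outlive the work item
        if (!QueueUserWorkItem(SendJob, const_cast<char*>(job), WT_EXECUTEDEFAULT)) {
            std::printf("QueueUserWorkItem failed: %lu\n", GetLastError());
            return 1;
        }
        Sleep(1000);   // crude: give the pool thread time to run in this demo
        return 0;
    }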
More Control
You could have more detailed control using another set of APIs, which can again manage the thread pool for you -
http://msdn.microsoft.com/en-us/library/ms686980(VS.85).aspx
Do it yourself
If you want to control all aspects of thread creation and pool management, you will have to create the threads yourself and decide how they should end, how many to create, etc. (_beginthreadex is the API you should be using to create threads; if you use MFC, you should use the AfxBeginThread function).
Send jobs to worker threads - I/O completion ports
In this case, you would also have to worry about how to communicate your jobs - I would recommend I/O completion ports for that. It is the most scalable notification mechanism that I currently know of for this purpose. It has the additional advantage that it is implemented in the kernel, so you avoid all kinds of deadlock situations you would encounter if you decided to hand-roll something yourself.
This article will show you how with code samples -
http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx
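A stripped-down sketch of using a completion port purely as a job queue (the Job struct, key values, and the single worker thread are invented for the example; the linked article covers the real details):

    #include <windows.h>
    #include <process.h>
    #include <cstdio>

    // A completion port used as a thread-safe job queue. The worker blocks in
    // GetQueuedCompletionStatus(); the main thread posts jobs with
    // PostQueuedCompletionStatus().
    struct Job {
        int id;
    };

    static const ULONG_PTR kJobKey  = 1;
    static const ULONG_PTR kQuitKey = 2;

    static unsigned __stdcall Worker(void* param)
    {
        HANDLE port = static_cast<HANDLE>(param);
        for (;;) {
            DWORD bytes = 0;
            ULONG_PTR key = 0;
            LPOVERLAPPED overlapped = nullptr;
            if (!GetQueuedCompletionStatus(port, &bytes, &key, &overlapped, INFINITE))
                break;
            if (key == kQuitKey)
                break;
            Job* job = reinterpret_cast<Job*>(overlapped);  // Job* smuggled here
            std::printf("worker got job %d\n", job->id);
            delete job;
        }
        return 0;
    }

    int main()
    {
        // No file handle: the port is used only for PostQueuedCompletionStatus.
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
        HANDLE thread = reinterpret_cast<HANDLE>(
            _beginthreadex(nullptr, 0, Worker, port, 0, nullptr));

        for (int i = 0; i < 3; ++i)
            PostQueuedCompletionStatus(port, 0, kJobKey,
                                       reinterpret_cast<LPOVERLAPPED>(new Job{i}));

        PostQueuedCompletionStatus(port, 0, kQuitKey, nullptr);  // ask worker to exit
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        CloseHandle(port);
        return 0;
    }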
Communicate Back - Windows Messages
You could use Windows messages to communicate status back to your parent thread, since it is doing a message wait anyway. Use the PostMessage function to do this (and check for errors).
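For example (WM_APP_SEND_DONE, NotifySendDone, and the meanings of wParam/lParam are invented for the illustration; hwndMain is assumed to belong to the thread running the message loop):

    #include <windows.h>

    // Hypothetical custom message and a helper the worker thread calls when a
    // send completes.
    const UINT WM_APP_SEND_DONE = WM_APP + 1;

    void NotifySendDone(HWND hwndMain, int requestId, bool succeeded)
    {
        if (!PostMessage(hwndMain, WM_APP_SEND_DONE,
                         static_cast<WPARAM>(requestId),
                         succeeded ? 1 : 0)) {
            // PostMessage can fail (e.g. queue full); log GetLastError() here.
        }
    }

    // In the main window procedure:
    //   case WM_APP_SEND_DONE:
    //       OnSendDone(static_cast<int>(wParam), lParam != 0);
    //       return 0;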
PS: You could also allocate the data that needs to be sent out behind a dedicated pointer, and the worker thread could take care of deleting it after sending it out. That way you avoid the return-pointer traffic too.
BlodBath's suggestion of non-blocking sockets is potentially the right approach.
If you're trying to avoid using a multithreaded approach, then you could investigate the use of setting up overlapped I/O on your sockets. They will not block when you do a transmit or receive, but have the added bonus of giving you the option of waiting for multiple events within your single event loop. When your transmit has finished, you will receive an event. (see this for some details)
This is not incompatible with a multithreaded approach, so there's the option of changing your mind later. ;-)
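As a rough illustration of that idea (assuming an already-connected socket; buffer management and error handling are minimal), an event-based overlapped send looks something like this:

    #include <winsock2.h>
    #include <cstdio>

    #pragma comment(lib, "ws2_32.lib")

    // Sketch of an event-based overlapped send on an already-connected socket.
    bool OverlappedSend(SOCKET sock, const char* data, int len)
    {
        WSAOVERLAPPED overlapped = {};
        overlapped.hEvent = WSACreateEvent();      // signaled when the send completes

        WSABUF buf;
        buf.buf = const_cast<char*>(data);
        buf.len = static_cast<ULONG>(len);

        int rc = WSASend(sock, &buf, 1, nullptr, 0, &overlapped, nullptr);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
            WSACloseEvent(overlapped.hEvent);
            return false;                           // immediate failure
        }

        // In a real loop you would pass several events (sends, receives, user
        // events) to one WSAWaitForMultipleEvents call; here we wait on just one.
        WSAWaitForMultipleEvents(1, &overlapped.hEvent, TRUE, WSA_INFINITE, FALSE);

        DWORD sent = 0;
        DWORD flags = 0;
        BOOL ok = WSAGetOverlappedResult(sock, &overlapped, &sent, FALSE, &flags);
        std::printf("overlapped send finished, %lu bytes, ok=%d\n", sent, ok);

        WSACloseEvent(overlapped.hEvent);
        return ok == TRUE;
    }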
On the design of your multithreaded app: the best thing to do is to work out all of the external activities that you want to be alerted to. For example, so far in your question you've listed network transmits, network receives, and user activity.
Depending on the number of concurrent connections you're going to be dealing with you'll probably find it conceptually simpler to have a thread per socket (assuming small numbers of sockets), where each thread is responsible for all of the processing for that socket.
Then you can implement some form of messaging system between your threads as RC suggested.
Arrange your system so that when a message is sent to a particular thread, an event is also signaled. Your threads can then sleep waiting for one of those events (as well as any other stimulus, like socket events, user events, etc.).
You're quite right that you need to be careful of situations where more than one thread is trying to access the same piece of memory. Mutexes and semaphores are the things to use there.
Also be aware of the limitations that your GUI has when it comes to multithreading.
Some discussion on the subject can be found in this question.
But the abbreviated version is that most GUIs (and Windows is one of these) don't allow multiple threads to perform GUI operations simultaneously. To get around this problem, you can make use of the message pump in your application by sending custom messages to your GUI thread to get it to perform GUI operations.
I suggest looking into non-blocking sockets for the quick fix. With non-blocking sockets, send() and recv() do not block, and using the select() function you can check for any waiting data every frame.
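A rough per-frame sketch with Winsock (assuming the socket is already connected; error handling is trimmed):

    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")

    // Put an already-connected socket into non-blocking mode, then check it once
    // per frame. recv() returns immediately; WSAEWOULDBLOCK just means "no data".
    void make_non_blocking(SOCKET sock)
    {
        u_long mode = 1;
        ioctlsocket(sock, FIONBIO, &mode);
    }

    // Call once per frame from the game loop.
    void poll_network(SOCKET sock)
    {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(sock, &readable);

        timeval timeout = {0, 0};    // zero timeout: return immediately
        // First parameter of select() is ignored on Windows.
        if (select(0, &readable, nullptr, nullptr, &timeout) > 0 &&
            FD_ISSET(sock, &readable)) {
            char buffer[1024];
            int received = recv(sock, buffer, sizeof(buffer), 0);
            if (received > 0) {
                // handle_packet(buffer, received);   // hypothetical
            } else if (received == SOCKET_ERROR &&
                       WSAGetLastError() != WSAEWOULDBLOCK) {
                // real error; a return of 0 means the connection closed
            }
        }
    }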
See it as a producer-consumer problem: when receiving, your network communication thread is the producer whereas the UI thread is the consumer. When sending, it's just the opposite. Implement a simple buffer class which gives you methods like push and pop (pop should be blocking for the network thread and non-blocking for the UI thread).
Rather than using the Windows event system, I would prefer something that is more portable, for example Boost condition variables.
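A small sketch of such a buffer, using std::mutex and std::condition_variable as stand-ins for the Boost classes mentioned above (the interfaces are essentially the same):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Buffer shared between the network thread and the UI thread. pop() blocks
    // (used by the network thread waiting for outgoing messages); try_pop()
    // never blocks (used by the UI thread to poll for replies once per frame).
    template <typename T>
    class MessageBuffer {
    public:
        void push(T msg) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(msg));
            }
            cond_.notify_one();
        }

        T pop() {                                   // blocking
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            T msg = std::move(queue_.front());
            queue_.pop();
            return msg;
        }

        bool try_pop(T& out) {                      // non-blocking
            std::lock_guard<std::mutex> lock(mutex_);
            if (queue_.empty())
                return false;
            out = std::move(queue_.front());
            queue_.pop();
            return true;
        }

    private:
        std::queue<T> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;
    };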
I don't code games, but I've used a system similar to what pukku suggested. It lends nicely to doing things like having the buffer prioritize your messages to be processed if you have such a need.
I think of them as mailboxes per thread. You want to send a packet? Have the ProcessThread create a "thread message" with the payload that goes on the wire and "send" it to the NetworkThread (i.e. push it onto the NetworkThread's queue/mailbox and signal the NetworkThread's condition variable so it wakes up and pulls the message off). When the NetworkThread receives the response, it packages it up in a thread message and sends it back to the ProcessThread in the same manner. The difference is that the ProcessThread won't be blocked on a condition variable; it just polls mailbox.empty() when it wants to check for the response.
You may want to push and pop directly, but a more convenient way for larger projects is to implement a toThreadName/fromThreadName scheme in a ThreadMsg base class, plus a PostOffice that threads register their Mailbox with. The PostOffice then has a send(ThreadMsg*) function that pushes messages to the appropriate Mailbox based on the to and from fields. Mailbox (the buffer/queue class) provides a ThreadMsg* receiveMessage() function, which basically pops a message off the underlying queue.
Depending on your needs, you could have ThreadMsg contain a virtual process(...) function that is overridden accordingly in derived classes, or just have an ordinary ThreadMessage class with to and from members and a getPayload() function to get back the raw data and deal with it directly in the ProcessThread.
Hope this helps.
Some topics you might be interested in:
mutex: A mutex lets you ensure that only one thread at a time accesses a specific resource.
semaphore: A counter that tracks how many threads may still access a certain resource (i.e. how many are currently using it) and gates access accordingly. A mutex is a special case of a semaphore.
critical section: a mutex-protected piece of code (street with only one lane) that can only be travelled by one thread at a time.
message queue: a way of distributing messages in a centralized queue
inter-process communication (IPC) - a way of threads and processes to communicate with each other through named pipes, shared memory and many other ways (it's more of a concept than a special technique)
All of the topics above can be easily looked up on a search engine.
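As a tiny illustration of the mutex and semaphore entries above (Win32 calls; the count of 3 is picked arbitrarily):

    #include <windows.h>

    // At most 3 threads may be inside the "resource" at once; a mutex is the
    // same idea with a count of 1, and the guarded code is the critical section.
    HANDLE g_semaphore = CreateSemaphore(nullptr, 3, 3, nullptr);

    void UseSharedResource()
    {
        WaitForSingleObject(g_semaphore, INFINITE);   // acquire one slot
        // ... at most three threads execute this section concurrently ...
        ReleaseSemaphore(g_semaphore, 1, nullptr);    // give the slot back
    }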