I've been looking into OpenGL programming as a C++ programmer, and have seen two primary ways of dealing with event-driven programming: message polling or callback functions.
I see that the native Win32API uses a callback function, which is triggered by the DispatchMessage function.
SDL (based on the tutorials) also uses some sort of callback or callback-like programming.
GLFW also uses callbacks.
SFML lets the programmer poll for individual messages anywhere in the code, usually in a loop, forming the message loop.
The X Window system, based on what I have seen, also uses message polling.
Clearly, since both approaches are used in prominent environments, each must have its advantages. I was hoping someone could tell me the advantages and disadvantages of each. I am thinking of writing some programs which would depend heavily on event-driven programming, and I would like to make the best decision on which path to take.
This isn't going to be complete, but here's a few things that come to mind...
I've only ever used GL for 3D, and haven't really done much in the way of GUIs. Polling for events is pretty common. More precisely, polling in a main rendering loop which processes all events in a queue and then moves on to rendering. This is because you re-render everything from scratch each frame after collecting all events and using them to update the scene's 3D state. Since a screen can only display images at a limited frame rate, it's also common to sleep during polling as any state updates won't get shown till later even if their events are triggered sooner.
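As a rough illustration, here's a minimal SDL2-style sketch of that structure (window setup stripped down; render_scene is a placeholder for your actual GL drawing and buffer swap):

    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        (void)argc; (void)argv;
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* window = SDL_CreateWindow("demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);

        bool running = true;
        while (running)
        {
            // Drain every pending event before touching the scene state.
            SDL_Event event;
            while (SDL_PollEvent(&event))
            {
                if (event.type == SDL_QUIT)
                    running = false;
                // ...update 3D state from keyboard/mouse events here...
            }

            // Re-render everything from scratch for this frame.
            // render_scene();

            SDL_Delay(16);  // crude sleep; real code would sync to vsync or a timer
        }

        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }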
If you were to process events exactly as they happen, such as part-way through drawing, then you have race conditions. Dealing with this may be an unnecessary hassle.
If you have anything animating then you already have a loop and polling is a trivial cost in contrast.
If your events are very infrequent, then you don't need to re-draw often, so having a thread active and polling is a little inefficient.
It'll be quite bad if events pile up and you're re-drawing for each one. You might find you're re-drawing more often than you would with a loop that processes all events and renders once.
I think the main issue with polling is for inactive windows that aren't in focus. Let's say you minimize your GL app. You know it won't receive any events, so polling is useless. So is drawing, for that matter.
Another issue is response latency. This is quite important for something like capturing mouse movement in a game. As long as you poll for events in the right order (input→update→display) this is generally OK. However, vsync can mess with the timing by delaying frames from being displayed.
I'm currently creating lightweight GUI libraries for Linux based on OpenGL and evdev.
The first one, developed in C, led me to implement a message architecture, inspired by the way pipes are used for multithreaded communication.
For the second one, in C++, I only use callbacks, but the evdev stack in Linux is message driven.
My conclusion is that for peripherals (e.g. a mouse) which can trigger interrupts faster than the program can respond, you need a FIFO layer (usually a pipe) to make the communication between the two contexts asynchronous. In other words: messages are just asynchronous, buffered callbacks in a multithreaded environment.
You may also use callback FIFOs to buffer your events. But coordinating variables among threads is not always easy (semaphores, locking, etc.). Using messages as the only interprocess synchronization mechanism helps keep that simple.
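Within a single process, the same idea can be sketched with a small thread-safe queue instead of an actual pipe. Message and MessageQueue below are purely illustrative names (in the evdev case, Message would wrap the data read from the device):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    struct Message { int type; int value; };   // hypothetical event payload

    // Minimal FIFO: the input/interrupt side pushes, the GUI side pops.
    class MessageQueue {
    public:
        void push(const Message& m) {
            { std::lock_guard<std::mutex> lock(mutex_); queue_.push(m); }
            cond_.notify_one();
        }
        Message pop() {                        // blocks until a message arrives
            std::unique_lock<std::mutex> lock(mutex_);
            cond_.wait(lock, [this] { return !queue_.empty(); });
            Message m = queue_.front();
            queue_.pop();
            return m;
        }
    private:
        std::queue<Message> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;
    };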
Related
I can't seem to find a good answer to this:
I'm making a game, and I want the logic loop to be separate from the graphics loop. In other words I want the game to go through a loop every X milliseconds regardless of how many frames/second it is displaying.
Obviously they will both be sharing a lot of variables, so I can't have a thread/timer passing one variable back and forth... I'm basically just looking for a way to have a timer in the background that every X milliseconds sends out a flag to execute the logic loop, regardless of where the graphics loop is.
I'm open to any suggestions. It seems like the best option is to have 2 threads, but I'm not sure what the best way to communicate between them is, without constantly synchronizing large amounts of data.
You can very well do multithreading by having your "world view" exchanged every tick. So here is how it works:
1. Your current world view is pointed to by a single smart pointer and is read-only, so no locking is necessary.
2. Your logic creates your (first) world view, publishes it and schedules the renderer.
3. Your renderer grabs a copy of the pointer to your world view and renders it (remember, read-only).
4. In the meantime, your logic creates a new, slightly different world view.
5. When it's done it exchanges the pointer to the current world view, publishing it as the current one.
6. Even if the renderer is still busy with the old world view, no locking is necessary.
7. Eventually the renderer finishes rendering the (old) world. It grabs the new world view and starts another run.
8. In the meantime, ... (goto step 4)
The only locking you need is for the time when you publish or grab the pointer to the world. As an alternative you can do atomic exchange but then you have to make sure you use smart pointers that can do that.
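A minimal sketch of that publish/grab step (World is a placeholder type; C++20's std::atomic<std::shared_ptr<T>> would be the lock-free alternative mentioned above):

    #include <memory>
    #include <mutex>

    struct World { /* positions, scores, ... treated as read-only once published */ };

    std::shared_ptr<const World> g_current;   // the published world view
    std::mutex g_publish_mutex;               // held only while swapping the pointer

    // Logic thread: build a fresh world, then publish it (step 5).
    void publish(std::shared_ptr<const World> next) {
        std::lock_guard<std::mutex> lock(g_publish_mutex);
        g_current = std::move(next);
    }

    // Render thread: grab whatever is current and render it (step 3).
    std::shared_ptr<const World> grab() {
        std::lock_guard<std::mutex> lock(g_publish_mutex);
        return g_current;   // a copy of the pointer; the world itself is never modified
    }

The renderer calls grab() once at the start of each frame and keeps using that pointer for the whole frame, even if the logic thread publishes a newer world in the meantime.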
Most toolkits have an event loop (built above some multiplexing syscall like poll(2), or the obsolete select). For example, GTK has g_application_run, which sits above gtk_main, which is built above the GLib main event loop (which in fact does a poll or something similar). Likewise, Qt has QApplication and its exec methods.
Very often, you can register timers within the event loop. For GTK, use GTimers, g_timeout_add etc. For Qt learn about its timers.
Very often, you can also register some idle or background processing: a function of yours which the event loop starts after other events and timeouts have been processed. Your idle function is expected to run quickly (usually it does a small step of some computation in a few milliseconds, to keep the GUI responsive). For GTK, use g_idle_add etc. IIRC, in Qt you can use a timer with a 0 delay.
So you could code even a (conceptually) single threaded application, using timeouts and idle processing.
Of course, you could use multi-threading: generally the main thread is running the event loop, and other threads can do other things. You have synchronization issues. On POSIX systems, a nice synchronization trick could be to use a pipe(7) to self: you set up a pipe before running the event loop, and your computation threads may write a few bytes on it, while the main event loop is "listening" on it (with GTK, using g_source_add_poll or async IO or GUnixInputStream etc.; with Qt, using QSocketNotifier etc.). Then, in the input handler running in the main loop for that pipe, you could access traditional global data with mutexes etc.
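Here's a toolkit-agnostic sketch of that pipe-to-self trick using raw POSIX calls; in a real GTK or Qt program you'd hand the read end to the toolkit (g_source_add_poll, QSocketNotifier, etc.) instead of calling poll yourself:

    #include <poll.h>
    #include <unistd.h>
    #include <thread>

    int main()
    {
        int fds[2];
        pipe(fds);   // fds[0]: read end (event loop), fds[1]: write end (workers)

        // Worker thread: do some computation, then poke the event loop.
        std::thread worker([w = fds[1]] {
            // ...long computation; results stored in shared data under a mutex...
            char byte = 1;
            write(w, &byte, 1);          // wakes up the main loop
        });

        // Stand-in for the toolkit's main loop: just multiplex on the pipe.
        pollfd pfd{fds[0], POLLIN, 0};
        poll(&pfd, 1, -1);               // a real loop would also watch the GUI's fds
        char byte;
        read(fds[0], &byte, 1);
        // ...now safely pick up the worker's results and update the GUI...

        worker.join();
        close(fds[0]);
        close(fds[1]);
        return 0;
    }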
Conceptually, read about continuations. It is a relevant notion.
You could have a Draw and an Update method attached to all your game components. That way you can arrange that, while your game is running, Update is called and Draw is skipped, or any combination of the two. It also has the benefit of keeping logic and graphics completely separate.
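A rough sketch of what that might look like (all names here are made up for illustration):

    #include <memory>
    #include <vector>

    struct GameComponent {
        virtual ~GameComponent() = default;
        virtual void update(double dt) = 0;   // logic only
        virtual void draw() const = 0;        // rendering only, no state changes
    };

    void run_frame(std::vector<std::unique_ptr<GameComponent>>& components,
                   double dt, bool render_this_frame)
    {
        for (auto& c : components) c->update(dt);        // logic always runs
        if (render_this_frame)
            for (const auto& c : components) c->draw();  // drawing can be skipped
    }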
Couldn't you just have a draw method for each object that needs to be drawn and make them globals? Then just run your rendering thread with a sleep delay in it. As long as your rendering thread doesn't write any information to the globals you should be fine. Look up SFML to see an example of it in action.
If you are running on a Unix system you could use usleep(); however, that is not available on Windows, so you might want to look here for alternatives.
So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how this would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things which don't require constant updates, i.e. if you had a game running you would perform game logic each frame?
The only things I can think it could be used for are programs like a paint program where there is only action required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU as the article indicated, you could mitigate that by adding a sleep in your loop when you detect that your game is running faster than the required frames per second.
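For example, a hypothetical frame cap along these lines (assuming a 60 FPS target):

    #include <SDL.h>

    const Uint32 kFrameBudgetMs = 1000 / 60;   // ~16 ms per frame

    // Sleep away whatever is left of the frame budget, giving the CPU back to the OS.
    void cap_frame_rate(Uint32 frame_start_ms)
    {
        Uint32 elapsed = SDL_GetTicks() - frame_start_ms;
        if (elapsed < kFrameBudgetMs)
            SDL_Delay(kFrameBudgetMs - elapsed);
    }

    // Usage inside the main loop:
    //   Uint32 start = SDL_GetTicks();
    //   ...poll events, update, render...
    //   cap_frame_rate(start);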
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important for, e.g., games where the player might want very small increments in movement. Quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method of treating keydown as "velocity = 1" and keyup as "velocity = 0" and then only updating position once per frame: if your tap happens to overlap with the frame render you get one frame-duration of movement, and if it does not you get no movement, where what you really want is an amount of movement smaller than the length of a frame, based on the timestamps at which the events occurred.
Unfortunately SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the PumpEvents call, and WaitEvent effectively polls at 10ms intervals, so even with WaitEvent running in a separate thread, the most precision you'll get is 10ms (you could maybe approximate smaller by saying if you get a keydown and keyup in the same poll cycle then it's ~5ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
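A rough sketch of what such a hand-rolled wait might look like (bearing in mind the threading caveat in the next paragraph, since SDL_PollEvent pumps events internally):

    #include <SDL.h>

    // Hypothetical replacement: same contract as SDL_WaitEventTimeout,
    // but checks the queue every 1 ms instead of SDL's coarser internal interval.
    int MyWaitEventTimeout(SDL_Event* event, Uint32 timeout_ms)
    {
        const Uint32 deadline = SDL_GetTicks() + timeout_ms;
        for (;;) {
            if (SDL_PollEvent(event)) return 1;        // got an event
            if (SDL_GetTicks() >= deadline) return 0;  // timed out
            SDL_Delay(1);
        }
    }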
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvents. The best you can do for sub-framerate input precision is, if you have time to burn between frames, you have the option of being precise during that time, but then you'll get weird render-duration windows each frame where your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent, so that the CPU is released to the operating system to handle other tasks, like processing user input. Spinning in a busy poll can manifest to your users as sluggish reaction to input, since it can introduce a delay between when they enter a command and when your application processes the event. By using SDL_WaitEvent instead, the OS can post events to your application more quickly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly less battery usage, since the OS has the opportunity to reduce overall CPU usage: your game isn't using the CPU 100% of the time, only when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue, from another thread (e.g. the game logic thread), to wake up the main rendering thread with that message. And then it goes through the loop to render a frame, swap and returns back to SDL_WaitEvent again. Where another of these user-defined messages can be waiting to be picked up, to tell it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events. In that the rendering thread is woken up by a "SDL_ANIMATEEVENT" and then one more frame of animation is done. But because the animation is not complete, the rendering thread itself posts a "SDL_ANIMATEEVENT" to its own queue, that'll trigger it to wake up again, when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. So that when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to do and what frame we're currently on.
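To be clear, "SDL_ANIMATEEVENT" isn't a built-in SDL constant; you register your own event type for it. A hypothetical sketch of posting and receiving such an event:

    #include <SDL.h>
    #include <cstdint>

    static Uint32 g_animate_event;   // stands in for "SDL_ANIMATEEVENT"

    // Call once after SDL_Init.
    void register_animate_event()
    {
        g_animate_event = SDL_RegisterEvents(1);
    }

    // Called from the game-logic thread; SDL_PushEvent is safe from any thread.
    void post_animate(int animation_id, int current_frame)
    {
        SDL_Event ev;
        SDL_zero(ev);
        ev.type = g_animate_event;
        ev.user.data1 = reinterpret_cast<void*>(static_cast<std::intptr_t>(animation_id));
        ev.user.data2 = reinterpret_cast<void*>(static_cast<std::intptr_t>(current_frame));
        SDL_PushEvent(&ev);
    }

    // In the rendering thread's loop:
    //   SDL_Event ev;
    //   while (SDL_WaitEvent(&ev)) {
    //       if (ev.type == g_animate_event) { /* draw one frame; re-post if not finished */ }
    //       else                            { /* normal mouse/keyboard handling */ }
    //   }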
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage to this and maybe it's not worth bothering with (though even games could benefit from going to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
You could actually initialise SDL and the window in the main thread and then create two more threads, one for updates (which just updates game state and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you can ensure that events are handled in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to successfully run the three threads mentioned above at the same time.
I had to use a mutex to make sure that textures/surfaces can also be transformed/changed in the update thread by pausing the render thread; the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works well for event-driven games, games that update continuously in real time, or both.
I am writing a windows DLL in C++ and I need some sort of event loop.
Several events are periodically checked and the DLL then triggers callbacks in the client application. The polling period might differ for different events.
I wonder what is the best way to code this. The timing precision must be good but performance is the main priority.
Is it better to have several (4-5) timers in one Windows timer queue with various periods or one single timer with the smallest period? Or should I prefer a completely different solution?
Thanks
What sort of performance is the main priority? Repeatable, accurate timing is err... 'highly difficult' on a general-purpose OS like Windows because of the numerous driver interrupts. There are various timer mechanisms that can generate very accurate timeouts but not reliably. How much jitter can you tolerate? In what sort of range are your intervals? Do the callbacks have to be in any particular thread context, or could anything fire the events?
Rgds,
Martin
To directly answer the question: you can set up a timer with SetTimer, which takes an nIDEvent and is usually registered against a window handle. That window will then receive WM_TIMER messages with the registered identifier as wParam. You should destroy the timer when you're done with it with KillTimer. WM_TIMER has a granularity of around 10-15 ms on Windows NT systems.
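As an illustration, here's a minimal sketch using the thread-timer variant (a null HWND plus a TIMERPROC callback), which avoids creating a window and may suit a DLL better; the window-based variant works the same way except WM_TIMER arrives in that window's window procedure:

    #include <windows.h>
    #include <iostream>

    // Hypothetical callback; in the DLL scenario this would forward to the client's callbacks.
    void CALLBACK on_timer(HWND, UINT, UINT_PTR id, DWORD /*tick_count*/)
    {
        std::cout << "timer " << id << " fired\n";
    }

    int main()
    {
        // One timer per polling period; with a null HWND, SetTimer returns a fresh timer id.
        UINT_PTR fast_timer = SetTimer(nullptr, 0, 100,  on_timer);  // ~100 ms
        UINT_PTR slow_timer = SetTimer(nullptr, 0, 1000, on_timer);  // ~1 s

        // A message loop is still needed on this thread for WM_TIMER to be delivered;
        // DispatchMessage is what ends up invoking on_timer. Nothing posts WM_QUIT in
        // this sketch, so a real DLL would run this on a worker thread with an exit signal.
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        KillTimer(nullptr, fast_timer);
        KillTimer(nullptr, slow_timer);
        return 0;
    }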
As said by the other posters, you would probably be better off with a truly event based system. Perhaps you could edit your post with the usecase and better answers could be suggested.
I have a Snake game in development (up at https://github.com/RobotGymnast/Gingerbread/tree/eventThreaded). Initially, everything (graphics, events, game logic updates, physics) was called from a "main" thread. Then I started multithreading (using Boost threads). It's been pretty straightforward, but I recently split the graphics display logic into a new thread, which allocated the screen object in its local stack space. Then I split my event-detection and event-handling logic into a new thread. Then my screen stopped appearing. Judging by my command-line output, everything still worked fine, just the screen stopped appearing. It turned out it was hanging on my SDL_SetVideoMode() call.
I fixed this by allocating my screen object in the "main" thread, and passing in a reference to the graphics thread. For some reason, allocating the screen object in a new thread from the event logic was creating problems.
Since this fix, the event-detection and event-handling no longer works. The event checks are still being made, e.g. SDL_PollEvent(), but they're not picking up any events at all (keyboard, mouse, etc.).
My suspicion is that SDL might do some behind-the-scenes thread syncing, but I've been using boost threads. Could this be a problem? SDL threads are rather restrictive, and I'd rather not switch.
Anybody had this issue before? Any recommendations?
I'm not sure about SDL, but on several windowing subsystems (I believe on both X and Win32), you cannot modify ANYTHING related to a graphics object or widget, except from the thread which initially created that graphics object/widget.
It doesn't look like (to my limited 10 second google search) SDL abstracts that bit from you -- you'd need to only modify graphics related objects from the thread that created them. To do otherwise is to invite strange behavior.
Graphics display logic should almost always be in the main thread, due to some technical considerations on various platforms.
Similarly, the event handling (at least at the low level) should be in the main thread, as events can be posted to specific threads rather than processes.
For the most part I would recommend not calling any SDL functions from anything other than the main thread apart from ones that don't operate on shared state.
I am using SDL for the view parts of my game project, and I want to handle key press events without interrupting the main thread. So I decided to run an infinite loop in another view thread to catch any events and inform the main thread. However, I am not sure that this is the best approach, since it may add load and decrease system performance. Is there any better way to do this kind of thing?
Thanks.
Don't bother with another thread. What's the point?
What does your main thread do? I imagine something like this:
1. Update Logic
2. Render
3. Goto 1
If you receive input after (or during) the update cycle then you have to wait till the next update cycle before you'll see the effects. The same is true during rendering. You might as well just check for input before the update cycle and do it all singlethreaded.
1. Input
2. Update Logic
3. Render
4. Goto 1
Multithreading gains nothing here and just increases complexity.
For some added reading, check out Christer Ericson's blog post about input latency (he's the director of technology for the team that makes God of War).
And I want to handle key press events without interrupting the main thread.
SDL is not inherently an interrupt or event driven framework. IO occurs by reading events off of the event queue by calling SDL_WaitEvent or SDL_PollEvent. This must occur in the "main" thread, the one that called SDL_SetVideoMode.
That's not to say you cannot use multiple threads, and there's good justification for doing so; for instance, it can simplify network communication if it doesn't have to rely on the SDL event loop. If you want the simulation to occur in a separate thread, then it can pass information back and forth through synchronized shared objects. In particular, you can always put events into the SDL event queue safely from any thread.