I have a Snake game in development (up at https://github.com/RobotGymnast/Gingerbread/tree/eventThreaded). Initially, everything (graphics, events, game logic updates, physics) was called from a "main" thread. Then I started multithreading (using boost threads). It's been pretty straightforward, but I recently split the graphics display logic into a new thread, which allocated the screen object in its local stack space. Then I split my event-detection and event-handling logic into a new thread, and my screen stopped appearing. Judging by my command-line output, everything else still worked fine; only the screen stopped appearing. It turned out it was hanging on my SDL_SetVideoMode() call.
I fixed this by allocating my screen object in the "main" thread, and passing in a reference to the graphics thread. For some reason, allocating the screen object in a new thread from the event logic was creating problems.
Since this fix, the event detection and event handling no longer work. The event checks are still being made, e.g. SDL_PollEvent(), but they're not picking up any events at all (keyboard, mouse, etc.).
My suspicion is that SDL might do some behind-the-scenes thread syncing, but I've been using boost threads. Could this be a problem? SDL threads are rather restrictive, and I'd rather not switch.
Anybody had this issue before? Any recommendations?
I'm not sure about SDL, but on several windowing subsystems (I believe on both X and Win32), you cannot modify ANYTHING related to a graphics object or widget, except from the thread which initially created that graphics object/widget.
It doesn't look like SDL abstracts that away for you (based on my limited ten-second Google search) -- you'd need to modify graphics-related objects only from the thread that created them. To do otherwise is to invite strange behavior.
Graphics display logic should almost always be in the main thread, due to some technical considerations on various platforms.
Similarly, the event handling (at least at the low level) should be in the main thread, as events can be posted to specific threads rather than processes.
For the most part I would recommend not calling any SDL functions from anything other than the main thread apart from ones that don't operate on shared state.
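A minimal sketch of that idea, assuming SDL 1.2 (to match SDL_SetVideoMode): the video setup and the event pump both stay on the main thread, while a boost thread hands work back by pushing a user event. The event code and the redraw placeholder are just illustrative.

// Sketch: video + event pump on the main thread; worker threads only push user events.
#include <SDL/SDL.h>
#include <boost/thread.hpp>

static const int REQUEST_REDRAW = 1;          // illustrative user-event code

void gameLogicThread()
{
    // ... update game state here ...
    SDL_Event ev;
    ev.type = SDL_USEREVENT;
    ev.user.code = REQUEST_REDRAW;
    ev.user.data1 = 0;
    ev.user.data2 = 0;
    SDL_PushEvent(&ev);   // the usual way to hand work to the main loop from another
                          // thread (check your SDL version's notes on event-queue thread safety)
}

int main(int argc, char** argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE); // main thread only

    boost::thread logic(gameLogicThread);

    bool running = true;
    while (running)
    {
        SDL_Event event;
        while (SDL_PollEvent(&event))         // event pump also on the main thread
        {
            if (event.type == SDL_QUIT)
                running = false;
            else if (event.type == SDL_USEREVENT && event.user.code == REQUEST_REDRAW)
            {
                // redraw 'screen' here
            }
        }
        SDL_Flip(screen);
    }

    logic.join();
    SDL_Quit();
    return 0;
}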
Related
I'm working on a small framework for cross-platform applications (sure, for academic purposes) and it needs to handle non-UI and UI programs gracefully. I'm currently investigating whether having an OS message/event loop per window is possible. At this point I don't really care if it's a good idea, just whether it's possible at all.
Now, on Windows, although I'd still use WinMain, a non-UI (windowless) program can still run a message loop. In this case, the main thread handles all messages that have their HWND parameter set to nullptr (the messages not sent to a specific window), and each window lives in its own thread, with its own wndproc called from its own message loop.
I believe a similar setup should be possible in X11, using an xcb_connection_t per thread (and thus per window). Assuming this is correct, what would be a good way of handling the "main" non-UI thread? If it does nothing, the program will exit immediately, regardless of how many other threads are running their X event loops. I'd like to keep this part X-free. I've read about pause, sigsuspend, etc., but I'm not sure these are the best fit for this fairly simple purpose.
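For the X-free waiting part, one option is to have the main thread do nothing but join the per-window threads, so the process only exits once every window loop has returned. A minimal sketch, with runWindowLoop standing in for a per-connection message loop:

// Sketch: the main thread stays X-free and simply waits for the window threads.
#include <thread>
#include <vector>

void runWindowLoop(int windowId)   // hypothetical per-window event loop
{
    // open connection, create window, pump events until the window closes...
    (void)windowId;
}

int main()
{
    std::vector<std::thread> windowThreads;
    for (int i = 0; i < 2; ++i)
        windowThreads.emplace_back(runWindowLoop, i);

    // The process exits only when every window loop has returned; no pause()
    // or sigsuspend() needed, and nothing here touches X.
    for (std::thread& t : windowThreads)
        t.join();
    return 0;
}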
I can't seem to find a good answer to this:
I'm making a game, and I want the logic loop to be separate from the graphics loop. In other words I want the game to go through a loop every X milliseconds regardless of how many frames/second it is displaying.
Obviously they will both be sharing a lot of variables, so I can't have a thread/timer passing one variable back and forth... I'm basically just looking for a way to have a timer in the background that every X milliseconds sends out a flag to execute the logic loop, regardless of where the graphics loop is.
I'm open to any suggestions. It seems like the best option is to have 2 threads, but I'm not sure what the best way to communicate between them is, without constantly synchronizing large amounts of data.
You can very well do multithreading by having your "world view" exchanged every tick. So here is how it works:
1. Your current world view is pointed to by a single smart pointer and is read-only, so no locking is necessary.
2. Your logic creates your (first) world view, publishes it and schedules the renderer.
3. Your renderer grabs a copy of the pointer to your world view and renders it (remember: read-only).
4. In the meantime, your logic creates a new, slightly different world view.
5. When it's done, it exchanges the pointer to the current world view, publishing it as the current one.
6. Even if the renderer is still busy with the old world view, no locking is necessary.
7. Eventually the renderer finishes rendering the (old) world. It grabs the new world view and starts another run.
8. In the meantime, ... (goto step 4)
The only locking you need is for the moment when you publish or grab the pointer to the world view. As an alternative you can do an atomic exchange, but then you have to make sure you use smart pointers that support that.
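Here is a minimal sketch of that publish/grab pattern, with the mutex held only while the pointer is copied or swapped. WorldView and the loop bodies are placeholders; if your standard library supports it, std::atomic_load/std::atomic_store on the shared_ptr could replace the mutex.

#include <memory>
#include <mutex>
#include <thread>

struct WorldView { /* immutable snapshot of the game state */ };

std::shared_ptr<const WorldView> g_current;   // the published, read-only view
std::mutex g_swapMutex;                       // held only while copying/swapping the pointer

void publish(std::shared_ptr<const WorldView> next)
{
    std::lock_guard<std::mutex> lock(g_swapMutex);
    g_current = std::move(next);              // step 5: swap in the new view
}

std::shared_ptr<const WorldView> grab()
{
    std::lock_guard<std::mutex> lock(g_swapMutex);
    return g_current;                         // the renderer keeps the old view alive via the shared_ptr
}

void logicLoop()
{
    for (;;)
    {
        auto next = std::make_shared<WorldView>(/* build next tick's state */);
        publish(next);
        // sleep until the next tick...
    }
}

void renderLoop()
{
    for (;;)
    {
        std::shared_ptr<const WorldView> view = grab();
        // render *view; even if a newer view is published meanwhile, this one
        // stays valid until the shared_ptr goes out of scope
    }
}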
Most toolkits have an event loop (built on top of some multiplexing syscall like poll(2), or the obsolete select). For example, GTK has g_application_run (which sits above gtk_main), which is built on the GLib main event loop (which in fact does a poll or something similar). Likewise, Qt has QApplication and its exec method.
Very often, you can register timers within the event loop. For GTK, use g_timeout_add etc.; for Qt, learn about QTimer.
Very often, you can also register some idle or background processing: a function of yours which the event loop calls after other events and timeouts have been processed. Your idle function is expected to run quickly (usually it does a small step of some computation in a few milliseconds, to keep the GUI responsive). For GTK, use g_idle_add etc.; IIRC, in Qt you can use a timer with a 0 delay.
So you could code even a (conceptually) single threaded application, using timeouts and idle processing.
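As a rough Qt-flavoured sketch of that single-threaded style (Qt 5 connect syntax; with Qt 4 you would use the SIGNAL/SLOT macros instead), a regular timer drives the logic tick and a 0 ms timer acts as the idle hook; the tick/idle bodies are placeholders:

#include <QApplication>
#include <QObject>
#include <QTimer>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QTimer logicTimer;                       // fires every 16 ms, like a game tick
    QObject::connect(&logicTimer, &QTimer::timeout, []() { /* tickLogic(); */ });
    logicTimer.start(16);

    QTimer idleTimer;                        // 0 ms timer == run whenever the loop is otherwise idle
    QObject::connect(&idleTimer, &QTimer::timeout, []() { /* doSmallWorkStep(); */ });
    idleTimer.start(0);

    return app.exec();                       // the event loop dispatches both
}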
Of course, you could use multi-threading: generally the main thread runs the event loop, and other threads do other things; you then have synchronization issues. On POSIX systems, a nice synchronization trick is to use a pipe(7) to self: you set up a pipe before running the event loop, and your computation threads write a few bytes to it, while the main event loop "listens" on it (with GTK, using g_source_add_poll or async I/O or GUnixInputStream etc.; with Qt, using QSocketNotifier etc.). Then, in the input handler for that pipe, which runs in the main loop, you can access traditional global data protected by mutexes etc.
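Here is a bare-bones sketch of that pipe-to-self trick without any toolkit: the main loop waits on the read end with poll(2), and a worker thread writes a byte when it has something to report. With GTK or Qt you would hand the read end to g_source_add_poll or QSocketNotifier instead of calling poll yourself.

#include <poll.h>
#include <unistd.h>
#include <cstdio>
#include <thread>

int pipefd[2];                         // [0] = read end (event loop), [1] = write end (workers)

void computationThread()
{
    // ... do some work, update shared data under a mutex ...
    char token = 'x';
    ssize_t n = write(pipefd[1], &token, 1);   // wake up the event loop
    (void)n;
}

int main()
{
    pipe(pipefd);
    std::thread worker(computationThread);

    struct pollfd pfd = { pipefd[0], POLLIN, 0 };
    while (poll(&pfd, 1, -1) > 0)      // a real loop would also watch the display/GUI fd
    {
        char token;
        ssize_t n = read(pipefd[0], &token, 1);
        (void)n;
        std::printf("worker signalled; safe to touch GUI-side state here\n");
        break;                          // one notification is enough for this sketch
    }

    worker.join();
    return 0;
}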
Conceptually, read about continuations. It is a relevant notion.
You could have a Draw and an Update method attached to all your game components. That way, while the game is running, you can call Update on every tick and skip Draw, or use any combination of the two. It also has the benefit of keeping logic and graphics completely separate.
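Something like this minimal, illustrative interface, so the logic loop can call update every tick while draw is only called when a frame is actually rendered:

#include <vector>

// Each component knows how to advance its state and how to draw itself,
// so the two can be called at different rates.
struct GameComponent
{
    virtual ~GameComponent() {}
    virtual void update(double dtSeconds) = 0;  // game logic, every tick
    virtual void draw() const = 0;              // rendering, only when a frame is drawn
};

void tick(std::vector<GameComponent*>& components, double dt, bool renderThisFrame)
{
    for (GameComponent* c : components)
        c->update(dt);
    if (renderThisFrame)
        for (const GameComponent* c : components)
            c->draw();
}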
Couldn't you just have a draw method for each object that needs to be drawn, and make those objects globals? Then just run your rendering thread with a sleep delay in it. As long as your rendering thread doesn't write any information to the globals, you should be fine. Look up SFML to see an example of this in action.
If you are running on a Unix system you could use usleep(); however, that is not available on Windows, so you might want to look here for alternatives.
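If C++11 is available, std::this_thread::sleep_for is a portable alternative to usleep for that sleep delay; a tiny sketch:

#include <chrono>
#include <thread>

void renderLoop()
{
    for (;;)
    {
        // drawEverything();   // read-only access to the shared globals
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // roughly 60 FPS; works on Windows and Unix
    }
}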
I've been looking into OpenGL programming as a C++ programmer, and have seen two primary ways of dealing with event-driven programming: message polling or callback functions.
I see that the native Win32 API uses a callback function, which is invoked by the DispatchMessage function.
SDL (based on the tutorials) also uses some sort of callback or callback-like programming.
GLFW also uses callbacks.
SFML lets the programmer poll for individual messages anywhere in the code, usually in a loop, forming the message loop.
The X Window system, based on what I have seen, also uses message polling.
Clearly, since both approaches are used in prominent environments, each must have its advantages. I was hoping someone could tell me the advantages and disadvantages of each. I am planning to write some programs that will depend heavily on event-driven programming, and I would like to make the best decision on which path to take.
This isn't going to be complete, but here's a few things that come to mind...
I've only ever used GL for 3D, and haven't really done much in the way of GUIs. Polling for events is pretty common -- more precisely, polling in a main rendering loop which processes all events in the queue and then moves on to rendering. This is because you re-render everything from scratch each frame, after collecting all events and using them to update the scene's 3D state. Since a screen can only display images at a limited frame rate, it's also common to sleep while polling, as any state updates won't be shown until later even if their events arrive sooner.
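The usual shape of that loop, sketched here with SDL-style polling (the handle/update/render calls are placeholders):

#include <SDL/SDL.h>

void runMainLoop()
{
    bool running = true;
    while (running)
    {
        // 1. Drain the whole event queue first...
        SDL_Event event;
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
                running = false;
            // else handleEvent(event);     // update input/scene state only
        }

        // 2. ...then update and re-render the scene once per frame.
        // updateScene();
        // renderScene();
        SDL_Delay(1);                       // optional: don't spin flat out
    }
}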
If you were to process events exactly as they happen, such as part-way through drawing, then you would have race conditions. Dealing with that may be an unnecessary hassle.
If you have anything animating then you already have a loop and polling is a trivial cost in contrast.
If your events are very infrequent, then you don't need to re-draw often, so keeping a thread active and polling is a little inefficient.
It can be quite bad if events pile up and you re-draw for each one; you might find you're re-drawing more often than you would by using a loop to process all pending events and render once.
I think the main issue with polling is inactive windows that aren't in focus. Let's say you minimize your GL app. You know it won't receive any events, so polling is useless. So is drawing, for that matter.
Another issue is response latency. This is quite important for something like capturing mouse movement in a game. As long as you poll for events in the right order (input→update→display) this is generally OK. However, vsync can mess with the timing by delaying frames from being displayed.
I'm currently creating lightweight GUI libraries for Linux based on OpenGL and evdev.
The first one, developed in C, led me to implement a message architecture, inspired by the use of pipes for multithreaded communication.
For the second one, in C++, I only use callbacks, but the evdev stack in Linux is message driven.
My conclusion is that for peripherals (e.g. a mouse) which can trigger interrupts faster than the program can respond, you need a FIFO layer (usually a pipe) to make the communication between the two contexts asynchronous. In other words: messages are just asynchronous, buffered callbacks in a multithreaded environment.
You may also use callback FIFOs to buffer your events. But organizing variables among threads is not always easy (semaphores, locking, etc.). Using messages as the only inter-thread synchronization mechanism helps clear that up.
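A minimal in-process version of that FIFO layer, using a mutex and condition variable rather than a pipe; the Event struct is just a placeholder message type:

#include <condition_variable>
#include <mutex>
#include <queue>

struct Event { int type; int x, y; };        // placeholder message

class EventQueue
{
public:
    void push(const Event& e)                // called from the producer (input) context
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(e);
        }
        cond_.notify_one();
    }

    Event pop()                              // called from the consumer thread; blocks until a message arrives
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        Event e = queue_.front();
        queue_.pop();
        return e;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<Event> queue_;
};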
Sorry I don't know how to phrase this in the title, maybe someone could help me.
I am starting to write a Qt application. Let's say the application first shows N points on the screen. We then have a function, called movePoints; when it is called, the points are moved according to some algorithms.
Now, when N is small, everything looks very smooth and works without any problem. But if N is very large, the whole GUI bogs down while movePoints is running, so whenever I touch the application window it becomes unresponsive. But I know lots of programs seem to be able to run something like movePoints in the background (with a progress bar in the status bar or something) without slowing down the main application. How can I achieve this effect?
To keep your application responsive to user interactions, you should use the processEvents function. (http://qt-project.org/doc/qt-4.8/qcoreapplication.html#processEvents)
If you'd rather have the operation occur in the background, you can use the QtConcurrent module and its asynchronous run function (http://qt-project.org/doc/qt-4.8/qtconcurrentrun.html).
Use a QTimer to interleave the work with the event loop, or a QThread to move the calculation out of the main loop. See: http://qt-project.org/doc/qt-4.8/threads.html
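A rough sketch of the QtConcurrent route under Qt 4.8, assuming movePoints does no GUI work itself and that the receiving object has some slot (here the hypothetical onPointsMoved()) to repaint when the computation finishes:

#include <QtConcurrentRun>
#include <QFutureWatcher>
#include <QObject>

void movePoints();   // the question's long-running function (assumed to touch no widgets)

// Call this from the GUI thread instead of calling movePoints() directly.
void startMovePoints(QObject* receiver)
{
    QFutureWatcher<void>* watcher = new QFutureWatcher<void>(receiver);

    // When the background computation finishes, the (hypothetical) onPointsMoved()
    // slot on 'receiver' runs back on the GUI thread and can trigger a repaint.
    QObject::connect(watcher, SIGNAL(finished()), receiver, SLOT(onPointsMoved()));
    QObject::connect(watcher, SIGNAL(finished()), watcher, SLOT(deleteLater()));

    watcher->setFuture(QtConcurrent::run(movePoints));
}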
You can use a separate thread to perform calculations in the background without blocking the Qt event loop. See QThread and QtConcurrent. It's common practice in processing-intensive Qt applications to have the main thread handle the GUI while "back-end" calculations are done in "worker" threads.
If rendering the data (rather than just calculating the next state) is also an intensive operation, you can also use your worker thread(s) to create a QImage, QGraphicsScene, or similar type of object, and send it pre-built to the UI thread.
If you're limited to a single thread (e.g. your platform doesn't really support threads), then you can take your algorithm and intersperse calls to QCoreApplication::processEvents, which will keep the GUI responsive while the work runs. I find that using actual threads tends to be the simpler and more maintainable approach, though.
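A sketch of that single-threaded fallback, with the loop body standing in for the question's movePoints algorithm:

#include <QCoreApplication>

void movePointsResponsive(int pointCount)
{
    for (int i = 0; i < pointCount; ++i)
    {
        // ... move point i according to the algorithm ...

        if (i % 1000 == 0)                    // every so often, let pending paint/input events run
            QCoreApplication::processEvents();
    }
}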
I have a game engine which uses DirectX 9 for rendering. I would like to be able to load sprite graphics while the main update-and-render loop executes. Currently the engine has a single update-and-render loop, so any loading done inside it will pause the loop while the graphics load.
I was looking at POSIX threads to do this. I have created the thread function and included mutex locks, but the code crashes when it is run.
Here is the thread function:
void GameApp::InternalThreadEntry()
{
    pthread_mutex_lock(&mutex);

    for (int i = 0; i < MAX_NUMBER; i++)
    {
        test_loader_sprites[i].loadImage(window1, "Test_Image.tga");
    }

    has_finishd_loading = true;
    pthread_mutex_unlock(&mutex);
}
The code crashes in my engine's render function. I'm sure this is because the DirectX device, which is a member of the window1 instance, is accessed for loading by the thread while the main application accesses it for rendering.
Could you shed a little light on where I'm going wrong? I'm new to using threads.
All the best,
Martin
Threading is often a wolf in sheep's clothing: it looks like it will solve so many problems, but in reality it can cause more problems than it solves. Fortunately, you have what many would consider a valid scenario for an additional thread.
Having written a similar loader myself a while back, I'd say your problem does indeed look to be that you are accessing window1 while it's in use elsewhere. Basically: does your main thread do anything with window1 while the loader thread is running? If so, that is likely your problem.
The solution I found was to properly separate data storage from the renderer. You can load the data into a store from a thread; once that has finished, hand the contents of the store over to the main thread (or move it directly on the main thread later). This removes the thread's dependency on your window1 object.
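A sketch of that separation, using std::thread for brevity; readFileBytes is ordinary file I/O, and the texture-creation step is left as a placeholder for whatever your engine uses (e.g. your existing loadImage path or D3DX's ...FromFileInMemory helpers), since only the main thread may touch the device:

#include <atomic>
#include <fstream>
#include <iterator>
#include <string>
#include <thread>
#include <vector>

std::vector<char> g_pendingImage;                 // raw file bytes produced by the loader thread
std::atomic<bool> g_hasFinishedLoading(false);

// Plain disk I/O, no Direct3D calls at all.
std::vector<char> readFileBytes(const std::string& path)
{
    std::ifstream in(path.c_str(), std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}

void loaderThread()
{
    g_pendingImage = readFileBytes("Test_Image.tga");
    g_hasFinishedLoading = true;                  // publish the data to the main thread
}

void mainLoopIteration()
{
    if (g_hasFinishedLoading.exchange(false))
    {
        // Create the Direct3D texture(s) from g_pendingImage here, on the main
        // thread -- the device is only ever touched from this thread.
    }
    // updateGame();
    // render();
}

// usage: std::thread loader(loaderThread); call mainLoopIteration() each frame; loader.join() at shutdown.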
It would be a good idea to do some further reading on threads before embarking on the larger-scale scenario it sounds like you are working on. Working with threads is one of the more fiddly and complicated areas of software design, and in my experience you can never learn about them too thoroughly.