Advice on software design of a project - C++

I plan to hook up a Raspberry Pi to a 64x64 LED matrix and write a sort of boot-loader program in C++. I would like the software to be plug-and-play with custom games I make. That is, I put all the compiled code for the games into a directory, and the boot loader will recognize them and, using fork() and execvp() calls, run the selected games. I want there to be different states in the loader, such as start_state, game_selection_state, preview_state, and idle_state, to name a few. Each state would do its own thing and transition from state to state based on input from an input device or idle time.
Now, I do not know the best architecture for setting this up. I am not even sure that a state machine is the best way to handle this. But what I have come up with so far is two threads.
Thread 1:
Handles all the functionality and sanity of the states, including starting and stopping each state and managing its data. It would also provide function calls to transition from state to state.
Thread 2:
Parses input from the input device (game controller) and makes sure a transition is valid. Then, using thread 1's function calls, it changes the state accordingly. I planned on using a message queue to send any input that is not reserved for transitioning to the current state. (E.g., A and B signify transitions, but a press of Y would be sent to the current state.) This way the state can do whatever it wants with these button presses.
Where this gets a little fuzzy is that when the user picks a game, a fork() call will be made inside a thread, and I have read this can cause issues. If I did it this way, I would need a way to terminate or halt thread 2 until the execvp() call ends.
What I want to know is: is this a valid, or even a good, way of implementing something like this?
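A minimal sketch of the launch step under these assumptions (the State enum and launch_game() are illustrative names only; the loader blocks in waitpid() until the game exits, which only works cleanly if thread 2 is paused first):

    #include <string>
    #include <sys/wait.h>
    #include <unistd.h>

    enum class State { Start, GameSelection, Preview, Idle, RunningGame };

    void launch_game(const std::string& path)
    {
        pid_t pid = fork();
        if (pid == 0) {                              // child: become the game
            char* argv[] = { const_cast<char*>(path.c_str()), nullptr };
            execvp(argv[0], argv);
            _exit(1);                                // reached only if execvp fails
        }
        int status = 0;
        waitpid(pid, &status, 0);                    // loader waits for the game to exit
    }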

Progress Bar with Gtkmm

Hello, I am looking for a signal in gtkmm. Basically I am running some simulations, and what I want is something like this:
Assume I run 5 simulations:
progressBar.set_fraction(0);
simulation 1
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);  // 1.0/5, not 1/5: integer division would yield 0
simulation 2
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
simulation 3
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
simulation 4
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
simulation 5
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
But I don't know which signal I have to use and how to translate this into code.
Thank you a lot for your help !!!
The pseudo code which you presented in your question should actually work - no signal is necessary. However, you could introduce a signal into your simulation to update the progress bar. IMHO this will not solve your problem, and I will try to explain why, and what to do to solve it:
You provided a little too little context, so I will introduce some assumptions: you have a main window with a button, toolbar item, or menu item (or even all of them) which starts the simulation.
Let's imagine you set a breakpoint at Gtk::ProgressBar::set_fraction().
Once the debugger stops at this breakpoint, you will find the following calls on the stack trace (probably with many other calls in between):
Gtk::Main::run()
the signal handler of the widget or action which started the simulation
the function which runs the five simulations
and last the call of Gtk::ProgressBar::set_fraction().
If you could inspect the internals of Gtk::ProgressBar you would notice that everything in Gtk::ProgressBar::set_fraction() is done properly. So what's wrong?
When you call Gtk::ProgressBar::set_fraction() it probably generates an expose event (i.e. adds an event to the event queue inside Gtk::Main with a request for its own refresh). The problem is that you probably do not process the request until all five runs of the simulation are done. (Remember that Gtk::Main::run(), which is responsible for this, is the uppermost/outermost call of my imaginary stack trace.) Thus, the refresh does not happen until the simulation is over - that's too late. (By the way, the authors of GTK+ state somewhere in the manual that events are cleverly optimized; i.e. there might finally be only one expose event for the Gtk::ProgressBar in the event queue, but this does not make your situation better.)
Thus, after you call Gtk::ProgressBar::set_fraction() you must somehow flush the event queue before making further progress with your simulation.
This sounds like leaving the simulation, leaving the calling widget's signal handler, returning to Gtk::Main::run() for further event processing, and finally coming back for the next simulation step - a terrible idea. But we did it much more simply. For this, we use essentially the following code (in gtkmm 2.4):
while (Gtk::Main::events_pending()) Gtk::Main::iteration(false);
(This should hopefully be the same in the gtkmm version you use but if in doubt consult the manual.)
It should be done immediately after updating the progress bar fraction and before the simulation is continued.
This recursively enters (parts of) the main loop and processes all pending events in the event queue of Gtk::Main, and thus the progress bar is exposed before the simulation continues. You may be worried about "recursively entering the main loop", but I read somewhere in the GTK+ manual that it is allowed (and a reasonable way to solve problems like this), along with what to take care about (i.e. limit the number of recursions and guarantee a proper "roll-back").
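Putting it together, a sketch of the simulation handler (gtkmm 2.4; on_start_simulation() and run_one_simulation() are placeholder names, not part of the original question):

    void on_start_simulation()                       // hypothetical signal handler
    {
        const int n_runs = 5;
        progressBar.set_fraction(0);
        for (int i = 0; i < n_runs; ++i) {
            run_one_simulation();                    // placeholder for one long run
            progressBar.set_fraction((i + 1) / double(n_runs));
            // Flush the event queue so the expose request is processed now:
            while (Gtk::Main::events_pending())
                Gtk::Main::iteration(false);
        }
    }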
What in your case is the simulation, we call in general a long-running function. Because such long-running functions are algorithms (in libraries for anything) which should not be polluted with any GUI stuff, we built some administrative infrastructure around this basic concept, including
a progress "proxy" object with an update(double) method and a signal slot
a customized progress dialog which can connect a signal handler to such a progress object (i.e. its signal slot).
The long-running function gets a progress object as an argument and is responsible for calling the Progress::update() method at appropriate intervals with an appropriate progress factor. (We simply use values in the range [0, 1].)
One issue is the interval at which the progress update is called. If it is called too often, the GUI will slow down your long-running function significantly. The opposite case (calling it not often enough) results in a less responsive GUI. Thus, we decided in favor of frequent progress updates. To lower the GUI overhead, we remember the time of the last update in our progress dialog and skip further refreshes until a certain duration since the last refresh has elapsed. Thus, the long-running function still spends some extra effort on progress updates, but it is not noticeable anymore. (A good refresh interval is IMHO 0.1 s - the perception threshold of humans - but you may choose 0.05 s if in doubt.)
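A minimal sketch of such a progress proxy; the class name, the sigc++ signal, and the Glib::Timer throttle are assumptions, only the concept is from the answer:

    #include <sigc++/sigc++.h>
    #include <glibmm/timer.h>

    class Progress                                   // hypothetical proxy object
    {
    public:
        sigc::signal<void, double> signal_update;    // the progress dialog connects here

        void update(double fraction)                 // called by the long-running function
        {
            if (timer_.elapsed() < 0.1)              // skip refreshes closer than 0.1 s
                return;
            timer_.reset();
            signal_update.emit(fraction);            // dialog redraws and flushes events
        }

    private:
        Glib::Timer timer_;                          // starts running on construction
    };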
Flushing all pending events results in processing of mouse events (and other GTK+ signals) also. This allows another useful feature: aborting the long running function.
When the "Cancel" button of our progress dialog is pressed it sets an internal flag. If the progress is updated next time it checks the flag. If the flag became true it throws a special exception. The throw aborts the caller of the progress update (the long running function) immediately. This exception must be catched in the signal handler of the button (or whatever called the long running function). Otherwise, it would "fall through" to the event dispatcher in Gtk::Main where it is catched definitely which would abort your application. (I saw it often enough whenever I forgot to catch.) On the other hand: catching the special exception tells clearly that the long running function has been aborted (in opposition to ended by regulary return). This may or may not be something which can be stated on GUI also.
Finally, the above solution can cause another issue: it makes it possible to start the simulation (via the GUI) while a simulation is already running. This is possible because button presses that start the simulation can be processed during a progress update. To prevent this, there is actually a simple solution: set a flag in the GUI at the start of the simulation, keep it until the simulation has finished, and prevent further starts while the flag is set. Another option is to make the widget/action insensitive while the simulation is running. This topic becomes more complicated if you have multiple distinct long-running functions in your application which may or may not exclude each other - it leads to something like an exclusion matrix. Well, we solved it pragmatically... (but without the matrix).
And last but not least, I want to mention that we use a similar concept for the output of log views (e.g. visual logging of infos, warnings, and errors while anything long-running is in progress). IMHO it is always good to provide some visual action for end users. Otherwise, they might get bored and use the telephone to complain about the (too) slow software, which actually steals the time you need to make it faster (a vicious cycle you have to break...).

C++ - Execute function every X milliseconds

I can't seem to find a good answer to this:
I'm making a game, and I want the logic loop to be separate from the graphics loop. In other words I want the game to go through a loop every X milliseconds regardless of how many frames/second it is displaying.
Obviously they will both be sharing a lot of variables, so I can't have a thread/timer passing one variable back and forth... I'm basically just looking for a way to have a timer in the background that every X milliseconds sends out a flag to execute the logic loop, regardless of where the graphics loop is.
I'm open to any suggestions. It seems like the best option is to have 2 threads, but I'm not sure what the best way to communicate between them is, without constantly synchronizing large amounts of data.
You can very well do multithreading by having your "world view" exchanged every tick. So here is how it works:
Your current world view is pointed to by a single smart pointer and is read only, so no locking is necessary.
Your logic creates your (first) world view, publishes it and schedules the renderer.
Your renderer grabs a copy of the pointer to your world view and renders it (remember, read-only)
In the meantime, your logic creates a new, slightly different world view.
When it's done it exchanges the pointer to the current world view, publishing it as the current one.
Even if the renderer is still busy with the old world view there is no locking necessary.
Eventually the renderer finishes rendering the (old) world. It grabs the new world view and starts another run.
In the meantime, ... (goto step 4)
The only locking you need is for the time when you publish or grab the pointer to the world. As an alternative you can do atomic exchange but then you have to make sure you use smart pointers that can do that.
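A sketch of the scheme with a mutex-guarded shared_ptr (C++11; World, publish(), and grab() are illustrative names; std::atomic_load/std::atomic_store on the shared_ptr would be the lock-free alternative mentioned above):

    #include <memory>
    #include <mutex>

    struct World { /* read-only snapshot of the game state (hypothetical) */ };

    std::shared_ptr<const World> g_world;            // the published world view
    std::mutex g_world_mutex;                        // guards only the pointer itself

    void publish(std::shared_ptr<const World> next)  // logic thread, once per tick
    {
        std::lock_guard<std::mutex> lock(g_world_mutex);
        g_world = std::move(next);
    }

    std::shared_ptr<const World> grab()              // renderer; no locking after this
    {
        std::lock_guard<std::mutex> lock(g_world_mutex);
        return g_world;
    }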
Most toolkits have an event loop (built above some multiplexing syscall like poll(2), or the obsolete select, ...). For example, GTK has g_application_run (which sits above gtk_main), which is built above the GLib main event loop (which in fact does a poll or something similar). Likewise, Qt has QApplication and its exec methods.
Very often, you can register timers within the event loop. For GTK, use GTimers, g_timeout_add etc. For Qt learn about its timers.
Very often, you can also register some idle or background processing: one of your functions is started by the event loop after other events and timeouts have been processed. Your idle function is expected to run quickly (usually it does a small step of some computation in a few milliseconds, to keep the GUI responsive). For GTK, use g_idle_add etc. IIRC, in Qt you can use a timer with a 0 delay.
So you could even code a (conceptually) single-threaded application, using timeouts and idle processing.
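For instance, with GTK a fixed-rate logic tick might be registered like this (game_tick() is a placeholder; the 20 ms interval is an arbitrary choice):

    #include <gtk/gtk.h>

    static gboolean game_tick(gpointer data)
    {
        // ... advance the game logic by one step ...
        return TRUE;                                 // TRUE keeps the timeout installed
    }

    int main(int argc, char** argv)
    {
        gtk_init(&argc, &argv);
        g_timeout_add(20, game_tick, NULL);          // call game_tick every 20 ms
        gtk_main();
        return 0;
    }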
Of course, you could use multi-threading: generally the main thread runs the event loop, and other threads do other things. You then have synchronization issues. On POSIX systems, a nice synchronization trick is to use a pipe(7) to self: you set up a pipe before running the event loop, and your computation threads write a few bytes to it while the main event loop is "listening" on it (with GTK, using g_source_add_poll or async IO or GUnixInputStream etc., with Qt, using QSocketNotifier etc.). Then, in the input handler for that pipe running in the main loop, you can access traditional global data with mutexes etc.
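A sketch of the pipe-to-self trick with GLib (the function names here are placeholders and error handling is omitted):

    #include <glib.h>
    #include <unistd.h>

    static int pipe_fds[2];                          // [0] read end, [1] write end

    static gboolean on_pipe_readable(GIOChannel* src, GIOCondition cond, gpointer data)
    {
        char buf[64];
        read(pipe_fds[0], buf, sizeof buf);          // drain the wake-up bytes
        // ... lock a mutex, read the shared data, update the GUI ...
        return TRUE;                                 // keep watching the pipe
    }

    void setup_pipe_to_self()                        // call before the event loop runs
    {
        pipe(pipe_fds);
        g_io_add_watch(g_io_channel_unix_new(pipe_fds[0]),
                       G_IO_IN, on_pipe_readable, NULL);
    }

    // From a computation thread:  write(pipe_fds[1], "x", 1);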
Conceptually, read about continuations. It is a relevant notion.
You could attach a Draw and an Update method to all your game components. That way you can arrange that, while your game is running, Update is called and Draw is ignored, or any combination of the two. It also has the benefit of keeping logic and graphics completely separate.
Couldn't you just have a draw method for each object that needs to be drawn and make them globals? Then just run your rendering thread with a sleep delay in it. As long as your rendering thread doesn't write any information to the globals, you should be fine. Look up SFML to see an example of it in action.
If you are running on a Unix system you could use usleep(); however, that is not available on Windows, so you might want to look for alternatives.
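Since C++11, std::this_thread::sleep_for is a portable alternative to usleep(). A minimal sketch, where running and draw_all() are placeholders:

    #include <chrono>
    #include <thread>

    void render_loop()
    {
        while (running) {                            // placeholder stop flag
            draw_all();                              // placeholder global draw calls
            std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 FPS
        }
    }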

SDL_PollEvent vs SDL_WaitEvent

So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how this would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things which don't require constant updates, i.e. if you had a game running you would perform game logic each frame?
The only things I can think it could be used for are programs like a paint program where there is only action required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU as the article indicated, you could mitigate that by adding a sleep in your loop when you detect that your game is running faster than the required frames per second.
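For example, a sketch of a capped SDL_PollEvent loop (SDL2; TARGET_MS, update(), and render() are placeholders):

    #include <SDL.h>

    void game_loop()
    {
        const Uint32 TARGET_MS = 16;                 // ~60 FPS budget per frame
        bool running = true;
        while (running) {
            Uint32 start = SDL_GetTicks();
            SDL_Event e;
            while (SDL_PollEvent(&e))                // non-blocking drain of the queue
                if (e.type == SDL_QUIT)
                    running = false;
            update();                                // placeholder game logic
            render();                                // placeholder drawing
            Uint32 elapsed = SDL_GetTicks() - start;
            if (elapsed < TARGET_MS)
                SDL_Delay(TARGET_MS - elapsed);      // hand the CPU back to the OS
        }
    }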
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important, e.g. for games where the player might want very small increments in movement. Quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method where keydown means "velocity = 1", keyup means "velocity = 0", and you only update position once per frame. If your tap happens to overlap with the frame render then you get one frame-duration of movement; if it does not, you get no movement - where what you really want is an amount of movement smaller than the length of a frame, based on the timestamps at which the events occurred.
Unfortunately SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the PumpEvents call, and WaitEvent effectively polls at 10ms intervals, so even with WaitEvent running in a separate thread, the most precision you'll get is 10ms (you could maybe approximate smaller by saying if you get a keydown and keyup in the same poll cycle then it's ~5ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvent. The best you can do for sub-framerate input precision is, if you have time to burn between frames, you have the option of being precise during that time, but then you'll get weird render-duration windows each frame where your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent to release the CPU to the operating system to handle other tasks, like processing user input. Hogging the CPU with a busy poll loop can manifest to your users as sluggish reaction to user input, since it can cause a delay between when they enter a command and when your application processes the event. By using SDL_WaitEvent instead, the OS can post events to your application more quickly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly less battery usage, since the OS has the opportunity to reduce overall CPU usage: your game would only use the CPU when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue, from another thread (e.g. the game logic thread), to wake up the main rendering thread with that message. It then goes through the loop to render a frame, swaps buffers, and returns to SDL_WaitEvent again, where another of these user-defined messages can be waiting to be picked up, telling it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events. In that the rendering thread is woken up by a "SDL_ANIMATEEVENT" and then one more frame of animation is done. But because the animation is not complete, the rendering thread itself posts a "SDL_ANIMATEEVENT" to its own queue, that'll trigger it to wake up again, when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. So that when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to do and what frame we're currently on.
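With SDL2 this idea might be sketched as follows; the "SDL_ANIMATEEVENT" name and the data1/data2 encoding come from the description above, while the function names are placeholders:

    #include <SDL.h>
    #include <cstdint>

    static Uint32 ANIMATE_EVENT;                     // the "SDL_ANIMATEEVENT" idea

    void init_animate_event()
    {
        ANIMATE_EVENT = SDL_RegisterEvents(1);       // allocate one custom event type
    }

    void request_frame(int animation_id, int frame)  // SDL_PushEvent is thread-safe
    {
        SDL_Event e;
        SDL_zero(e);
        e.type = ANIMATE_EVENT;
        e.user.data1 = reinterpret_cast<void*>(static_cast<intptr_t>(animation_id));
        e.user.data2 = reinterpret_cast<void*>(static_cast<intptr_t>(frame));
        SDL_PushEvent(&e);
    }

    void event_loop()
    {
        SDL_Event e;
        while (SDL_WaitEvent(&e)) {                  // blocks: ~0% CPU while idle
            if (e.type == SDL_QUIT)
                break;
            if (e.type == ANIMATE_EVENT) {
                // render one frame; push another ANIMATE_EVENT if not finished
            }
        }
    }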
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage to this and maybe it's not worth bothering with (though even games could benefit from going to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
You could actually initialise SDL and the window in the main thread and then create two more threads, one for updates (which just updates game states and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you can ensure that events are managed in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to successfully run three threads at the same time as mentioned above.
I had to use a mutex to make sure that textures/surfaces can also be transformed/changed in the update thread, by pausing the render thread; the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works best to create event driven games, run time games, or both.
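The mutex handshake described might look roughly like this (a minimal sketch with std::mutex; what the two threads actually touch is application-specific):

    #include <mutex>

    std::mutex tex_mutex;                            // guards the shared textures/surfaces

    void update_step()                               // update thread
    {
        std::lock_guard<std::mutex> lock(tex_mutex); // render thread pauses here
        // ... transform/replace textures or surfaces ...
    }

    void render_step()                               // render thread
    {
        std::lock_guard<std::mutex> lock(tex_mutex);
        // ... draw with the textures ...
    }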

What is the best way to handle event with SDL/C++

I am using SDL for the view parts of my game project, and I want to handle key press events without interrupting the main thread. So I decided to run an infinite loop in another view thread to catch any events and inform the main thread. However, I am not sure this is the best approach, since it may add load and decrease system performance. Is there a better way to do this kind of thing?
Thanks.
Don't bother with another thread. What's the point?
What does your main thread do? I imagine something like this:
Update Logic
Render
Goto 1
If you receive input after (or during) the update cycle then you have to wait till the next update cycle before you'll see the effects. The same is true during rendering. You might as well just check for input before the update cycle and do it all singlethreaded.
Input
Update Logic
Render
Goto 1
Multithreading gains nothing here and just increases complexity.
For some added reading, check out Christer Ericson's blog post about input latency (he's the director of technology for the team that makes God of War).
And I want to handle key press events without interrupting the main thread.
SDL is not inherently an interrupt or event driven framework. IO occurs by reading events off of the event queue by calling SDL_WaitEvent or SDL_PollEvent. This must occur in the "main" thread, the one that called SDL_SetVideoMode.
That's not to say you cannot use multiple threads, and there's good justification for doing so, for instance, it can simplify network communication if it doesn't have to rely on the SDL event loop. If you want the simulation to occur in a separate thread, then it can pass information back and forth through synchronized shared objects. In particular, you can always put events into the SDL event queue safely from any thread.

How to check if an application is in waiting

I have two applications running on my machine. One is supposed to hand over the work and the other is supposed to do the work. How can I make sure that the first application/process is in a wait state? I can check via the resources it's consuming, but that does not guarantee it. What tools should I use?
Your two applications should communicate. There are a lot of ways to do that:
Send messages through sockets. This way the 2 processes can run on different machines if you use normal network sockets instead of local ones.
If you are using C you can use semaphores with semget/semop/semctl. There should be interfaces for that in other languages.
Named pipes block until there is both a read and a write operation in progress. You can use that for synchronisation (see the sketch after this list).
Signals are also good for this. In C you send them with kill() and install handlers with sigaction().
DBUS can also be used and has bindings for various languages.
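As an illustration of the named-pipe option (POSIX; the path, payload, and function names are placeholders):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    void waiter()                                    // first app: blocks until told to go
    {
        mkfifo("/tmp/workpipe", 0600);               // create the pipe if needed
        int fd = open("/tmp/workpipe", O_RDONLY);    // blocks until a writer opens it
        char buf[64];
        read(fd, buf, sizeof buf);                   // blocks until data arrives
        close(fd);
        // ... continue with the work ...
    }

    void signaller()                                 // second app: releases the first
    {
        int fd = open("/tmp/workpipe", O_WRONLY);    // blocks until a reader opens it
        write(fd, "go", 2);
        close(fd);
    }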
Update: If you can't modify the processing application then it is harder. You have to rely on some signs that indicate the progress. (I am assuming your processing application reads a file, does some processing, then writes the result to an output file.) Do you know the final size the result should be? If so, you need to check the size repeatedly (or whenever it changes).
If you don't know the size but you know how the processing works, you may be able to use that. For example, the processing is done when the output file is closed. You can use strace to see all the system calls, including the close. You can also replace the close() function via the LD_PRELOAD environment variable (on Windows you would have to replace DLLs). This way you can sort of modify the processing program without recompiling it or even having access to its source.
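A sketch of the LD_PRELOAD interposition on Linux/glibc (the library and program names are placeholders; g++ defines _GNU_SOURCE, which RTLD_NEXT requires):

    #include <dlfcn.h>
    #include <cstdio>

    // Build:  g++ -shared -fPIC -o libspy.so spy.cpp -ldl
    // Run:    LD_PRELOAD=./libspy.so ./processing_app
    extern "C" int close(int fd)
    {
        using close_fn = int (*)(int);
        static close_fn real_close =
            reinterpret_cast<close_fn>(dlsym(RTLD_NEXT, "close"));
        std::fprintf(stderr, "close(%d)\n", fd);     // observe or notify the other app here
        return real_close(fd);                       // forward to the real close()
    }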
You can use named pipes: the first app will read from the pipe, and since it is empty the app will keep waiting (blocked). The second app will write into it when it wants the first one to continue.
Nothing can guarantee that your application is in a waiting state. You have to pass it some work and get back a response. It might be transactional or not: the application can confirm that it got the message before it starts to process it, or after it was processed (successfully or not). If it is not waiting, passing a piece of work should fail - whether when trying to write to a TCP/IP socket or by other means, or because a timeout occurs. This depends on the implementation, what kind of transport you are using, and other requirements.
There is actually a way of figuring out whether a process (thread) is in a blocking state, waiting for data on a socket (or another source), but that requires the client to be on the same computer with the access privileges needed to do it, and that makes no sense outside of debugging, which you can do with any debugger anyway.
Overall, the idea of making sure that the application is waiting for data before trying to pass it that data smells bad. Not to mention the race condition: what if you checked and it was OK, but when you actually tried to send the data you found that the application was not waiting at that moment (even if the gap is only microseconds)?