Scheduling concept in programming - user input - c++

I am curious how user input is handled in microcontrollers in such a way that all other work is not blocked.
For instance, I have a modern gas boiler (a Vaillant): the boiler runs its own tasks while I can scroll through the user menu, press buttons and so on.
How is this worked out from a conceptual point of view?
Is there another microcontroller which handles user input and then pushes the selected inputs to the main controller?
Or is there just some type of scheduler in the main controller which schedules so fast that it can handle user input AND background tasks?
How is this handled in general, so that the user can play around with the menu and so on without blocking the main task?
Thank you.

This can be handled in many different ways and, depending on the complexity of the overall application, it can be as simple as a super-loop, or as complex as a multitasking-based application with several independent tasks, each doing its own thing (e.g., one doing key press detection, another dealing with serial comms, another updating the [G]LCD, etc.).
Your particular example can easily be handled with the super-loop approach, although a multitasker can also be used, which (IMO) simplifies the coding.
For example, with the super-loop approach, each time through the loop you call a key press detection routine which checks if a key is pressed and counts time up to some maximum as long as the key press is still present. It does not block, it exits immediately. When the count reaches a minimum to accept the key (e.g., corresponding to about 50-100 msec) you return the key pressed and zero the counter (for auto key repeat), or save the key in a temporary buffer and return it only when the key is eventually released (if no auto key repeat is desired).
The display works in a similar way. The current screen is updated depending on which state the device is in. When the UP/DOWN key (for example) is detected, the index of the scrolling item changes up or down and the screen is redrawn with the new state.
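A minimal super-loop sketch in C++ along those lines (all hardware-facing functions are stubs standing in for the real thing, and the debounce threshold is an assumption, not taken from the answer above):

#include <cstdint>

enum Key { KEY_NONE, KEY_UP, KEY_DOWN, KEY_ENTER };

// --- stubs standing in for real hardware access (assumptions) ---
Key  read_keypad_pins()     { return KEY_NONE; }   // would read the GPIO pins
void update_menu_state(Key) {}                     // would move the menu selection
void update_display()       {}                     // would redraw the LCD if the state changed
void run_boiler_control()   {}                     // the "background" task
constexpr uint16_t DEBOUNCE_PASSES = 50;           // ~50 ms at roughly 1 ms per loop pass

// Non-blocking key scan: returns a key only once it has been seen
// continuously for DEBOUNCE_PASSES loop passes, otherwise KEY_NONE.
Key scan_keypad()
{
    static uint16_t stable_count = 0;
    Key raw = read_keypad_pins();
    if (raw == KEY_NONE) { stable_count = 0; return KEY_NONE; }
    if (++stable_count == DEBOUNCE_PASSES)
        return raw;                       // accepted exactly once per press
    return KEY_NONE;
}

int main()
{
    for (;;)                              // the super-loop: nothing in here blocks
    {
        Key key = scan_keypad();          // exits immediately every time
        if (key != KEY_NONE)
            update_menu_state(key);
        update_display();
        run_boiler_control();             // keeps running between key presses
    }
}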
There are certain situations where a multitasker is the only reasonable way to solve such problems, if you don't want your app to become an un-debuggable mess of flags and ifs. Dealing concurrently (and smoothly) with multiple interfaces (e.g., GPS, GSM, user terminal, key/LCD) is one such example.
BTW, interrupts for key presses are IMO overkill unless you are in some battery-saving sleep mode and need a hardware way to wake up. Human key presses are always far too slow compared to CPU speeds and can be detected reliably by simple polling.

Most CPUs have some form of interrupts (even the PC).
Basically the interrupt tells the CPU to stop what it is doing and handle some real-time event. When the interrupt handler is complete, the CPU resumes its original program.
More detailed information on interrupts is available on Wikipedia.
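A common bare-metal sketch of that idea in C++ (the handler name and the way the key code is obtained are made up; on a real microcontroller the vector name comes from the vendor's startup code): the interrupt handler does the bare minimum and sets a flag that the main program picks up later.

#include <cstdint>

volatile bool    key_event_pending = false;   // shared between the ISR and the main loop
volatile uint8_t key_code = 0;

// Hypothetical interrupt service routine, invoked by hardware on a pin change.
// It does as little as possible and returns quickly.
extern "C" void KEYPAD_IRQHandler()
{
    key_code = 0x01;                          // would be read from a hardware register
    key_event_pending = true;
}

int main()
{
    for (;;)
    {
        if (key_event_pending)                // the main program resumes here after the ISR
        {
            key_event_pending = false;
            // handle the key press at leisure, outside interrupt context
        }
        // ... other background work continues unblocked ...
    }
}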


How would I update a variable continuously and also wait for input at the same time?

In my little project, I've decided to create a game that updates a counter of the user's experience points every second, as well as printing a menu and allowing the user to navigate said menu simultaneously. The code to update the user's experience is as follows, and it works perfectly fine standalone.
#include <windows.h>   // GetTickCount
#include <cstdlib>     // system("CLS")
#include <iostream>

int userExperience = 0;

// prints the current experience value with cout
void refreshExperience() { std::cout << "XP: " << userExperience << '\n'; }

int main()
{
    double timerX = GetTickCount();
    double timerY = GetTickCount();
    while (true)
    {
        double timerZ = GetTickCount() - timerX;
        double timerA = GetTickCount() - timerY;
        if (timerZ >= 1000) {          // one second elapsed: award a point
            userExperience = userExperience + 1;
            timerX = GetTickCount();
        }
        if (timerA >= 1100) {          // redraw slightly less often than the update
            system("CLS");
            refreshExperience();
            timerY = GetTickCount();
        }
    }
}
The function 'refreshExperience()' simply prints the 'userExperience' variable onto the screen using 'cout'.
At the same time as this, my program should be able to display the main menu GUI and ask for input from the user. However, I do not want the asking of input to halt the program, especially the money updater, as it is paramount that that is updated constantly. I have attempted to use multithreading by creating a thread for the 'refreshExperience' function, and also creating a thread for asking for input, but the problem still remained - the money would only update if the user was continually inputting (pressing keys). If he was not, the money would stay the same.
Any help would be very much appreciated.
Getting input from the user with no discernible break in program execution is only possible in GUI programming. When working in the console, every request for user input will block for the obvious reason that the program has to wait to actually have the necessary data before proceeding.
This is also why you should initialize variables when you declare them; if you don't, stack-allocated variables will contain random (to you) data and the program will not function as intended. Conceptually, this is the same problem the console has, except it doesn't have the luxury of free will and can't simply choose to skip the wait.
Conceptually, programs that have a user interface work by running a loop. Every event that occurs, from a mouse movement to a button click, is dispatched to the window procedure. In the Win32 API, that procedure is essentially just a switch statement that checks each possible event against what actually happened. When there's a match, the system triggers that event handler.
It should be noted that it only seems like there is no lag, because usually graphical window procedures are fast enough to seem to respond instantaneously. In reality, any action on the window triggers a calculation by the computer to determine what part of the window was blocked and must be redrawn, as it is now called "invalid."
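A minimal sketch of that structure (assuming a window class has already been registered and a window created with it; the handlers here are just placeholders):

#include <windows.h>

// Window procedure: the switch dispatches each message to its handler.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_KEYDOWN:  /* react to a key press here */        return 0;
    case WM_PAINT:    /* redraw the invalidated region */    break;
    case WM_DESTROY:  PostQuitMessage(0);                    return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);  // default handling
}

// The message loop that feeds the window procedure.
void RunMessageLoop()
{
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);   // e.g. turns key presses into WM_CHAR messages
        DispatchMessage(&msg);    // calls WndProc for the target window
    }
}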
Lastly, I would highly recommend a different method for the scoreboard update. I know it's just a contrived example for you to experiment with, but that makes it just as good, if not better, for trying out some design patterns, namely the observer pattern. Having the program check for input on every clock cycle it can is just a waste. When you have a situation like this, it's common to use callback functions, which in C are just function pointers that you pass along. That way you don't have to keep checking to find out when the event is triggered; you can just have the event invoke the function that you passed in as a parameter. This is how Node.js works, by the way, and how it seems to do so much at once despite being single-threaded.
If you've heard anything about reactive programming lately (it's been getting talked about just about everywhere in the C# community these past few months), this is what it's about, and the reason I bring it up is that this is one of the more common, if trivial, textbook reactive programming scenarios.
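A minimal sketch of that callback idea in C++ (the class ExperienceCounter and the onChange name are made up for illustration): instead of polling the counter, the code that changes it invokes a callback registered by the display code.

#include <functional>
#include <iostream>

// Hypothetical subject: holds the experience value and notifies one observer.
class ExperienceCounter {
public:
    void setOnChange(std::function<void(int)> cb) { onChange = std::move(cb); }
    void add(int points) {
        experience += points;
        if (onChange) onChange(experience);   // push the update, no polling needed
    }
private:
    int experience = 0;
    std::function<void(int)> onChange;
};

int main() {
    ExperienceCounter xp;
    xp.setOnChange([](int value) { std::cout << "XP: " << value << '\n'; });
    xp.add(1);   // the display reacts only when something actually changed
    xp.add(5);
}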

win32 raw keyboard input remove autorepeat

So the problem at hand is pretty much the following:
Windows key repeat settings affecting Raw Input messages
Although this might make it a duplicate, no answer was provided there, so here it goes:
I am under the impression that, e.g. for FPS game development, one should use raw input. The problem, however, is that the input is not so raw after all: for a continuous keydown it includes an initial delay, and only after that delay does a continuous key press arrive, that is, a continuous flow of WM_INPUT messages. When using DirectInput (which is deprecated), I do not have those problems. Is there a way to achieve the same thing using only raw input? To be clear, what I want is that, if I press a key continuously, I continuously get WM_INPUT messages without the initial delay caused by autorepeat.
I am using the raw input standard read, not the buffered one (https://msdn.microsoft.com/en-us/library/windows/desktop/ms645546(v=vs.85).aspx)
What is the difference between the aforementioned standard raw input reading and the buffered one?
DirectInput is an outdated, asynchronous abstraction layer that does exactly the same thing: it processes raw input. It is not recommended unless you need to support joysticks or other legacy devices; for gamepads, XInput is recommended instead.
Windows is not a real-time OS, so the best option is to stick to WM_INPUT messages. This requires maintaining an array of key states (bool keyState[256]) and basing your logic on checks like if (keyState[VK_BACK]) { ... }.
If you also want to catch press-start and release events, you will have to maintain an array of previous key states; when analyzing WM_INPUT, check for a change and produce the press-start event only if the last state of the key was false and the WM_INPUT message says the key is pressed now.
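A minimal sketch of that bookkeeping (it assumes the keyboard has already been registered with RegisterRawInputDevices; HandleRawInput would be called from the window procedure when WM_INPUT arrives):

#include <windows.h>

static bool keyState[256];      // current key state, indexed by virtual-key code
static bool lastKeyState[256];  // state before the current WM_INPUT message

void HandleRawInput(LPARAM lParam)
{
    RAWINPUT raw;
    UINT size = sizeof(raw);
    if (GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                        &raw, &size, sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;

    if (raw.header.dwType != RIM_TYPEKEYBOARD)
        return;

    USHORT vkey = raw.data.keyboard.VKey;
    bool down = !(raw.data.keyboard.Flags & RI_KEY_BREAK);  // RI_KEY_BREAK means the key was released

    lastKeyState[vkey] = keyState[vkey];
    keyState[vkey] = down;

    if (down && !lastKeyState[vkey]) {
        // press-start: the key just went from up to down.
        // Autorepeat produces further "down" messages, but they are
        // ignored here because lastKeyState is already true.
    } else if (!down && lastKeyState[vkey]) {
        // release event
    }
}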
The other option is to use GetAsyncKeyState to manually check all the input regularly. But that will leave you without the ability to catch a key press that happened between two of your calls to GetAsyncKeyState. The documentation of the function says that the lower bit tells you exactly that, but that bit is shared among all applications and can be reset by another app, which is sad.
If I understand you correctly, what you want is just the momentary key state, which can easily be obtained through helper classes like Keyboard. It does not use WM_INPUT though, so a little extra latency may occur in the window-message layer.
auto kb = keyboard->GetState();
if (kb.Back)
// Backspace key is down, with no delay of waiting for key repeat

Progress Bar with Gtkmm

Hello, I am looking for a signal in gtkmm. Basically I am doing some simulations and what I want is something like this:
Assume I do 5 simulations:
progressBar.set_fraction(0);
// simulation 1
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 2
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 3
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 4
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 5
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
But I don't know which signal I have to use and how to translate this.
Thank you a lot for your help!
The pseudo code which you presented in your question should actually work - no signal is necessary. However, you could introduce a signal into your simulation for updating the progress bar. IMHO this will not solve your problem, and I will try to explain why, and what to do to solve it:
You provided a bit too little context, so I will introduce some assumptions: you have a main window with a button or toolbar item or menu item (or even all of them) which starts the simulation.
Let's imagine you set a breakpoint at Gtk::ProgressBar::set_fraction().
Once the debugger stops at this breakpoint you will find the following calls on the stack trace (probably with many other calls in between):
Gtk::Main::run()
the signal handler of the widget or action which started the simulation
the function which runs the five simulations
and last the call of Gtk::ProgressBar::set_fraction().
If you could inspect the internals of Gtk::ProgressBar you would notice that everything in Gtk::ProgressBar::set_fraction() is done properly. So what's wrong?
When you call Gtk::ProgressBar::set_fraction() it probably generates an expose event (i.e. it adds an event to the event queue inside Gtk::Main requesting its own refresh). The problem is that you probably do not process that request until all five runs of the simulation are done. (Remember that Gtk::Main::run(), which is responsible for this, is the uppermost/outermost call in my imaginary stack trace.) Thus, the refresh does not happen until the simulation is over - that's too late. (Btw. the Gtk+ authors state somewhere in the manual that events are cleverly optimized, i.e. there might finally be only one expose event for the Gtk::ProgressBar in the event queue, but that does not make your situation any better.)
Thus, after you have called Gtk::ProgressBar::set_fraction() you must somehow flush the event queue before making further progress with your simulation.
This sounds like leaving the simulation, leaving the calling widget's signal handler, returning to Gtk::Main::run() for further event processing and finally coming back for the next simulation step - a terrible idea. But it can be done much more simply. For this, we essentially use the following code (in gtkmm 2.4):
while (Gtk::Main::events_pending()) Gtk::Main::iteration(false);
(This should hopefully be the same in the gtkmm version you use but if in doubt consult the manual.)
It should be done immediately after updating the progress bar fraction and before the simulation continues.
This recursively enters (parts of) the main loop and processes all pending events in the event queue of Gtk::Main, and thus the progress bar is redrawn before the simulation continues. You may be concerned about "recursively entering the main loop", but I read somewhere in the GTK+ manual that it is allowed (and a reasonable way to solve problems like this) and what to take care about (i.e. limit the number of recursions and grant a proper "roll-back").
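Put together, a sketch of the simulation loop with that flush (run_one_simulation() stands in for your own code, and progressBar is the widget from your question):

const int n_runs = 5;
progressBar.set_fraction(0.0);
for (int i = 0; i < n_runs; ++i)
{
    run_one_simulation(i);                           // your long-running work
    progressBar.set_fraction(double(i + 1) / n_runs);
    while (Gtk::Main::events_pending())              // flush the queue so the bar
        Gtk::Main::iteration(false);                 // is redrawn right now
}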
What in your case is the simulation we call, in general, a long-running function. Because such long-running functions are algorithms (in libraries of any kind) which shall not be polluted with any GUI stuff, we built some administrative infrastructure around this basic concept, including
a progress "proxy" object with an update(double) method and a signal slot
a customized progress dialog which can connect a signal handler to such a progress object (i.e. its signal slot).
The long-running function gets a progress object (as an argument) and is responsible for calling the Progress::update() method at appropriate intervals with an appropriate progress factor. (We simply use values in the range [0, 1].)
One issue is the interval at which the progress update is called. If it is called too often, the GUI will slow down your long-running function significantly. The opposite case (calling it not often enough) results in a less responsive GUI. Thus, we decided in favor of frequent progress updates. To lower the time consumed by the GUI, we remember the time of the last update in our progress dialog and skip further refreshes until a certain duration since the last refresh has passed. Thus, the long-running function still spends some extra effort on progress updates, but it is not noticeable anymore. (A good refresh interval is IMHO 0.1 s - the perception threshold of humans - but you may choose 0.05 s if in doubt.)
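A rough sketch of such a progress proxy with that time-based throttling (the class name, the sigc++ signal and the 0.1 s threshold are our own conventions from this description, not a gtkmm API; cancellation is left out):

#include <glibmm/timer.h>
#include <sigc++/sigc++.h>

// Progress proxy: the long-running function only ever sees update();
// the GUI connects a handler to signal_update and refreshes the bar there.
class Progress
{
public:
    sigc::signal<void, double> signal_update;

    void update(double fraction)             // fraction in [0, 1]
    {
        if (timer.elapsed() < 0.1)            // skip refreshes closer than 0.1 s apart
            return;
        timer.reset();
        signal_update.emit(fraction);         // the connected handler updates the bar
    }

private:
    Glib::Timer timer;                        // starts running on construction
};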
Flushing all pending events results in processing mouse events (and other GTK+ signals) as well. This enables another useful feature: aborting the long-running function.
When the "Cancel" button of our progress dialog is pressed it sets an internal flag. The next time the progress is updated, it checks the flag. If the flag has become true it throws a special exception, and the throw aborts the caller of the progress update (the long-running function) immediately. This exception must be caught in the signal handler of the button (or whatever called the long-running function). Otherwise, it would "fall through" to the event dispatcher in Gtk::Main, where it is definitely caught, which would abort your application. (I saw that often enough whenever I forgot to catch.) On the other hand, catching the special exception tells you clearly that the long-running function has been aborted (as opposed to having ended with a regular return). This may or may not be something worth indicating in the GUI as well.
Finally, the above solution can cause another issue: it makes it possible to start the simulation (via the GUI) while a simulation is already running. This is possible because button presses that start the simulation can be processed during a progress update. To prevent this, there is actually a simple solution: set a flag in the GUI when the simulation starts, clear it when the simulation has finished, and prevent further starts while the flag is set. Another option is to make the widget/action insensitive while the simulation is running. This topic becomes more complicated if you have multiple distinct long-running functions in your application which may or may not exclude each other - that leads to something like an exclusion matrix. Well, we solved it pragmatically... (but without the matrix).
And last but not least I want to mention that we use a similar concept for the output of log views (e.g. visual logging of infos, warnings, and errors while anything long-running is in progress). IMHO it is always good to provide some visible activity for end users. Otherwise, they might get bored and pick up the telephone to complain about the (too) slow software, which actually steals the very time you need to make it faster (a vicious circle you have to break...).

SDL_PollEvent vs SDL_WaitEvent

So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how this would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things which don't require constant updates, i.e. if you had a game running you would perform game logic each frame?
The only things I can think it could be used for are programs like a paint program where there is only action required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU as the article indicated, you could mitigate that by adding a sleep in your loop when you detect that your game is running faster than the required frames per second.
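A sketch of that mitigation (SDL 2 style; the 16 ms figure is just an assumed target for roughly 60 fps):

#include <SDL.h>

// Poll events every frame, then give any unused frame time back to the OS.
void run_loop()
{
    const Uint32 frame_ms = 16;               // ~60 fps target (assumption)
    bool running = true;
    while (running)
    {
        Uint32 start = SDL_GetTicks();

        SDL_Event e;
        while (SDL_PollEvent(&e))             // drain the queue without blocking
            if (e.type == SDL_QUIT)
                running = false;

        // update(); render();                // game logic and drawing go here

        Uint32 elapsed = SDL_GetTicks() - start;
        if (elapsed < frame_ms)
            SDL_Delay(frame_ms - elapsed);    // sleep instead of spinning at 100% CPU
    }
}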
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important for, e.g., games where the player might want very small increments in movement - quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method of keydown to mean "velocity = 1" and keyup to mean "velocity = 0" and then you only update position once per frame. If your tap happens to overlap with the frame render then you get one frame-duration of movement, if it does not you get no movement, where what you really want is an amount of movement smaller than the length of a frame based on the timestamps at which the events occurred.
Unfortunately SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the PumpEvents call, and WaitEvent effectively polls at 10ms intervals, so even with WaitEvent running in a separate thread, the most precision you'll get is 10ms (you could maybe approximate smaller by saying if you get a keydown and keyup in the same poll cycle then it's ~5ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvent. The best you can do for sub-frame input precision is this: if you have time to burn between frames, you can poll precisely during that time, but then you'll get a render-duration window each frame in which your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent, so that the CPU is released to the operating system to handle other tasks, like processing user input. Busy-polling instead can manifest to your users as a sluggish reaction to input, since it can introduce a delay between when they enter a command and when your application processes the event. By using SDL_WaitEvent, the OS can post events to your application more quickly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly less battery usage, since the OS has the opportunity to reduce overall CPU usage: your game isn't using the CPU 100% of the time, only when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue, from another thread (e.g. the game logic thread), to wake up the main rendering thread with that message. It then goes through the loop to render a frame, swaps buffers and returns to SDL_WaitEvent again, where another of these user-defined messages may already be waiting to be picked up, telling it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events, in which the rendering thread is woken up by an "SDL_ANIMATEEVENT" and then one more frame of animation is done. But because the animation is not complete, the rendering thread itself posts an "SDL_ANIMATEEVENT" to its own queue, which will trigger it to wake up again when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. That way, when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to run and which frame it is currently on.
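A sketch of how such a user-defined event can be registered and pushed (SDL 2; "SDL_ANIMATEEVENT" is this answer's own name, allocated at runtime, and packing the animation ID and frame counter into data1/data2 is just one possible convention):

#include <SDL.h>
#include <cstdint>

static Uint32 SDL_ANIMATEEVENT;                // allocated once at startup

void init_animate_event()
{
    SDL_ANIMATEEVENT = SDL_RegisterEvents(1);  // reserve one user event type
}

// Ask the rendering thread (blocked in SDL_WaitEvent) to do one more frame.
void post_animate(int animation_id, int frame)
{
    SDL_Event ev;
    SDL_zero(ev);
    ev.type = SDL_ANIMATEEVENT;
    ev.user.code  = 0;
    ev.user.data1 = reinterpret_cast<void*>(static_cast<intptr_t>(animation_id));
    ev.user.data2 = reinterpret_cast<void*>(static_cast<intptr_t>(frame));
    SDL_PushEvent(&ev);                        // safe to call from any thread
}

// In the rendering thread's loop:
//   SDL_Event ev;
//   while (SDL_WaitEvent(&ev)) {
//       if (ev.type == SDL_ANIMATEEVENT) {
//           int id    = static_cast<int>(reinterpret_cast<intptr_t>(ev.user.data1));
//           int frame = static_cast<int>(reinterpret_cast<intptr_t>(ev.user.data2));
//           // draw this frame; call post_animate(id, frame + 1) if not finished
//       }
//   }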
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage to this and maybe it's not worth bothering with (though even games could benefit from going to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
You could actually initialise SDL and the window in the main thread and then create two more threads, one for updates (which just updates game state and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you can ensure that events are managed in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to successfully run the three threads mentioned above at the same time.
I had to use a mutex to make sure that textures/surfaces can also be transformed/changed in the update thread by pausing the render thread, and the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works well for creating event-driven games, real-time games, or both.

What is the best way to handle event with SDL/C++

I am using SDL for the view parts of my game project, and I want to handle key press events without interrupting the main thread. So I decided to run an infinite loop in another view thread to catch any events and inform the main thread. However, I am not sure this is the best approach, since it may add workload and decrease system performance. Is there a better way to do this kind of thing?
Thanks.
Don't bother with another thread. What's the point?
What does your main thread do? I imagine something like this:
Update Logic
Render
Goto 1
If you receive input after (or during) the update cycle then you have to wait till the next update cycle before you'll see the effects. The same is true during rendering. You might as well just check for input before the update cycle and do it all singlethreaded.
Input
Update Logic
Render
Goto 1
Multithreading gains nothing here and just increases complexity.
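A sketch of that single-threaded structure with SDL (update() and render() are placeholders for your own logic and drawing):

#include <SDL.h>

// One thread does everything, in the order: input, update, render.
void game_loop()
{
    bool running = true;
    while (running)
    {
        // 1. Input: drain all events that arrived since the last frame
        SDL_Event e;
        while (SDL_PollEvent(&e))
        {
            if (e.type == SDL_QUIT)
                running = false;
            // record key/mouse state here for the update step below
        }

        // 2. Update logic using the input gathered above
        // update();

        // 3. Render the new state
        // render();
    }
}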
For some added reading, check out Christer Ericson's blog post about input latency (he's the director of technology for the team that makes God of War).
And I want to handle key press events without interrupting the main thread.
SDL is not inherently an interrupt or event driven framework. IO occurs by reading events off of the event queue by calling SDL_WaitEvent or SDL_PollEvent. This must occur in the "main" thread, the one that called SDL_SetVideoMode.
That's not to say you cannot use multiple threads, and there's good justification for doing so, for instance, it can simplify network communication if it doesn't have to rely on the SDL event loop. If you want the simulation to occur in a separate thread, then it can pass information back and forth through synchronized shared objects. In particular, you can always put events into the SDL event queue safely from any thread.