Could ticker interrupts interfere with hardware interrupts? - c++

Background
I was wondering if using ticker interrupts could interfere with hardware interrupts triggered by a button press.
Example
Imagine I would like to use these two types of interrupts:
a ticker timer to update a progress bar on a small display every n seconds
a hardware interrupt that starts/stops the process whose progress is displayed if the user presses a button
Important: Both interrupts set shared global volatile flags.
Main question
Would it be possible for the ticker interrupt to occur during a button-induced interrupt and, as a result, for the program to end up in a state where the global flags are set contradictorily?
More specific questions
Do a hardware interrupt and a ticker (timer) interrupt have the same 'rank', i.e. priority?
If they occurred at the same time, would the interrupt request arriving slightly later (but still overlapping with the first one) be ignored, or would it be queued and executed straight after the first interrupt has finished? In the latter case, the flags could be set in an unexpected order.
Can I disable one type of interrupt inside the other type's ISR, i.e. ignore it?
I hope the problem statement is clear enough even without a code example.

I'm assuming you are using an AVR.
When an interrupt fires, other interrupts are disabled while the interrupt routine is running, so any interrupts that occur during this time simply get flagged as pending. When the interrupt routine returns, the global interrupt flag is re-enabled and any pending interrupts then fire one at a time.
You can manually re-enable global interrupts inside the routine for critical things that must run, but they are disabled by default.
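For instance, with avr-libc you can ask for global interrupts to be re-enabled as the very first instruction of an ISR (a minimal sketch, assuming avr-libc; the timer compare vector is picked purely as an example):
#include <avr/interrupt.h>

// ISR_NOBLOCK makes the compiler issue sei() on entry, so other
// interrupts may preempt this (now nestable) routine.
ISR(TIMER1_COMPA_vect, ISR_NOBLOCK) {
  // ... critical work that must not wait for other long-running ISRs ...
}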
EDIT:
Is there a way to disable this flag setting? I don't want the ticker timer to perform an interrupt once the button has been pressed. This is why I asked about ranks and the ability to disable one type of interrupt, if there is such a thing.
You can clear the pending interrupt, however you'll have to read the datasheet for your Arduino's AVR. You need to find the register for the external interrupt.
For example, on an ATmega328P, a pending external interrupt 0 can be cleared by writing a 1 to its flag bit:
EIFR = (1 << INTF0);
EIFR = External Interrupt Flag Register
INTF0 = Bit 0: External Interrupt Flag 0
(The plain assignment is deliberate: using |= would read the register and write back every set bit, clearing all pending external-interrupt flags rather than just this one.)
However, it may be far simpler to poll the button in your loop() function. Or better still, simply set a flag in the ISR for you to act upon back in the loop() function. There you can decide whether to react to the interrupt or ignore it.
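A minimal sketch of that flag-based pattern, assuming the button is on pin 2 of an Uno-class board (the pin and names are illustrative):
volatile bool buttonPressed = false;   // shared with the ISR, hence volatile

void onButton() {
  buttonPressed = true;                // keep the ISR tiny: just record the event
}

void setup() {
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(2), onButton, FALLING);
}

void loop() {
  noInterrupts();                      // read-and-clear as one atomic step
  bool pressed = buttonPressed;
  buttonPressed = false;
  interrupts();
  if (pressed) {
    // decide here whether to act on the press or ignore it
  }
}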
There is also the issue of making your ISRs too large. If you rely on timing or need accuracy, a long ISR can throw it off considerably over time. Since the pending-interrupt "queue" is only one deep, some interrupts could be lost. And the interrupt that drives millis() and micros() runs multiple times per millisecond, so a bulky ISR can effectively slow down time.
Also, do you have any debouncing code or hardware?
If not, the ISR handling the button could run multiple times on a single press.
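A sketch of a simple software debounce, replacing onButton() from the sketch above (the 50 ms window is an arbitrary illustrative value):
volatile unsigned long lastPressMs = 0;  // time of the last accepted press

void onButton() {
  unsigned long now = millis();          // readable inside an ISR; it just won't advance here
  if (now - lastPressMs > 50) {          // ignore bounce edges within 50 ms of the last press
    lastPressMs = now;
    buttonPressed = true;
  }
}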

Related

Multiple triggers on a single interrupt on Arduino

I am working on Arduinos and I would like to use interrupts to read a rotary encoder, but I would like to keep the interrupt code to a minimum.
Can I use multiple triggers on a single interrupt?
I would like to replace my current code:
attachInterrupt(0, ChangeA, CHANGE);
To something like
attachInterrupt(0, FallingA, FALLING);
attachInterrupt(0, RisingA, RISING);
Is it possible?
No, not exactly. The external interrupt hardware is internally configured to react to only one of those conditions at a time.
One thing you can do is to use a CHANGE interrupt service routine and test the value of the pin at the beginning of the ISR to do FallingA() or RisingA().
There is a potential problem here. The pin may have changed again before you test it, so the edge that triggered the interrupt could have been falling, say, while your test determines that it was rising. One way to guard against two quick changes is to check whether the interrupt flag is set again: it was cleared by hardware when the ISR started, so if it is set now, another change has happened in the meantime. There is no practical way to guard against three quick changes.
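A sketch of that approach, assuming the encoder's A signal is on external interrupt 0 (pin 2 on an Uno); FallingA() and RisingA() stand in for your existing handlers, and the EIFR/INTF0 re-check is AVR-specific:
void FallingA() { /* your falling-edge handling */ }
void RisingA()  { /* your rising-edge handling */ }

void ChangeA() {
  if (digitalRead(2) == LOW) {
    FallingA();                 // pin is low now, so the triggering edge was presumably falling
  } else {
    RisingA();
  }
  if (EIFR & (1 << INTF0)) {
    // The flag is already set again: the pin changed once more while we
    // were in here, so the level read above may belong to the newer edge.
  }
}

void setup() {
  attachInterrupt(0, ChangeA, CHANGE);
}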
If you really want to use two separate ISRs, do you have an extra pin available? If so, you could just wire the pins together and run a FALLING ISR on one and a RISING ISR on the other.

what happens if interrupt occurs while ISR running?

I am programming an Arduino and I have attached an interrupt to pin 2 on the falling edge. If another falling edge arrives before the ISR has finished executing all of its lines, what happens? Does the ISR start again from the beginning, or is the new edge ignored? Here I am talking only about the interrupt on pin 2.
The Atmel processor disables interrupts when an interrupt is taken:
(Section 4.4: Bit 7 – I: Global Interrupt Enable)
The Global Interrupt Enable bit must be set for the interrupts to be
enabled. The individual interrupt enable control is then performed
in separate control registers. If the Global Interrupt Enable Register
is cleared, none of the interrupts are enabled independent of the
individual interrupt enable settings. The I-bit is cleared by hardware
after an interrupt has occurred, and is set by the RETI instruction to
enable subsequent interrupts. The I-bit can also be set and cleared by
the application with the SEI and CLI instructions, as described in the
instruction set reference.
Further:
External Interrupt Flag Register – EIFR
• Bits 7..0 – INTF6, INTF3 - INTF0: External Interrupt Flags 6, 3 - 0
When an edge or logic change on the INT[6;3:0] pin triggers an
interrupt request, INTF7:0 becomes set (one). If the I-bit in SREG and
the corresponding interrupt enable bit, INT[6;3:0] in EIMSK, are set
(one), the MCU will jump to the interrupt vector. The flag is cleared
when the interrupt routine is executed. Alternatively, the flag can be
cleared by writing a logical one to it. These flags are always cleared
when INT[6;3:0] are configured as level interrupt. Note that when
entering sleep mode with the INT3:0 interrupts disabled, the input
buffers on these pins will be disabled. This may cause a logic change
in internal signals which will set the INTF3:0 flags.
In other words, when another interrupt is detected, the corresponding bit in the flag register is set, and that interrupt is taken when interrupts are enabled again (at return from the interrupt, if no separate action is taken).
http://www.atmel.com/Images/Atmel-7766-8-bit-AVR-ATmega16U4-32U4_%20Datasheet.pdf
If you want to, you can write code that re-enables interrupts during the interrupt service routine, but you have to make sure the code after that point is fully re-entrant, and/or mask the current interrupt (some interrupt service routines are hard enough to handle when you get another interrupt soon after; it gets almost impossible if you get another one while you are still in the handler). It is, however, common for proper operating systems to re-enable all other interrupts - which on this chip means writing to the EIMSK register.
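A sketch of that mask-then-re-enable pattern in raw avr-libc, using INT0 on an ATmega328P purely as an example (adapt the register, bit and vector names to your chip):
#include <avr/interrupt.h>

ISR(INT0_vect) {
  EIMSK &= ~(1 << INT0);   // mask this interrupt so it cannot nest on itself
  sei();                   // let all other interrupts nest from here on
  // ... longer work, which must be safe against nested interrupts ...
  cli();                   // disable interrupts again before unmasking
  EIMSK |= (1 << INT0);    // unmask; RETI re-enables global interrupts on return
}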
As a general rule, it's best to simply collect the necessary information in the interrupt handler, store it away in a "safe" place (circular buffers are good for this), signal to a regular task in the system that new data is available, and process the data there.
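A minimal sketch of that circular-buffer hand-off (the names and buffer size are illustrative; with a single producer in the ISR and a single consumer in loop(), byte-sized indices are safe without locks on an 8-bit AVR):
#define BUF_SIZE 16                      // power of two, so the wrap mask works

volatile uint8_t buf[BUF_SIZE];
volatile uint8_t head = 0;               // written only by the ISR
volatile uint8_t tail = 0;               // written only by the consumer

void storeSample(uint8_t sample) {       // call this from your ISR
  uint8_t next = (head + 1) & (BUF_SIZE - 1);
  if (next != tail) {                    // drop the sample if the buffer is full
    buf[head] = sample;
    head = next;
  }
}

void loop() {
  while (tail != head) {                 // the "new data available" signal
    uint8_t sample = buf[tail];
    tail = (tail + 1) & (BUF_SIZE - 1);
    // ... process the sample here, outside the ISR ...
  }
}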
[Additionally, as far as I can tell, there is nothing stopping you from making function calls inside an interrupt - as long as you understand what you are doing and there are no problems from, for example, calling the same function from both the interrupt and the regular code at the same time.]

Detecting and recovering from Windows TDR?

I've run into an odd issue with some OpenCL code that I'm working on where every once in a blue moon, Windows TDR will kick in and reset the GPU. The offending kernel runs for only 150ms and will run thousands of times (over the course of many hours) before the TDR kills it off, so I'm certain that the kernel itself isn't to blame.
My concern is that once the TDR kicks in, the kernel dies and the program is stuck in an eternal state of limbo. From what I can tell the call to clFinish never returns.
Is there a way to detect if a kernel has died off so that it can be handled gracefully?
I managed to come up with a solution, although it's far from optimal.
I've modified the program so that the OpenCL processing is done in a separate thread. I created a watchdog variable shared between the parent thread and the processing thread. When the parent spawns the processing function as a thread, it sets the variable to the current time in milliseconds. When the processing thread finishes, it resets the watchdog variable to zero.
While the parent thread waits for the processing thread to finish, it keeps an eye on the watchdog timer. If the timer exceeds a certain threshold, the program forcefully terminates itself without waiting for the processing thread to return.
This solution works whether or not Windows TDR is enabled. If TDR is enabled and the driver resets, the call to clFinish() will never return and the parent will terminate once the watchdog timer trips. If TDR is not enabled, the runaway kernel will freeze the display, but once the watchdog timer trips, the parent will terminate processing, ending the freeze.
Now that I have a watchdog set up, I simply wrapped my program in a script: if it terminates with an error (a positive return code), the program is rerun.
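A condensed sketch of that watchdog arrangement (std::thread and std::atomic here stand in for whatever threading API the real program uses; the 5-second threshold is illustrative):
#include <atomic>
#include <chrono>
#include <cstdlib>
#include <thread>

std::atomic<long long> watchdogMs{0};     // 0 means "processing finished"

long long nowMs() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

void processingThread() {
  // ... enqueue the kernel, then the call that may never return under TDR:
  // clFinish(queue);
  watchdogMs = 0;                         // signal normal completion
}

int main() {
  watchdogMs = nowMs();
  std::thread worker(processingThread);
  worker.detach();                        // we may exit without it ever returning

  const long long limitMs = 5000;
  while (watchdogMs != 0) {
    if (nowMs() - watchdogMs > limitMs)
      std::exit(1);                       // positive code tells the wrapper script to rerun
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
  }
  return 0;
}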
Ideally, you would get an error code from clFinish or from clWaitForEvents on the OpenCL event object generated when enqueuing the kernel. But since TDR resets the graphics driver, I don't think any OpenCL implementation will keep working reliably afterwards, which means there is no recovery route.
Rather, disable TDR completely. It is only worthwhile when you are debugging code that gets stuck in an infinite loop and keeps the GPU permanently busy.
If you want to keep TDR but can change the code, then using some sort of thread-sleep function to delay your code for a few milliseconds could also alleviate this problem, at the expense of some processing speed. This gives the graphics card a chance to respond to display-rendering commands so that TDR is not triggered.

How to test Interrupt Latency?

Windows Embedded Compact 7.
Is there a way to test interrupt latency time from user space?
Are there any tools provided as part of platform builder?
I also saw a program called Intrtime.exe - but no examples on how to use it.
How does one test the interrupt latency time?
Here is a reference for Intrtime.exe, but how do I use it?
http://www.ece.ufrgs.br/~cpereira/temporeal_pos/www/WindowsCE2RT.htm
EDIT
Also found:
ILTiming.exe Real-Time Measurement Tool (Compact 2013)
http://msdn.microsoft.com/en-us/library/ee483144.aspx
This really is a test that requires hardware, and there are a couple of "latencies" you might measure. One is the time from the interrupt signal to when the driver's ISR reacts, and the second is from when the interrupt occurs to when an IST reacts.
I did this back in the CE 3.0/CE 4.0 days by attaching a signal generator to an interruptible input and then having the ISR pulse a second pin and the IST pulse a third pin when they received the interrupt. I hooked a scope up to the input and outputs and used it to measure the time between the input signal and the output signals, which gives you not just latency but also jitter. You could easily add a fourth line for CE 7 so you could compare an IST in user space with an IST in kernel space. I'd definitely be interested to see the results.
I don't think you can effectively measure this with software running on the platform, as you get into the problem of the code trying to do the measurement affecting the results. You're also talking time way, way below the system tick resolution so the scheduler is going to be problematic as well. CeLog might be able to get you an idea on these times, but getting it set up and running is probably more work than just hooking up a scope.
What is usually meant by interrupt latency is the time between an interrupt source asserting the interrupt line and a thread (sometimes in user-space) being scheduled and then executing as a result.
Unless your CPU has some accurate way of time-stamping interrupt events as they arrive at the CPU (rather than when an ISR runs), the only truly accurate measurement is one done externally - by measuring the time between the interrupt line being asserted and some observable signal that the thread responding to the interrupt can control. A DSO or logic analyser is usually used for this purpose.
Software techniques usually rely on storing an accurate timestamp at the earliest opportunity in an ISR. If you're certain the time between the interrupt line becoming asserted and the ISR running is negligible, this may be valid. If, on the other hand, disabling of interrupts is being used to control concurrency, or interrupts are nested, you probably want to measure this as well.

SDL_PollEvent vs SDL_WaitEvent

So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how this would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things that don't require constant updates? I.e., if you had a game running, you would perform game logic each frame.
The only things I can think it could be used for are programs like a paint program where there is only action required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU as the article indicated, you could mitigate that by adding a sleep in your loop when you detect that your game is running at more than the required frames per second.
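A sketch of that mitigation with a fixed 60 FPS cap (SDL 2; the window setup is just enough to make the loop self-contained):
#include <SDL.h>

int main(int, char**) {
  SDL_Init(SDL_INIT_VIDEO);
  SDL_Window* win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                     SDL_WINDOWPOS_CENTERED, 640, 480, 0);
  const Uint32 frameMs = 1000 / 60;          // ~16 ms per frame at 60 FPS
  bool running = true;
  while (running) {
    Uint32 start = SDL_GetTicks();
    SDL_Event e;
    while (SDL_PollEvent(&e)) {              // drain all pending events without blocking
      if (e.type == SDL_QUIT) running = false;
    }
    // ... update game state and render here ...
    Uint32 elapsed = SDL_GetTicks() - start;
    if (elapsed < frameMs)
      SDL_Delay(frameMs - elapsed);          // sleep off the spare time instead of spinning
  }
  SDL_DestroyWindow(win);
  SDL_Quit();
  return 0;
}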
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important for, e.g., games where the player might want very small increments in movement - quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method of keydown meaning "velocity = 1" and keyup meaning "velocity = 0", and then only update position once per frame. If your tap happens to overlap with the frame render, you get one frame-duration of movement; if it does not, you get no movement, where what you really want is an amount of movement smaller than the length of a frame, based on the timestamps at which the events occurred.
Unfortunately SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the PumpEvents call, and WaitEvent effectively polls at 10ms intervals, so even with WaitEvent running in a separate thread, the most precision you'll get is 10ms (you could maybe approximate smaller by saying if you get a keydown and keyup in the same poll cycle then it's ~5ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvent. The best you can do for sub-framerate input precision is, if you have time to burn between frames, to be precise during that window; but then you get a render-duration window each frame where your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent, to release the CPU to the operating system so it can handle other tasks, like processing user input. A busy poll loop can manifest to your users as sluggish reaction to input, since it can cause a delay between when they enter a command and when your application processes the event. By using SDL_WaitEvent instead, the OS can post events to your application more promptly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly lower battery usage, since the OS has the opportunity to reduce overall CPU usage: your game isn't using the CPU 100% of the time, only when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue from another thread (e.g. the game logic thread) to wake up the main rendering thread with that message. It then goes through the loop, renders a frame, swaps buffers and returns to SDL_WaitEvent again - where another of these user-defined messages may already be waiting to tell it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events: the rendering thread is woken up by an "SDL_ANIMATEEVENT" and does one more frame of animation. If the animation is not yet complete, the rendering thread posts another "SDL_ANIMATEEVENT" to its own queue, which will wake it up again when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. So that when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to do and what frame we're currently on.
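A sketch of pushing such an event in SDL 2 ("SDL_ANIMATEEVENT" is just a variable holding an event type obtained from SDL_RegisterEvents; the animation-ID and frame-counter payloads are illustrative):
#include <SDL.h>
#include <cstdint>

static Uint32 SDL_ANIMATEEVENT;                // assign with SDL_RegisterEvents(1) at startup

void pushAnimateEvent(int animationId, int frame) {
  SDL_Event ev;
  SDL_zero(ev);
  ev.type = SDL_ANIMATEEVENT;
  ev.user.data1 = reinterpret_cast<void*>(static_cast<std::intptr_t>(animationId));
  ev.user.data2 = reinterpret_cast<void*>(static_cast<std::intptr_t>(frame));
  SDL_PushEvent(&ev);                          // thread-safe; wakes a thread blocked in SDL_WaitEvent
}

// In the rendering thread, after SDL_ANIMATEEVENT = SDL_RegisterEvents(1):
// SDL_Event e;
// while (SDL_WaitEvent(&e)) {
//   if (e.type == SDL_ANIMATEEVENT) {
//     int id    = static_cast<int>(reinterpret_cast<std::intptr_t>(e.user.data1));
//     int frame = static_cast<int>(reinterpret_cast<std::intptr_t>(e.user.data2));
//     // render this frame; if the animation isn't done, pushAnimateEvent(id, frame + 1)
//   }
// }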
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage to this and maybe it's not worth bothering with (though even games could benefit from going to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
You could actually initialise SDL and the window in the main thread and then create two more threads: one for updates (which just advances game state and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you ensure that events are managed in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to run the three threads mentioned above successfully at the same time.
I had to use a mutex to make sure that textures/surfaces could also be transformed/changed in the update thread, by pausing the render thread; the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works well for event-driven games, real-time games, or both.