C++ Windows pause timer

I'm using SetTimer to create a timer in my application with an interval of 50 ms. This timer is used to draw an object and move it around.
Now I don't want the object to move when the window is out of focus, but I still need it to be painted.
Having it paint every 50 ms seems unnecessary, and performance is extremely important in this project.
So I need a way to pause the timer but still draw the object, preferably only when needed.

When you get WM_ACTIVATE, decide whether to call KillTimer or SetTimer.
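A minimal sketch of that idea, assuming a 50 ms timer whose id is a hypothetical TIMER_ID constant: kill the timer on deactivation, restart it on activation, and repaint once so the object stays visible while the window is unfocused.

```cpp
case WM_ACTIVATE:
    if (LOWORD(wParam) == WA_INACTIVE)
        KillTimer(hwnd, TIMER_ID);           // stop moving while the window is unfocused
    else
        SetTimer(hwnd, TIMER_ID, 50, NULL);  // resume the 50 ms movement timer
    InvalidateRect(hwnd, NULL, TRUE);        // repaint once so the object is still drawn
    return 0;
```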

Just a note aside to your initial problem.
I would not rely on a Windows timer at such a low interval. WM_TIMER is a low-priority message, and if you have lots of heavy-duty work going on in your program, the timer might fire at longer intervals than you expect.

Related

Strategy for creating a fadable child window

I'm attempting to create a window class that supports fading in and out, even for child windows. Basically it adds the WS_EX_LAYERED style to the window, and then it calls SetLayeredWindowAttributes on a timer, gradually changing the alpha value.
That approach is okay, but of course the fading will become temporarily interrupted if there are higher priority messages that come through the thread's message queue. So, for example, if there's some resize event going on somewhere, the fading will slow or temporarily stop.
I'm wondering if there's a strategy to somehow avoid this. So far my only solution is to create the fadable window on its own thread, so the timer messages don't get interrupted by anything. That solution is feasible, but it does add some additional threading complexity, so I was hoping to avoid it if possible. Thanks for any input.
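For reference, a minimal sketch of the timer-driven fading described above, assuming a top-level window (layered child windows require Windows 8 or later) and hypothetical FADE_TIMER_ID and g_alpha names:

```cpp
// One-time setup: make the window layered and start fully transparent.
SetWindowLongPtr(hwnd, GWL_EXSTYLE,
                 GetWindowLongPtr(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
SetLayeredWindowAttributes(hwnd, 0, 0, LWA_ALPHA);
SetTimer(hwnd, FADE_TIMER_ID, 20, NULL);

// In the window procedure: step the alpha up on each tick.
case WM_TIMER:
    if (wParam == FADE_TIMER_ID)
    {
        g_alpha = (g_alpha + 10 > 255) ? 255 : g_alpha + 10;
        SetLayeredWindowAttributes(hwnd, 0, (BYTE)g_alpha, LWA_ALPHA);
        if (g_alpha == 255)
            KillTimer(hwnd, FADE_TIMER_ID);  // fully opaque, stop fading
    }
    return 0;
```

As the question notes, those WM_TIMER messages still go through the thread's message queue, so a busy queue will stutter the fade.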

C++ Win32 realtime painting performance - how to know when the application can paint without using all CPU time

In a display application we use a large window painting area. The application receives so many real-time data updates to paint that painting consumes all of the PC's CPU time. We use InvalidateRect() and then paint the items in the WM_PAINT handler.
So we decided to use a dirty flag for each item, to reduce how much is repainted.
How can we know when the application is able to paint the items without consuming all CPU time? Is there anything that tells us we can do our paint work now?
If the data is updating so fast that painting each update is too much, you can use a timer. Every (say) quarter second, the timer fires, and if any items are dirty, the timer handler calls InvalidateRect(). Updating the data no longer invalidates; only the timer handler does that.
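A minimal sketch of that approach, assuming a hypothetical g_anyItemDirty flag that the data-update code sets and a quarter-second timer created with SetTimer(hwnd, REFRESH_TIMER_ID, 250, NULL):

```cpp
case WM_TIMER:
    if (wParam == REFRESH_TIMER_ID && g_anyItemDirty)
    {
        g_anyItemDirty = false;
        InvalidateRect(hwnd, NULL, FALSE);  // the dirty items get repainted in WM_PAINT
    }
    return 0;
```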
Edit: You could query Windows for the CPU load and if it's low, do the Invalidate immediately; see How to get system cpu/ram usage in c++ on Windows
One method I've used is to make sure that only one paint event is on the event queue at a time. You can use a boolean flag to mark when you begin updating and then reset the flag at the end of the WM_PAINT message (the end of the update process). Of course, if you try to update the window again and the flag is already set, then don't do anything. This will keep extra events from being piled into the queue, which can bog down your system. It looks like you may have thought of this, but do this with the entire update in addition to the individual items. Keep in mind that I'm only thinking of the updating of the windows themselves and not any underlying data.
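A minimal sketch of that flag idea, with hypothetical names; the flag keeps a second repaint request from being queued while one is already pending:

```cpp
bool g_paintPending = false;

void RequestRepaint(HWND hwnd)
{
    if (!g_paintPending)          // only one pending repaint at a time
    {
        g_paintPending = true;
        InvalidateRect(hwnd, NULL, FALSE);
    }
}

// In the window procedure:
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    // ... draw the dirty items ...
    EndPaint(hwnd, &ps);
    g_paintPending = false;       // the update is finished, allow the next request
    return 0;
}
```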
One other thing I had to do was to "pump" (or process) the message queue during my (application) updates because updating a window (in my case) took several messages, ending with the WM_PAINT.
Another thing to watch out for is to not use idle messages for updating your interface. This is a quick and dirty way of having the update happen automatically, but it ends up being a really bad idea because the idling only happens when there are no other events on the message queue. Any time you move the mouse or press keys, those events are placed onto the event queue and cause a "stall" of the update process. The idle events can also come so fast that your application spends most of its CPU time displaying data that hasn't even changed. It's better to have your GUI update only when the underlying data it displays actually changes.
I had data coming in at 60 Hz and was updating lots of lists with columns of data as well as 3D content. I finally had to prioritize the updates: the lists were not updated every cycle, but the 3D data was. Updating the lists at about 1-5 Hz was good enough for me, and combined with the techniques above it resulted in a much more responsive system.

Exact delay in screen draw and keyboard keypress event in Qt

I am working on a Qt project in which exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at certain time say t1. Once I issue the QWidget update to start the animation, it will take a small time dt (depending on screen refresh rates etc.) to actually show the update on screen. I need to measure this extra time dt. I am unsure as to how to do it.
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
Similarly, when the user presses a key it will be registered after a small delay based on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate I know on average how much will the delay be.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock the computer has access to also? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those weren't Real-Time Operating Systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
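A minimal sketch of that measurement, assuming a custom QWidget subclass (the StimulusWidget name is hypothetical): start a QElapsedTimer when update() is requested and read it when paintEvent() runs.

```cpp
#include <QWidget>
#include <QElapsedTimer>
#include <QPaintEvent>
#include <QDebug>

class StimulusWidget : public QWidget
{
public:
    void presentStimulus()
    {
        m_sinceUpdate.start();   // timestamp the repaint request
        update();                // schedule the repaint through the event loop
    }

protected:
    void paintEvent(QPaintEvent *event) override
    {
        if (m_sinceUpdate.isValid())
            qDebug() << "update() -> paintEvent() delay:"
                     << m_sinceUpdate.nsecsElapsed() / 1000 << "us";
        m_sinceUpdate.invalidate();
        // ... draw the stimulus here ...
        QWidget::paintEvent(event);
    }

private:
    QElapsedTimer m_sinceUpdate;
};
```

This only measures the time until your paint code runs; as discussed above, the time until the pixels actually change on the monitor is not observable from software alone.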
it is a reaction time experiment for some studies. A visual input is presented to which the user responds via keyboard or mouse. To be able to find the reaction time precisely I need to know when was the stimulus presented on the screen and when was the key pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness coincidentally with the presentation of the visual stimulus. You attach a fiber optic to the screen, and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you precisely time the latency of the response with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.

How to move objects with Window's C++ Graphics (GDI+) library?

So far, I've only copy/pasted the example that Microsoft has here (but I removed #include ).
I'm trying to figure out how OnPaint is continually called (as to have moving objects), but it doesn't seem to be called more than once.
How do I use the standard Windows C++ graphics library (i.e. GDI+, or another standard Windows API) to create a moving object? Must I call OnPaint myself? Or is there an easy fix to make it continually be called? Or is it just not possible?
The OnPaint() method will only run when Windows thinks that your window needs to be repainted. Which, typically, happens just once when your window is first created. Or when you minimize and restore the window.
To force it to run more than once and animate something, you'll have to tell Windows that a repaint is required. The best way to do so is with a timer; it gives you an animation clock. Set the interval to a number that's a bit less than a multiple of 15.625 milliseconds. 45 msec is a decent value, it gets you 21 updates per second. Assuming that you can paint fast enough. Call InvalidateRect() in the WM_TIMER message handler. Or Invalidate() if you use Winforms.
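A minimal sketch of that timer-driven loop, assuming a hypothetical ANIM_TIMER_ID constant and an x variable holding the object's position:

```cpp
case WM_CREATE:
    SetTimer(hWnd, ANIM_TIMER_ID, 45, NULL);   // roughly 21 updates per second
    return 0;

case WM_TIMER:
    x += 2;                                    // move the object a little
    InvalidateRect(hWnd, NULL, TRUE);          // ask Windows to send WM_PAINT
    return 0;

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);
    Gdiplus::Graphics graphics(hdc);
    Gdiplus::SolidBrush brush(Gdiplus::Color(255, 0, 0, 255));
    graphics.FillEllipse(&brush, x, 50, 30, 30);  // draw the object at its new position
    EndPaint(hWnd, &ps);
    return 0;
}
```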

Qt: what happens if you send out signals too quickly?

Here is the situation:
You have one long-running calculation running in a background thread.
This calculation is sending out a signal to, for example, refresh a GUI element, every 100 msec.
Let's say it sends out 100 such signals.
The widget being redrawn takes more than 100 msec to redraw; let's say 1 second.
What happens in the event loop? Do the signal calls "pile up" until they are all executed (i.e. 100 seconds)? Is there any mechanism for "dropping" events?
User events are never discarded. If you queue emitted signal events faster than you can process them, your event queue will grow until you run out of memory and your program will crash. It's worth noting, though, that QTimer will skip timeout events if the system is under heavy load. To some extent, that may help regulate your throughput.
You could also consider sending feedback from one thread to the other (an acknowledgement, perhaps), and manually adjust your timing in the producer thread based on how far behind the consumer thread is. Or, you could use a metaphorical sledgehammer and switch to a blocking queued connection.
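For the sledgehammer option, a minimal sketch assuming hypothetical Worker and DisplayWidget classes: with Qt::BlockingQueuedConnection the emitting thread blocks until the GUI thread has finished the slot, so signals cannot pile up (at the cost of stalling the calculation).

```cpp
QObject::connect(worker, &Worker::progressUpdated,
                 widget, &DisplayWidget::refresh,
                 Qt::BlockingQueuedConnection);
```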
In your example, you could measure the drawing time in the widget. If the drawing takes for example 240 ms, then you could process the next 2 signals quickly without drawing anything at all. That way the signals wouldn't pile up.
Edit:
Actually there is a slight problem in my solution. The last signal should always cause a redraw, otherwise the widget would show wrong data when the calculation is finished.
When a signal is skipped, a single shot timer could be started for example with a 150 ms interval. When a redraw is done because of a signal, this timer would be stopped. So after the last redraw signal, this single shot timer would cause the drawing of the final state. I guess this would work, but it would be quite complicated.
Starting a simple timer to do the redrawing when the calculation starts would quite probably be a better approach. If the drawing of the widget takes a lot of time, the timer interval could be dynamically adjusted according to the draw time.
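A minimal sketch of that timer-driven approach, with hypothetical names: the background thread only stores the latest result, and a QTimer in the GUI thread repaints at a fixed rate, so redraws can never pile up and the final state is always shown.

```cpp
#include <QWidget>
#include <QTimer>

// Hypothetical data type produced by the calculation.
struct Result { /* ... */ };

class ResultWidget : public QWidget
{
    Q_OBJECT
public:
    explicit ResultWidget(QWidget *parent = nullptr) : QWidget(parent)
    {
        m_redrawTimer = new QTimer(this);
        m_redrawTimer->setInterval(100);              // redraw at most every 100 ms
        connect(m_redrawTimer, &QTimer::timeout,
                this, [this] { update(); });          // schedule a repaint on each tick
        m_redrawTimer->start();
    }

public slots:
    // Connected to the worker's signal with a queued connection; it only stores
    // the latest data, so the GUI never falls behind the calculation.
    void setResult(const Result &r) { m_latest = r; }

private:
    QTimer *m_redrawTimer;
    Result  m_latest;
};
```

For a queued connection the Result type would also need to be registered with qRegisterMetaType(). The drawing time no longer matters much: if a repaint takes longer than the interval, the next timer tick is simply delayed rather than queued up.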