Mouse coordinates lag if CPU usage is not 100% - really weird! - C++

When I make my program use only 0-2% CPU (by removing a CPU-intensive OpenGL function), my mouse coordinates start to lag! And when I use 100% CPU (with the OpenGL function enabled) I get nice, smooth mouse coordinates. Note that the OpenGL function does nothing to my mouse coordinates. Look at the images below, where I recorded my rotation function's values, which are derived from the mouse coordinates:
This is with 100% CPU usage (as it should look):
no lag http://img15.imageshack.us/img15/1304/mousecursorsmoothcoords.png
This is with 2% CPU usage:
lag http://img5.imageshack.us/img5/5514/mousecursorlaggedcoords.png
It is a really annoying problem, because I am using the mouse cursor position to change the rotation angle, and in the case shown in the second image the rotation looks really laggy.
I might be able to write my own interpolation or something, but I want to know what is causing this and how to fix it.
I'm getting the mouse coordinates from the WM_MOUSEMOVE message, and I also tried calling GetCursorPos() on every frame before my rotation code, but it makes no difference.
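For reference, a minimal sketch of the two approaches described above (hwnd and the rotation call are hypothetical placeholders):

    #include <windows.h>
    #include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

    // 1) Event-driven, inside the window procedure:
    //      case WM_MOUSEMOVE:
    //          mouseX = GET_X_LPARAM(lParam);
    //          mouseY = GET_Y_LPARAM(lParam);
    //          break;

    // 2) Polled once per frame, right before the rotation code:
    POINT p;
    if (GetCursorPos(&p)) {
        ScreenToClient(hwnd, &p);                     // screen -> client coords
        // rotationAngle = angleFromMouse(p.x, p.y); // hypothetical helper
    }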
Edit: I noticed that the CPU usage doesn't have to be 100% to get smooth movement; the CPU just needs to be "woken up", and then it stays smooth even at low CPU usage.

Your second graph seems like it is "bunching" updates. Jumps on the Y axis seem to be at a fixed frequency on the X axis.
Speculation:
Maybe power saving is kicking your CPU to/from a lower power state. Is this a laptop, or is CPU power saving enabled in Windows/BIOS (I'm not sure where the setting is)?
As GMan suggested in his comment, maybe it has to do with how many timeslices your app is getting.
Some sort of sleep/timer functionality is regressing to a lower resolution. An example would be the difference between timeGetTime() and QueryPerformanceCounter():
http://www.geisswerks.com/ryan/FAQS/timing.html
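If the third possibility (timer resolution) is the culprit, a minimal sketch like the following can be worth trying; timeBeginPeriod() is a documented Windows API (winmm.lib), but whether it fixes this particular case is an assumption:

    #include <windows.h>
    #include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod, link winmm.lib

    int main()
    {
        // Request 1 ms system timer granularity for the lifetime of the app.
        // At low CPU load the default coarse timer can make message delivery
        // (including WM_MOUSEMOVE) arrive in bunches.
        timeBeginPeriod(1);

        // ... create the window, run the message/render loop ...

        timeEndPeriod(1);   // restore the previous timer resolution
        return 0;
    }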

You might be able to get better information about the mouse motion by using the GetMouseMovePointsEx() API.
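A sketch of how that API is typically called from a WM_MOUSEMOVE handler; OnMouseMove and the interpolation step are placeholders:

    #include <windows.h>
    #include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

    // Called from the window procedure on WM_MOUSEMOVE. Recovers up to 64
    // intermediate points the cursor passed through between messages.
    void OnMouseMove(LPARAM lParam)
    {
        MOUSEMOVEPOINT seed = {};
        seed.x = GET_X_LPARAM(lParam) & 0xFFFF;   // low word, per the API docs
        seed.y = GET_Y_LPARAM(lParam) & 0xFFFF;
        seed.time = (DWORD)GetMessageTime();

        MOUSEMOVEPOINT pts[64];
        int n = GetMouseMovePointsEx(sizeof(MOUSEMOVEPOINT), &seed, pts,
                                     64, GMMP_USE_DISPLAY_POINTS);
        for (int i = n - 1; i >= 0; --i) {
            // pts[] is newest-first; walk it oldest-first and feed each
            // position into the rotation/interpolation code.
        }
    }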
Side note: for some reason I can only see your first graphic.


OSX pushing pixels to screen with minimum latency

I'm trying to develop some very low-latency graphics applications and am getting really frustrated by how long it takes to draw to screen through OpenGL. Every discussion I find about it online addresses optimizing the OpenGL pipeline, but doesn't get anywhere near the results that I need.
Check this out:
https://www.dropbox.com/s/dbz4bq67cxluhs7/MouseLatency.MOV?dl=0
You have probably noticed this before: with a C++ OpenGL app, dragging the mouse around the screen and drawing the mouse location in OpenGL, the OpenGL rendering lags behind by 3 or 4 frames. Clearly OS X CAN draw [the cursor] to the screen with very low latency, but OpenGL is much slower. So let's say I don't need to do any fancy OpenGL rendering. I just want to push pixels to the screen somehow. Is there a way for me to bypass OpenGL completely and draw to the screen faster? Or is this kind of functionality going to be locked inside the kernel somewhere where I can't reach it?
datenwolf's answer is excellent. I just wanted to add one thing to this discussion regarding triple buffering at the compositor level, since I am very familiar with the Microsoft Windows desktop compositor.
I know you are asking about OS X here, but the implementation details I am going to discuss are the most sensible way of implementing this stuff and I would expect to see other systems work this way too.
Triple buffering as you might enable at the application level adds a third buffer to the swap-chain that is synchronized to refresh. That way of doing triple buffering does add latency, because that third buffer has to be displayed and nothing is allowed to touch it until this happens (this is D3D's mandated behavior -- the behavior and feature itself are undefined in OpenGL); but the way the Desktop Window Manager (Windows) works is slightly different.
The behavior I have seen most drivers implement for desktop composition is frame dropping. In any situation where multiple frames are finished between refreshes, all but one of those frames are discarded. You actually get lower latency using a window rather than fullscreen + triple buffering, because windowed mode does not block buffer swaps while the third buffer (owned by the compositor) has a finished frame waiting to be displayed.
It creates a whole different set of visual issues if framerate is not reasonably consistent. Technically, pixels belonging to dropped frames have infinite latency, so the benefits from latency reduction done this way might be worthless if you needed every single frame drawn to appear on screen.
I believe you can get this behavior on OS X (if you want it) by disabling VSYNC and drawing in a window. VSYNC basically only serves as a form of frame pacing (trade latency for consistency) in this scenario and tearing is eliminated by the compositor itself regardless what rate you draw at.
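For a CGL-based context on OS X, disabling VSYNC looks roughly like this (a sketch; an NSOpenGL-based app would set NSOpenGLCPSwapInterval on its NSOpenGLContext instead):

    #include <OpenGL/OpenGL.h>   // CGL, OS X only

    // Disable VSYNC on the current CGL context. In a window, the compositor
    // still prevents tearing regardless of the rate you draw at.
    GLint swapInterval = 0;
    CGLSetParameter(CGLGetCurrentContext(), kCGLCPSwapInterval, &swapInterval);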
Regarding mouse cursor latency:
The cursor in any modern window system will always track with minimum latency. There is literally a feature on graphics hardware called a "hardware cursor": the driver stores the cursor position and, once per refresh, has the hardware overlay the cursor on top of whatever is sitting in the framebuffer waiting to be scanned out. So even if your application is drawing at 30 FPS on a 60 Hz display, the cursor is updated every 16 ms when the hardware cursor is used.
This bypasses all graphics APIs altogether, but is quite limited (e.g. it uses the OS-defined cursor).
TL;DR: Latency comes in many forms.
If your problem is input latency, then you can mitigate that by reducing the number of pre-rendered frames and avoiding triple buffering. I could not begin to tell you how to reduce the number of driver pre-rendered frames on OS X.
Minimize length of time before something shows up on screen
If your problem is the amount of time that passes between executions of your render loop, you would go the other way. Increase pre-rendered frames, draw in a window and disable VSYNC. You may run into a lot of frames that are drawn but never displayed in this scenario.
Minimize time spent blocking (increase FPS); some frames will never be displayed
Pre-rendered frames are a powerful little feature that you do not get control over at the OpenGL API level. It sets up how deeply the driver is allowed to pipeline everything and depending on the desired task you will trade different types of latency by fiddling with it. Many gamers swear by setting this value to 1 to minimize input latency at the cost of overall framerate "smoothness."
UPDATE:
Pre-rendered frames are one reason for your multi-frame delay. Fixing this in a cross-platform way is difficult (it's a driver setting), but if you have access to Fence Sync Objects you can produce the same behavior as forcing this to 1.
I can explain this in more detail if need be, the general idea is that you insert a fence sync after the buffer swap and then wait for it to be signaled before the first command in the next frame is allowed to begin. Performance may take a nose dive, but latency will be minimized since the CPU won't be rendering ahead of the GPU anymore.
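A sketch of that idea using sync objects (ARB_sync, core since OpenGL 3.2); the header and timeout value are assumptions:

    #include <GL/glcorearb.h>   // or your loader's header (GLEW, glad, ...)

    // Immediately after the buffer swap, drop a fence into the command stream:
    GLsync frameFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    // At the start of the next frame, block the CPU until the GPU reaches it,
    // so the driver cannot queue up additional pre-rendered frames:
    glClientWaitSync(frameFence, GL_SYNC_FLUSH_COMMANDS_BIT,
                     1000000000ull);   // 1 second timeout, in nanoseconds
    glDeleteSync(frameFence);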
There are a number of latencies at play here.
Input event → drawing state latency
In your typical interactive application you have an event loop that usually goes:
collect user input
process user input
determine what's to be drawn
draw to the back buffer
swap back to front buffer
With the usual ways in which event–update–display loops are written, there is almost no delay between step 5 of one iteration and step 1 of the next, which means that steps 2, 3, and 4 operate on data that lags about one frame period behind.
So this is the first source of latency.
Triple buffering / composition latency
Many graphics pipelines enable triple buffering for smoother display updates. Instead of keeping only a back and a front buffer around, there is also a third buffer in between. The average rate at which these buffers are drawn to is the display refresh period, and the buffers themselves are stepped at exactly the display refresh period. So this adds another frame period of latency.
If you're running on a system with a window compositor (which is the default on Mac OS X) this effectively adds another buffer stage: with a double-buffered mode you get triple buffering, and with a triple-buffered mode you get "quad" buffering (quotes here, because quad buffering is a term usually used for stereoscopic rendering).
What can you do about this:
Turn off composition
Windows (through the DWM API) and Mac OS X allow you to turn off composition or bypass the compositor.
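On Windows Vista/7, for example, a sketch of turning composition off process-wide looks like this (the call still exists on Windows 8+ but is ignored there, since composition is always on):

    #include <windows.h>
    #include <dwmapi.h>   // link dwmapi.lib

    // Disable desktop composition while this process runs; the previous
    // state is restored automatically when the process exits.
    DwmEnableComposition(DWM_EC_DISABLECOMPOSITION);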
Reducing input lag
Try to collect and integrate the user input as late as possible (use high-resolution sleeps). If you've got only a very simple scene you can push the drawing quite close to the V-Sync deadline; in fact, the NVIDIA OpenGL implementation has a vendor-specific extension (WGL_NV_delay_before_swap) that allows a program to sleep until a specified amount of time before the next V-Sync.
If your scene is complex but separable into parts that need low-latency user input and parts where it doesn't matter so much, you can draw the higher-latency stuff earlier and integrate the user input only at the very last moment, as in the sketch below. Of course, if the mouse is used to control the viewing direction, or worse, you're rendering for a VR head-mounted display, things get difficult.
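A sketch of that ordering; every helper here (drawBackground, pollMouse, drawCursorLayer, present) is a hypothetical stand-in for your engine's equivalents:

    struct MousePos { int x, y; };

    // Hypothetical helpers, declared only so the sketch is self-contained.
    void drawBackground();
    MousePos pollMouse();
    void drawCursorLayer(int x, int y);
    void present();

    void renderFrame()
    {
        drawBackground();            // latency-tolerant work, done early

        MousePos m = pollMouse();    // sample input at the last possible moment
        drawCursorLayer(m.x, m.y);   // only this part uses fresh input

        present();                   // swap immediately afterwards
    }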

OpenCV C++: record video when motion is detected from a cam

I am attempting to use a straightforward motion detection code to detect movement from a camera. I'm using the OpenCV library and I have some code that takes the difference between two frames to detect a change.
I have the difference frame working just fine and it's black when no motion is present.
The problem is how I can now detect that blackness in order to stop recording, and detect its absence to begin recording frames.
Thank you all.
A very simple thing to do is to sum the entire diff image into an integer. If that sum is above a threshold, you have movement. Then you can use a second threshold: when the sum falls below that limit, the movement has stopped.
You can also make a threshold crossing change the program state only if some time has elapsed since the last one, i.e. after movement is detected you don't check for lack of movement for 10 seconds.
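A minimal sketch of the two-threshold idea; both threshold values are assumptions to tune for your camera and resolution, and the diff frame is assumed to be grayscale:

    #include <opencv2/opencv.hpp>

    // diff is the grayscale difference frame; recording is the current state.
    bool updateMotionState(const cv::Mat& diff, bool recording)
    {
        double activity = cv::sum(diff)[0];      // total intensity of the diff
        const double startThreshold = 500000.0;  // assumed; tune for your setup
        const double stopThreshold  = 100000.0;

        if (!recording && activity > startThreshold) return true;   // start
        if ( recording && activity < stopThreshold ) return false;  // stop
        return recording;   // otherwise keep the current state
    }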
Take a look at the code of the free software Motion for inspiring ideas.
There are quite a few things to keep in mind for reliable motion detection, for example tolerating slow changes in sunlight over the day, or accepting momentary image glitches, which are especially common with the cheapest cameras.
From the small amount of experience I have had, I think that rather than just adding up all the differences, it works better to count the number of pixels whose variation exceeds a certain threshold, as in the sketch below.
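A sketch of that variant, assuming a single-channel diff frame (both threshold values are assumptions):

    #include <opencv2/opencv.hpp>

    // Count pixels that changed by more than a per-pixel threshold; this is
    // more robust to uniform sensor noise than summing raw differences.
    bool hasMotion(const cv::Mat& diff)
    {
        cv::Mat mask = diff > 25;        // per-pixel threshold (assumed value)
        int changed = cv::countNonZero(mask);
        return changed > 500;            // pixel-count threshold (assumed value)
    }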
Motion also offers masks, which let you, for example, ignore movement on a nearby road.
What about storing a black frame internally and using your same comparison code? If your new frame is different (above a threshold) from the all-black frame, start recording.
This seems the most straightforward since you already have the image-processing algorithms down.

OpenGL 2D agents animation and scenario refresh

I'm trying to make a little 2D game to learn and to improve my programming skills. I'm programming the game in C++/C using OpenGL 3.0 with GLUT.
I'm confused about some important concepts regarding animation and scenario refresh:
1. Is it good practice to load all the textures only when the level begins?
2. I chose a frame rate of 40 fps; should I redraw the whole scenario and all the agents every frame, or only the modifications?
3. In an agent animation, should I redraw the entire agent, or only the parts that changed since the last frame?
4. If some part of the scene changes (a wall or something similar is destroyed), do I need to redraw the entire scene, or only the part that changed?
Right now my "game" runs at a framerate of 40 fps, but it has a flickering effect that looks really weird.
Yes, creating and deleting textures/buffers every frame is a huge waste.
It's almost always cheaper to just redraw the entire scene. GPUs are built to do this, it's very fast.
Reading the framebuffer from VRAM back to regular RAM and calculating the difference is going to be much slower, especially since OpenGL doesn't keep track of your "objects", it just takes a triangle at a time, rasterizes it, then forgets about it.
Depends on how you define the animation. If you're talking about sprite-like animation, where each frame is a separate image, then it's cheapest to just refer to the new texture and redraw.
If you've got a texture atlas, update the texture coordinates and redraw; and since you're using shaders (you pretty much have to if you want OpenGL 3.0), you might be able to get away with a uniform that offsets the texture coordinates, as sketched below.
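A sketch of the uniform-offset trick; the shader is assumed to declare "uniform vec2 uTexOffset;" and add it to the incoming texture coordinates, and all names here are hypothetical:

    // Animate a sprite by sliding the atlas window with a uniform instead of
    // re-uploading vertex data each frame.
    GLint loc = glGetUniformLocation(program, "uTexOffset");
    float cellW = 1.0f / framesPerRow;   // normalized width of one atlas cell

    glUseProgram(program);
    glUniform2f(loc, currentFrame * cellW, 0.0f);
    // ... then draw the sprite quad as usual ...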
Yeah, as I said before, the hardware is built to clear the screen and redraw everything.
And for a framerate, you should be using the monitor's refresh rate to avoid vertical tearing. Pretty much all monitors now are 60Hz, so 60fps is the most common "target" framerate.
Choose either 30 or 60 fps, as most modern monitors refresh at a 60 Hz rate. That way you have either 2 or 1 rendered frames per "monitor frame", which should reduce flickering effects. (I'm not 100% sure this is what you mean by the flickering effect.)
Regarding all the other questions (which sound pretty much the same): in OpenGL rendering, redrawing everything is pretty common, as in most games almost the entire screen changes every frame, for example when you're moving around. You could do a partial screen update, but it's very uncommon and more expensive on the CPU side, as you have to compute which parts to draw instead of just drawing everything.
1. Yes.
2-4. Yes. Hopefully the following helps you understand why you must:
Imagine you have 2 pieces of paper. On the first paper you draw a stick man standing still, and show it to somebody.
While they are looking at the first paper, you draw the same thing again on the second paper, but this time you move the arm a little bit.
Now you show them the second paper; as they look at it, you erase the first paper and draw the man with his arm moved a little bit more.
This is pretty much how double-buffered rendering works, and it is the reason you must always render the entire image, regardless of whether anything has changed.
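In code, the paper analogy is just an ordinary double-buffered display callback; a sketch assuming GLUT (which the question uses) with GLUT_DOUBLE requested at init, and a hypothetical drawScene():

    // Registered with glutDisplayFunc(); runs once per frame.
    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT);   // wipe the hidden sheet of paper
        drawScene();                    // redraw everything onto it
        glutSwapBuffers();              // show it; the buffers trade places
    }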

Constant lag in OpenGL application

I'm getting some repeating lag spikes in my OpenGL application.
I'm using the Win32 API to create the window, and I'm also creating a 2.2 context.
So the main loop of the program is very simple:
Clearing the color buffer
Drawing a triangle
Swapping the buffers.
The triangle is rotating, which is how I can see the lag.
Also, my frame time isn't smooth, which may be the problem.
But I'm very, very sure the delta-time calculation is correct, because I've tried plenty of ways.
Do you think it could be a graphics driver problem?
I ask because a friend of mine runs almost exactly the same program, except that I do fewer calculations and use the standard OpenGL shader.
Also, his program uses more CPU power than mine, and its CPU usage is smoother than mine.
I should also add:
On my laptop I get the same lag every ~1 second, so I can see some kind of pattern.
There are many reasons for a jittery frame rate. Off the top of my head:
Not calling glFlush() at the end of each frame
Other running software interfering
Doing things in your code that certain graphics drivers don't like
Bugs in the graphics drivers
Using the standard Windows time functions, with their terrible resolution
Try these:
Kill as many running programs as you can get away with. Use the Processes tab in the Task Manager (Ctrl+Shift+Esc) for this.
Bit by bit, reduce the amount of work your program is doing and see how that affects the frame rate and the smoothness of the display.
If you can, try enabling/disabling vertical sync (you may be able to do this in your graphics card's settings) to see if that helps.
Add some debug code to output the time taken to draw each frame, and see if there are anomalies in the numbers, e.g. every 20th frame taking an extra 20 ms, or random frames taking 100 ms; a minimal timing sketch follows this list.
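Such timing code might look like the following, using QueryPerformanceCounter (the 20 ms cutoff for flagging outliers is just an assumption):

    #include <windows.h>
    #include <cstdio>

    static LARGE_INTEGER freq, prev;

    void initFrameTimer()
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&prev);
    }

    // Call once per frame, e.g. right after SwapBuffers().
    void logFrameTime()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double ms = 1000.0 * (double)(now.QuadPart - prev.QuadPart)
                           / (double)freq.QuadPart;
        prev = now;
        if (ms > 20.0)   // assumed threshold for "anomalous" frames
            printf("slow frame: %.2f ms\n", ms);
    }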

box2d + cocos2d: Why is there a delay when manipulating objects in Box2D using mouseJoint?

When I drag an object in my game, the object is never directly under the finger. There is a lag/delay that I cannot get rid of: the object follows my finger instead of being directly underneath it. You can try it in the Testbed as well: try moving an object really fast, and the object is never underneath the mouse/finger.
Is this a weakness of Box2D, or am I missing something obvious?
Thanks in advance.
That's because a mouse joint is similar to a distance joint (a spring). There is a maxForce parameter you can specify to minimize the delay by making the spring stiffer.
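A sketch of a stiffer mouse joint (Box2D as bundled with cocos2d); groundBody, draggedBody, the touch-point variables and the force multiplier are placeholders to tune:

    // Create the joint once, when the touch begins:
    b2MouseJointDef def;
    def.bodyA    = groundBody;          // any static body works as the anchor
    def.bodyB    = draggedBody;         // the body under the finger
    def.target   = touchPointInWorld;   // b2Vec2 in world (meter) coordinates
    def.maxForce = 10000.0f * draggedBody->GetMass();   // stiffer = less lag
    b2MouseJoint* joint = (b2MouseJoint*)world->CreateJoint(&def);

    // Then, on every touch-move event:
    joint->SetTarget(newTouchPointInWorld);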
EDIT:
Also, you can move your object directly by setting its position to your finger's position. But if the object then collides with something, it will exhibit non-physical behavior, because the velocity of the body will be zero.
So to move it correctly (if there will be collisions) you should set its velocity or acceleration (as the mouse joint does). But evaluating your finger's velocity takes some time, so some delay will remain.
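For completeness, a sketch of the direct (non-physical) approach described above; draggedBody and touchPointInWorld are placeholders:

    // Teleport the body to the finger. Contacts will look wrong because the
    // body carries no velocity into the collision.
    draggedBody->SetTransform(touchPointInWorld, draggedBody->GetAngle());
    draggedBody->SetLinearVelocity(b2Vec2(0.0f, 0.0f));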
Most of it has to do with latency in the hardware. Even if your timings are completely perfect, there will be 16 ms of lag caused by the iPhone's GPU and ~20 ms of lag from the touchscreen, plus however long your scene takes to process. Those add up to anywhere between 36 and 70 ms of lag. Also, Box2D applies a small amount of damping to the mouse joint, for the stability of the physics simulation.