OpenGL - "ultra smooth" animation of simple horizontally moving object - c++

I just want to do a simple animation (for example in C++ using OpenGL) of some moving object - let's say simple horizontal movement of a square from left to right.
In OpenGL, I can use the "double-buffering" method, and let's say the user running my application has turned "vertical sync" on - so I can call some function every time the monitor refreshes (I can achieve that, for example, using the Qt toolkit and its "swapBuffers" function).
So I think the "smoothest" animation I can achieve is to move the square by, for example, 1 pixel (it can be another value) every time the monitor refreshes, so at each "frame" the square is 1 pixel further - I have tested this, and it really does work smoothly.
But the problem arises when I want a separate thread for the "game logic" (moving the square by 1 pixel to the right) and for the "animation" (displaying the current position of the square on the screen). Say the game logic thread is a while loop where I move the square by 1 pixel and then sleep for some time, for example 10 milliseconds, while my monitor refreshes every 16 milliseconds. The movement of the square won't be 100% smooth, because sometimes the monitor will refresh twice while the square has moved by only 1 pixel instead of 2 (the monitor and the game logic thread run at two different frequencies) - and the movement will look a little jerky.
So, logically, I could stay with the first, super-smooth method, but it cannot be used in, for example, multiplayer (server-client) games, because different computers have different monitor frequencies (so I would have to use different threads for the game logic (on the server) and for the animation (on the clients)).
So my question is:
Is there some method, using different threads for game logic and animation, that gives 100% smooth animation of a moving object? If one exists, please describe it here. Or, if I had a more complex scene to render, would I simply stop noticing the slightly jerky movement that I see now, when I move a simple square horizontally and concentrate on it deeply? :)

Well, this is actually typical behavior of a decoupled game loop. You manage all your physics (movement) related actions in one thread and let the render thread do its work. This is actually desirable.
Don't forget that the point of implementing the game loop this way is to get the maximum available frame rate while preserving a constant physics speed. At higher FPS you won't see this effect at all, unless there is some other code-related problem - some coupling between frame rate and physics, for example.
If you want to achieve what you describe as perfect smoothness, you could synchronize your physics engine with VSync. Simply do all your physics BEFORE the refresh kicks in, then wait for the next one.
But all this applies only to constant-speed objects. If you have an object with dynamic speed, you can never know when to draw it to be "in sync". The same problem arises when you want multiple objects with different constant speeds.
Also, this is NOT what you want in complex scenes. The whole idea of V-sync is to limit the screen tearing effect. You should definitely NOT hook your physics or rendering code to the display refresh rate. You want your physics code to run independently of the user's display refresh rate. This could be a REAL pain in multiplayer games, for example. For a start, look at this page: How A Game Loop Works
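As a rough illustration only (a sketch in the spirit of the article above, not its exact code), a decoupled loop with a fixed physics timestep and render-time interpolation could look something like this; the physics and render bodies here are placeholders you would replace with your own:

    #include <chrono>
    #include <cstdio>

    // Placeholder game state and hooks - substitute your own physics and rendering.
    static double squareX = 0.0;
    static void updatePhysics(double dt) { squareX += 60.0 * dt; }            // move 60 px/s in fixed steps
    static void renderFrame(double x)    { std::printf("draw square at x=%.2f\n", x); }

    int main()
    {
        using clock = std::chrono::steady_clock;

        const double fixedStep = 1.0 / 100.0;   // physics always advances by this step
        double accumulator = 0.0;
        double previousX   = squareX;
        auto   lastTime    = clock::now();

        for (int frame = 0; frame < 300; ++frame)   // stand-in for "while the window is open"
        {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - lastTime).count();
            lastTime = now;

            // Run as many fixed physics steps as the elapsed real time demands.
            while (accumulator >= fixedStep)
            {
                previousX = squareX;
                updatePhysics(fixedStep);
                accumulator -= fixedStep;
            }

            // Render once per loop iteration (with vsync on, that is once per refresh).
            // Interpolating between the last two physics states hides the mismatch
            // between the physics rate and the monitor's refresh rate.
            double alpha = accumulator / fixedStep;
            renderFrame(previousX + (squareX - previousX) * alpha);
        }
        return 0;
    }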
EDIT:
I'd say your vision of perfect smoothness is unrealistic. You can mask the problem using the techniques Kevin wrote about, but you will always struggle with hardware limits such as refresh rate or display pixelation. For example, say you have a window of 640x480 px and you want your object to move along a vector heading towards the bottom-right corner. Then you must increment the object's coordinates by a non-integer ratio (640/480), but rendering snaps to integer pixels, so your object moves jaggedly. There is no way around this. At low speeds you can notice it. You can blur it, or make it move faster, but you can never get rid of it...

Allow your object to move by fractions of a pixel. In OpenGL, this can be done for your example of a square by drawing the square onto a texture (with a one-pixel or larger border) rather than letting its edge be just the polygon edge. If you are rendering 2D sprite graphics, then you get this pretty much automatically (but if you have 1:1 pixel art it will be blurred/sharp/blurred as it crosses pixel boundaries).
Smooth (antialias) the polygon edge (GL_POLYGON_SMOOTH). The problem with this technique is that it does not work with Z-buffer-based rendering since it causes transparency, but if you are doing a 2D scene you can make sure to always draw back-to-front.
Enable multisample/supersample antialiasing, which is more expensive but doesn't have the above problem (a minimal setup for this and for GL_POLYGON_SMOOTH is sketched below, after these suggestions).
Make your object have a sufficiently animated appearance that the pixel shifts aren't easy to notice because there's much more going on at that edge (i.e. it is itself moving in place at much more than 1 pixel/frame).
Make your game sufficiently complex and engrossing that players are distracted from looking at the pixels. :)
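If you want to try the smoothing/multisampling suggestions above, a minimal setup sketch in legacy OpenGL looks like this (it assumes a GL context already exists, and GL_MULTISAMPLE additionally requires GL 1.3+ headers and a framebuffer created with sample buffers, e.g. QGLFormat::setSampleBuffers(true) in Qt or glutInitDisplayMode(GLUT_MULTISAMPLE) in GLUT):

    #include <GL/gl.h>

    // Call once after the GL context is created.
    void enableEdgeSmoothing()
    {
        // Polygon-edge smoothing: needs blending, and back-to-front drawing order.
        glEnable(GL_POLYGON_SMOOTH);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Multisample antialiasing: the more expensive but more general option.
        glEnable(GL_MULTISAMPLE);
    }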

Related

Does drawing a rectangle on the screen using RenderDrawRect take the same length of time as filling in every required pixel using RenderDrawPoint?

Let's say I wanted to draw a 50px by 60px rectangle in SDL2 starting from the point (0,0). Is it faster to call SDL_RenderDrawRect (renderer, SDL_Rect structure) than to fill in every individual pixel using a nested for loop and calling SDL_RenderDrawPoint?
Or do both operations take the same length of time (which is what I think would happen)? I tried looking at the SDL source code, although I had difficulty fully understanding the functions for rendering.
Yes, I would absolutely expect the single rectangle call to be faster.
Even if there were no hardware acceleration going on, there's more overhead in doing one function call per pixel. Think of just computing the address inside the surface where each pixel is going to be written: the pixel-at-a-time approach needs to compute it fresh every time, while the rectangle code can most likely re-use the last value it computed for the vast majority of writes. These things matter.
But there very likely is hardware acceleration, so the difference in performance can be great.
Always use the highest-level API function you can, to give the library the most leverage for optimization and acceleration.
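For illustration, here is a hedged sketch of the two approaches side by side, assuming an SDL_Renderer* has already been created (SDL_RenderFillRect is used for the filled case, since SDL_RenderDrawRect only draws the outline):

    #include <SDL.h>

    void drawWithOneCall(SDL_Renderer* renderer)
    {
        SDL_Rect rect{0, 0, 50, 60};
        SDL_RenderFillRect(renderer, &rect);     // one call, one batched operation
    }

    void drawPixelByPixel(SDL_Renderer* renderer)
    {
        for (int y = 0; y < 60; ++y)
            for (int x = 0; x < 50; ++x)
                SDL_RenderDrawPoint(renderer, x, y);   // 3000 calls, 3000x the call overhead
    }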

Collision resolution issues with circles

I have a small application I built where there are a few balls on a blank background. They all start flying through the air and use the physics I wrote to bounce accurately and have realistic collision responses. I am satisfied with how it looks, except for one issue: when my balls land directly on top of each other, they attach together and float directly up.
Here are the functions involved
https://gist.github.com/anonymous/899d6fb255a85d8f2102
Basically if the Collision function returns true, I use the ResolveCollision to change their velocities accordingly.
I believe the issue comes from the slight re-positioning I do in ResolveCollision(). If they collide, I move them back a frame or so in position so that they are not still intersecting on the next frame. However, when they are directly on top of each other, they bounce off each other with such tiny bounces that eventually stepping back a frame isn't enough to unhook them.
I'm unsure if this is the problem and if it is, then what to do about it.
Any help would be awesome!
The trick is to ignore the collision if the circles are moving away from each other. This works so long as your timestep is small enough relative to their velocities (i.e. the circles can't pass through each other in a single frame).
When the circles first collide, you will adjust their velocity vectors so their relative velocity vector pushes them apart (because a collision will do that). After that, any further collisions are spurious because the circles will be moving apart, and will eventually separate completely. So, just ignore collisions between objects that are moving apart, and you should be fine.
(I've implemented such an algorithm in a 3D screensaver I wrote, but the algorithm is entirely dimension-agnostic and so would work fine for 2D circles).
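A minimal sketch of that check, using hypothetical Ball/Vec2 structs (substitute your own types); the key line is the dot product between the separation vector and the relative velocity:

    #include <cmath>

    struct Vec2 { float x, y; };
    struct Ball { Vec2 pos; Vec2 vel; float radius; };

    // Returns true only if the circles overlap AND are still moving towards each other.
    bool shouldResolveCollision(const Ball& a, const Ball& b)
    {
        Vec2 delta  { b.pos.x - a.pos.x, b.pos.y - a.pos.y };   // from a to b
        Vec2 relVel { b.vel.x - a.vel.x, b.vel.y - a.vel.y };

        float distSq = delta.x * delta.x + delta.y * delta.y;
        float radSum = a.radius + b.radius;
        if (distSq > radSum * radSum)
            return false;                        // not touching at all

        // If the relative velocity points away along the separation axis,
        // the circles are already moving apart - ignore this (spurious) collision.
        float approaching = delta.x * relVel.x + delta.y * relVel.y;
        return approaching < 0.0f;
    }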

Can I access a camera frame in two functions running in parallel?

I am working on a face detection/recognition project in OpenCV C++. The code runs really slowly; there is a lag between the real camera feed and the processed feed, and I don't want that lag to be visible to the user.
So can I have a function which just reads a frame from the camera and displays it, while all the detection/recognition work is done in other functions running in parallel?
I also want my results to be visible on the screen (a box around the face with the necessary details), so can I pass this data between functions? Can I create a vector of the Rect datatype which contains all this rectangle data, and which can be accessed by all the functions to push new faces and to display them?
I am just searching for a solution to this problem. I know little about parallel computing, so if there is any other alternative, please give details.
thanks
Rishi
Yes, you need to run the face detection and recognition code in a separate thread, and first you need to copy the frame in order to use it on another thread.
Using a vector of Rect will be convenient, but you need to lock a mutex whenever you use the vector, to prevent problems with parallel access to the same data. You also need to lock a mutex while copying the frame.
I should note that if your face detection and recognition code runs very slowly, it will never give you up-to-date results: the rectangles will be displaced.
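A rough sketch of that layout (all names are made up, and the cascade file path is just a placeholder): the main loop grabs and shows frames, a worker thread runs detection on a copy of the latest frame, and one mutex guards both the shared frame and the shared rectangles:

    #include <opencv2/opencv.hpp>
    #include <atomic>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex            sharedMutex;
    cv::Mat               latestFrame;     // written by the main loop, read by the worker
    std::vector<cv::Rect> latestFaces;     // written by the worker, read by the main loop
    std::atomic<bool>     running{true};

    void detectionWorker(cv::CascadeClassifier& detector)
    {
        while (running)
        {
            cv::Mat frameCopy;
            {
                std::lock_guard<std::mutex> lock(sharedMutex);
                if (!latestFrame.empty())
                    latestFrame.copyTo(frameCopy);          // copy while holding the lock
            }
            if (frameCopy.empty())
                continue;

            cv::Mat gray;
            cv::cvtColor(frameCopy, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> faces;
            detector.detectMultiScale(gray, faces);         // the slow work, outside the lock

            std::lock_guard<std::mutex> lock(sharedMutex);
            latestFaces = faces;
        }
    }

    int main()
    {
        cv::VideoCapture cap(0);
        cv::CascadeClassifier detector("haarcascade_frontalface_default.xml");  // placeholder path
        std::thread worker(detectionWorker, std::ref(detector));

        cv::Mat frame;
        while (cap.read(frame))
        {
            {
                std::lock_guard<std::mutex> lock(sharedMutex);
                frame.copyTo(latestFrame);
                for (const cv::Rect& r : latestFaces)       // draw the most recent results
                    cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
            }
            cv::imshow("camera", frame);
            if (cv::waitKey(1) == 27) break;                // Esc quits
        }
        running = false;
        worker.join();
        return 0;
    }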
First of all, note one thing - there will always be some lag. Even if you just display the video from the camera (without any processing), it will be a bit delayed.
It's also important to optimize the face detection process itself; parallel computing won't fix all your problems. Here I've written a bit about that (but it's mostly about eye detection within a face). Another technique worth trying is checking whether the region (part of the image) in which you found a face in the last frame has changed or not. The general idea is quite simple - subtract that region of the new (current) frame from the same region of the old (previous) frame. Then apply a binary threshold operation to the resulting image (you need to find the threshold value on your own by trying different values - I'm not sure, but I think I used something around 30; don't use too small a value, because there is always some difference between two frames due to noise and small changes in lighting etc.). Then count all the non-zero pixels, divide that number by the number of all pixels in the region ( = width * height ) and multiply by 100. This number is the percentage of changed pixels. If this value is small, you don't have to analyze the current frame; you can just assume that the results of the analysis from the previous frame are still valid. Note that this technique only works well if the background isn't changing quickly (like, for example, trees or water).
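A rough sketch of that region-change test in OpenCV, assuming two consecutive grayscale frames and the ballpark threshold of 30 mentioned above (tune both numbers for your own camera):

    #include <opencv2/opencv.hpp>

    // Returns true if the given region has changed enough to be worth re-analyzing.
    bool regionChanged(const cv::Mat& prevFrame, const cv::Mat& currFrame,
                       const cv::Rect& faceRegion, double maxChangedPercent = 5.0)
    {
        cv::Mat diff, mask;
        cv::absdiff(currFrame(faceRegion), prevFrame(faceRegion), diff);
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);

        double changedPercent =
            100.0 * cv::countNonZero(mask) / static_cast<double>(faceRegion.area());
        return changedPercent > maxChangedPercent;   // small value -> reuse last frame's result
    }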

Simulating black hole / whirlpool behaviour for sprites

One of the powerups in my game is a vortex that attracts all coins. I know I have all of cocos2d's moveto/bezierto methods available, but I don't know how to make them produce both tangential and radial speed.
The extra difficulty is that the vortex center can change in every step, so all movement has to be readjusted.
One way to achieve this without a physics engine is to use the rotation around point algorithm.
That covers the rotation around the vortex center. Once an object is rotating around the vortex, all you need to do is reduce that object's distance from the center by a certain amount every frame. That way it will keep moving inwards.
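A minimal sketch of that per-frame step, using plain structs rather than cocos2d types (all names here are made up): rotate the coin's offset around the current vortex center, then shrink the distance a little:

    #include <cmath>

    struct Point { float x, y; };

    Point stepCoin(Point coin, Point vortexCenter,
                   float angularSpeed, float pullSpeed, float dt)
    {
        // Offset of the coin from the (possibly moving) vortex center.
        float dx = coin.x - vortexCenter.x;
        float dy = coin.y - vortexCenter.y;

        // Tangential component: rotate the offset by a small angle.
        float angle = angularSpeed * dt;
        float rx = dx * std::cos(angle) - dy * std::sin(angle);
        float ry = dx * std::sin(angle) + dy * std::cos(angle);

        // Radial component: pull the coin a little closer to the center.
        float dist = std::sqrt(rx * rx + ry * ry);
        if (dist > 0.0f)
        {
            float newDist = std::fmax(0.0f, dist - pullSpeed * dt);
            rx *= newDist / dist;
            ry *= newDist / dist;
        }
        return { vortexCenter.x + rx, vortexCenter.y + ry };
    }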
The only tricky part then is to get the object from its initial position being "sucked into" the vortex. There's going to be a lot of tweaking needed. With a physics engine, that part would come natural from the physics itself and it would always look right.
This is not guaranteed for the manual solution and definitely not for actions, which aren't designed to track moving targets. For example, if you change a move action every frame by replacing the existing one with a new one, your object won't move at all. Every time you do that, there's a 1-frame delay before the new action does its work.

OpenGL GL_SELECT or manual collision detection?

As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse, to delete, move, etc., in 3D.
I am wondering which method to use:
1. use OpenGL picking and selection ( glRenderMode(GL_SELECT) ), or
2. use manual collision detection, by casting a pick ray and checking whether it falls inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
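For illustration only, such a picking pass might look roughly like this in legacy OpenGL; drawObject is a hypothetical helper that emits one object's geometry without setting its own colour, and the mouse coordinates are assumed to have a top-left origin:

    #include <GL/gl.h>

    void drawObject(int index);   // hypothetical: draws object "index", no colour of its own

    // Returns the index of the object under the cursor, or -1 for the background.
    int pickObject(int mouseX, int mouseY, int windowHeight, int objectCount)
    {
        // Restrict all work to the single pixel under the cursor.
        glEnable(GL_SCISSOR_TEST);
        glScissor(mouseX, windowHeight - mouseY - 1, 1, 1);

        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_DITHER);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Encode each object's index (+1, so 0 means "background") in its colour.
        for (int i = 0; i < objectCount; ++i)
        {
            unsigned int id = static_cast<unsigned int>(i) + 1;
            glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
            drawObject(i);
        }

        unsigned char pixel[3] = {0, 0, 0};
        glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, pixel);

        glDisable(GL_SCISSOR_TEST);
        // The caller should now clear the buffers and draw the scene normally.

        unsigned int id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
        return static_cast<int>(id) - 1;
    }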
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding-box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding-box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all, I must apologize for giving you the wrong algorithm - I gave the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera-position-to-mouse vector as said before.
For each contour, loop through all its coords in pairs (0-1, 1-2, 2-3, ... n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (the contour edge and the mouse vector), instead of between pairs like I said before; do that for all the pairs and vector-add them all up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account) then you're outside the shape - regardless of facing. If you're interested in facing then, instead of the magnitude, you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the distance from the vector line to each point in turn. As you sum them up, if you are outside they all cancel out because the contour is closed; if you're inside they all add up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels, not the space inside the lines bounding them, which is what I read as your intention.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex types, which is painful. So I think that approach would work sometimes and give the wrong answer in other cases unless you did that conversion.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coordinates, generate the view-to-mouse-pointer vector. You also have all the coordinates of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence, find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take its dot product with the mouse-pointer direction vector. If it's positive then the mouse is inside the contour, if it's negative then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are hit by your mouse vector. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones the mouse vector misses by doing the same math as for the full contour but only on the 4 sides - if the mouse isn't inside the box, it can't be inside the contour either.
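As an alternative to the cross-product summation above, here is a hedged sketch of a standard even-odd (crossing number) point-in-polygon test; it assumes you have already intersected the mouse ray with the contour's plane and projected both that intersection point and the contour's vertices into 2D coordinates in that plane (the projection step is not shown):

    #include <vector>

    struct Point2 { double x, y; };

    bool pointInContour(const Point2& p, const std::vector<Point2>& contour)
    {
        bool inside = false;
        const std::size_t n = contour.size();
        for (std::size_t i = 0, j = n - 1; i < n; j = i++)
        {
            const Point2& a = contour[i];
            const Point2& b = contour[j];
            // Count edges that cross a horizontal ray extending to the right of p.
            bool crosses = ((a.y > p.y) != (b.y > p.y)) &&
                           (p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x);
            if (crosses)
                inside = !inside;
        }
        return inside;
    }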
The first option is easy to implement and widely used.