Regarding the resolution and accuracy in SDL - C++

I am writing a program with SDL. I set up the window as 600x600 and then draw a circle on it. I randomly shoot the whole screen with points and finally count how many points land inside the circle, which can be used to estimate the circle's area. But I found that if I initialize the window to a bigger size (like 1024x768), the same method lowers the accuracy of the area calculation (to a small extent, but still not that small). Why doesn't increasing the resolution of the window help with this issue? How can I take advantage of the bigger resolution?
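Here is a minimal sketch of the sampling approach described above, assuming a 600x600 window and a circle of radius 300 centred in it (the radius and sample count are my assumptions; the question doesn't give them):

#include <cstdlib>
#include <iostream>

int main() {
    const int W = 600, H = 600, N = 1000000;
    const double cx = W / 2.0, cy = H / 2.0, r = 300.0;
    int hits = 0;
    for (int i = 0; i < N; ++i) {
        // Shoot a uniformly random point at the screen.
        double x = std::rand() / (RAND_MAX + 1.0) * W;
        double y = std::rand() / (RAND_MAX + 1.0) * H;
        if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
            ++hits;
    }
    // Each sample stands for W*H/N of the screen area.
    std::cout << "estimated area: " << double(hits) / N * W * H << "\n";
}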

I then draw a circle on the window. I randomly shoot the whole screen with points and finally count how many points land inside the circle, which can be used to estimate the circle's area.
The area of a circle is equal to pi*(r^2), where "r" is the circle's radius.
But I found that if I initialize the window to a bigger size
Your method will not produce reliable results unless the random number generator is perfect: it would have to produce an absolutely perfect uniform distribution of points, which is not going to happen. Also, you'll need to know the area of one "hit", which will be a big problem.
If you insist on reinventing the wheel (and on avoiding pi*(r^2) for some unknown reason), then instead of "shooting random points", simply scan the image line by line and count the points that are inside the circle. It will also probably be much faster than trying to abuse a pseudo-random number generator. And you can accelerate the process (at the cost of some precision) by checking every 2nd pixel (and row), every 3rd, every 4th, and so on, instead of every pixel; that still gives a perfectly uniform distribution. It will be much more reliable and predictable than your PRNG abuse.
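For illustration, a line-by-line scan of the same 600x600 setup might look like this (again assuming a centred circle of radius 300; raise step to trade precision for speed):

#include <iostream>

int main() {
    const int W = 600, H = 600, step = 1;
    const double cx = W / 2.0, cy = H / 2.0, r = 300.0;
    long inside = 0, total = 0;
    for (int y = 0; y < H; y += step) {
        for (int x = 0; x < W; x += step) {
            ++total;
            // Count grid points that fall inside the circle.
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                ++inside;
        }
    }
    std::cout << "estimated area: " << double(inside) / total * W * H << "\n";
}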

Related

Linear Interpolation and Object Collision

I have a physics engine that uses AABB testing to detect object collisions and an animation system that does not use linear interpolation. Because of this, my collisions act erratically at times, especially at high speeds. Here is a glaringly obvious problem in my system...
For the sake of demonstration, assume a frame in our animation system lasts 1 second and we are given the following scenario at frame 0.
At frame 1, the collision of the objects will not be detected, because c1 will have traveled past c2 on the next draw.
Although I'm not using it, I have a bit of a grasp on how linear interpolation works because I have used linear extrapolation in this project in a different context. I'm wondering if linear interpolation will solve the problems I'm experiencing, or if I will need other methods as well.
There is a part of me that is confused about how linear interpolation is used in the context of animation. The idea is that we can achieve smooth animation at low frame rates. In the above scenario, we cannot simply set c1 to be centered at x=3 in frame 1. In reality, they would have collided somewhere between frame 0 and frame 1. Does linear interpolation automatically take care of this and allow for precise AABB testing? If not, what will it solve, and what other methods should I look into to achieve smooth and precise collision detection and animation?
The phenomenon you are experiencing is called tunnelling, and it is a problem inherent to discrete collision detection architectures. You are correct in feeling that linear interpolation may have something to do with the solution, as it can allow you to predict, within a margin of error (usually), the path of an object between frames, but this is just one piece of a much larger solution. The terminology I've seen associated with these types of solutions is "Continuous Collision Detection". The topic is large and gets quite complex, and there are books that discuss it, such as Real-Time Collision Detection, as well as other online resources.
So to answer your question: no, linear interpolation on its own won't solve your problems (unless you're only dealing with circles or spheres).
What to Start Thinking About
The way the solutions look and behave is dependent on your design decisions, and they are generally large. So, just to point in the direction of the solution, the fundamental idea of continuous collision detection is to figure out how far between the earlier frame and the later frame the collision happens, and in what position and rotation the two objects are at that point. Then you must calculate the configuration the objects will be in at the later frame time in response to this. Things get very interesting addressing these problems for anything other than circles in two dimensions.
I haven't implemented this, but I've seen described a solution where you march the two candidates forward between the frames, advancing their position with linear interpolation and their orientation with spherical linear interpolation, and checking with discrete algorithms whether they're intersecting (the Gilbert-Johnson-Keerthi algorithm). From there you continue to apply discrete algorithms to get the smallest penetration depth (the Expanding Polytope Algorithm) and pass that, along with the remaining time between the frames, to a solver to get how the objects look at your later frame time. This doesn't give an analytic answer, but I don't know of an analytic answer for generalized 2D or 3D cases.
If you don't want to go down this path, your best weapon in the fight against complexity is assumptions: if you can assume your high-velocity objects can be represented as a point, things get easier; if you can assume the orientation of the objects doesn't matter (circles, spheres), things get easier; and it keeps going and going. The topic is beyond interesting and I'm still on the path of learning it, but it has provided some of the most satisfying moments of my time programming. I hope these ideas get you on that path as well.
Edit: Since you specified you're working on a billiard game.
First we'll check whether discrete or continuous is needed
Is any amount of tunnelling acceptable in this game? Not in billiards, no.
What is the speed at which we will see tunnelling? Using a 0.0285m radius for the ball (standard American) and a 0.01s physics step, we get 2.85m/s as the minimum speed at which collisions start giving a bad response (at that speed the ball travels one full radius, 0.0285m, per step). I'm not familiar with the speed of billiard balls, but that number feels too low.
So just checking on every frame if two of the balls are intersecting is not enough, but we don't need to go completely continuous. If we use interpolation to subdivide each frame we can increase the velocity needed to create incorrect behaviour: With 2 subdivisions we get 5.7m/s, which is still low; 3 subdivisions gives us 8.55m/s, which seems reasonable; and 4 gives us 11.4m/s which feels higher than I imagine billiard balls are moving. So how do we accomplish this?
Discrete Collisions with Frame Subdivisions using Linear Interpolation
Using subdivisions is expensive, so it's worth putting time into candidate detection to use them only where needed. That is another problem with a bunch of fun solutions, and unfortunately out of the scope of this question.
So you have two candidate circles which will very probably collide between the current frame and the next frame. In C++-style pseudocode the algorithm looks like:

double dt = 0.01;
int subdivisions = 4;

circle1.next_position = circle1.position + (circle1.velocity * dt);
circle2.next_position = circle2.position + (circle2.velocity * dt);

for (int i = 0; i < subdivisions; ++i) {
    double alpha = double(i + 1) / subdivisions;  // cast avoids integer division
    temp_c1.position = interpolate(circle1.position, circle1.next_position, alpha);
    temp_c2.position = interpolate(circle2.position, circle2.next_position, alpha);
    if (intersecting(temp_c1, temp_c2))
        return true;   // intersection confirmed
}
return false;          // no intersection
Where the interpolate signature is interpolate(start, end, alpha)
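For reference, a minimal interpolate might look like this (assuming a Vec2 type with the usual arithmetic operators; this helper is an illustration, not part of the original answer):

Vec2 interpolate(const Vec2& start, const Vec2& end, double alpha) {
    // Standard linear interpolation: alpha = 0 yields start, alpha = 1 yields end.
    return start + (end - start) * alpha;
}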
So here you have interpolation being used to "move" the circles along the path they would take between the current frame and the next. On a confirmed intersection you can compute the penetration depth and pass it, along with the delta time (dt / subdivisions), the two circles and the collision points, to a resolution step that determines how they should respond to the collision.

Does drawing a rectangle on the screen using RenderDrawRect take the same length of time as filling in every required pixel using RenderDrawPoint?

Let's say I wanted to draw a 50px by 60px rectangle in SDL2 starting from the point (0,0). Is it faster to call SDL_RenderDrawRect (renderer, SDL_Rect structure) than to fill in every individual pixel using a nested for loop and calling SDL_RenderDrawPoint?
Or do both operations take the same length of time (which is what I think would happen)? I tried looking at the SDL source code, although I had difficulty fully understanding the functions for rendering.
Yes, that would be my absolute expectation.
Even if there were no hardware acceleration going on, there's more overhead in doing one function call per pixel. Think of just computing the address inside the surface where each pixel is going to be written: the pixel-at-a-time approach needs to compute that fresh every time, while the rectangle code most likely can re-use the last value it computed for the vast majority of the writes. These things matter.
But there very likely is hardware acceleration, so the difference in performance can be great.
Always use the most high-level API function you can, to give more leverage for optimization and acceleration.
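For comparison, here is a minimal sketch of the two approaches for the 50x60 rectangle from the question, assuming an already-initialized SDL_Renderer. Note that SDL_RenderDrawRect draws only the outline; SDL_RenderFillRect is the single call that covers every pixel:

#include <SDL.h>

// One call: SDL (and very likely the GPU) fills the whole rectangle.
void draw_rect_fast(SDL_Renderer* renderer) {
    SDL_Rect rect = {0, 0, 50, 60};
    SDL_RenderFillRect(renderer, &rect);
}

// One function call per pixel: the same pixels, far more overhead.
void draw_rect_slow(SDL_Renderer* renderer) {
    for (int y = 0; y < 60; ++y)
        for (int x = 0; x < 50; ++x)
            SDL_RenderDrawPoint(renderer, x, y);
}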

Windows mouse coordinates VS OpenGL mouse coordinates

How can I ensure (I know "determine" isn't the right term) that every position of the mouse in window space gets converted to OGL space (-1, 1)? Say the user moves the mouse very fast; I assume all of its previous positions are converted into OGL coordinates. What I am trying to say is: is a common CPU fast enough to do that (to track all previous events), even if my C++ OGL coordinate converter is computationally expensive, say with very time-consuming loops in there, or with a very fast method? How can I ensure that no OGL coordinates are skipped if I move the mouse fast enough?
I'm not jumping to any conclusions here or assuming anything other than what you might think.
Edit:
My program's main loop is like this (pseudocode):

void Pollevents()
{
    for every_obj in this
    {
        if not Collide()
        {
            Move(x, y)
        }
    }
}

void MousePos()
{
    mouse.pos = To_OGL_Coord2f()
}
These are separate routines to be executed (but not actually real threads).
Suppose mouse.pos = (0, 0), and then I move the mouse fast enough that the new mouse.pos is (10, 10). In a single iteration of the loop, the mouse position has changed very far from where it was before. Now, how can I tell my program, by implementing Bresenham's line algorithm as mentioned by Christian Rau, that the positions generated by that algorithm (which aren't being tracked) have been crossed by the mouse? Do I add another loop to step through all those positions?
How can I ensure that no OGL coordinates are skipped if I move the mouse fast enough?
That's not possible, since there is no way to let the OS generate mouse events for each and every point a mouse move would have crossed when tracked with theoretically infinite precision.
The only way to ensure this is to fill in the missing points between the two (possibly far away) mouse positions yourself. If you just want to draw a point for each position the mouse moved over (maybe using OpenGL), draw a line instead.
If you on the other hand need those intermediary mouse positions yourself for further computations, you won't get around computing them yourself using some common line rasterization algorithm (like the Bresenham Algorithm, the school book algorithm for line rasterization). What this basically does is compute each point on a discrete grid that a line from one point to another would have crossed (similar to what your graphics card does when converting a line into discrete pixels), so this will generate each discrete mouse position your virtual mouse path has crossed (ignoring any non-linear mouse movement between measurement points).
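For illustration, the classic integer Bresenham algorithm looks like this (a generic sketch, not tied to any particular windowing API; Point and bresenham are names I made up):

#include <cstdlib>
#include <vector>

struct Point { int x, y; };

// Returns every discrete grid point a line from a to b crosses,
// handling all octants via the error-accumulation formulation.
std::vector<Point> bresenham(Point a, Point b) {
    std::vector<Point> points;
    int dx = std::abs(b.x - a.x), sx = a.x < b.x ? 1 : -1;
    int dy = -std::abs(b.y - a.y), sy = a.y < b.y ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        points.push_back(a);
        if (a.x == b.x && a.y == b.y) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; a.x += sx; }
        if (e2 <= dx) { err += dx; a.y += sy; }
    }
    return points;
}

Feed it the previous and current mouse positions and you get every discrete position on the straight path between them.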
EDIT: If you don't need a discrete line with proper equal-width characteristics, a much easier way than messing with line rasterization would be to just work with floating-point positions and do a simple linear interpolation between the end points, like datenwolf writes in his comment. This will also give you better timing precision than discrete mouse positions. But it all depends on what you actually want to do with those mouse positions (and now would be a good time to tell us).
EDIT: From your updated question it looks like you need the mouse positions at a high granularity in order to compute the collision of the mouse with some objects. In this case you don't actually need the intermediary points at all. Just take the line from the current mouse position to the previous one (represented as just a pair of points, or whatever theoretical line representation you like) and compute the collision of the objects with that line instead of with the individual points.
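If the objects are circles, for instance, that line-versus-object test can be as simple as a segment-to-circle distance check; here is a sketch under that assumption (Vec2 and segment_hits_circle are illustrative names):

#include <algorithm>

struct Vec2 { double x, y; };

// Does the segment from a to b pass within `radius` of centre c?
bool segment_hits_circle(Vec2 a, Vec2 b, Vec2 c, double radius) {
    const double abx = b.x - a.x, aby = b.y - a.y;
    const double acx = c.x - a.x, acy = c.y - a.y;
    const double len2 = abx * abx + aby * aby;
    // Project c onto the segment, clamped to the end points.
    const double t = len2 > 0.0
        ? std::clamp((acx * abx + acy * aby) / len2, 0.0, 1.0)
        : 0.0;
    const double dx = a.x + t * abx - c.x;
    const double dy = a.y + t * aby - c.y;
    return dx * dx + dy * dy <= radius * radius;
}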

How to calculate the total area of all bodies/shapes on the screen

I'm trying to calculate the total area that all bodies or shapes occupy on the screen. I.e. if I have 2 circles, A and B, that intersect each other, I want to calculate the area that A union B covers (on the screen).
I've been reading through the chipmunk documentation and looked in the chipmunk API for a method that I might use, but I haven't found anything that I can use directly.
The only two methods I found that might be useful are pointQueryFirst:layers:group: and segmentQueryFirstFrom:to:layers:group:
The way I was thinking was to:
Use the first method (pointQueryFirst): go through all points on the screen and call this method on each. If a point doesn't have a shape in it, accumulate it in a variable. Then divide that variable's value by the area of the screen to get the percentage of the screen that is free.
Or use the second method (segmentQueryFirstFrom) and create a recursive algorithm that divides the screen in half and runs the query on each half; if a half contains a shape, divide that area into halves and check whether those contain any shapes, and so on...
But I expect that in using them, the overall performance will suffer. Is there another solution that I can use? Another method that I haven't found? Any help is greatly appreciated.
Chipmunk isn't particularly going to be able to help you with that. The methods you mentioned will work, but they will be ridiculously slow.
I think I would do a good old fashioned occlusion query. Render the shapes into a texture or some sort of offscreen buffer and then count the pixels.
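A minimal sketch of the pixel-counting idea in raw OpenGL, assuming the shapes have already been rendered in solid colours on a black background (the function name and the black-background convention are my assumptions):

#include <vector>
#include <GL/gl.h>

// Fraction of the framebuffer covered by at least one shape.
double covered_fraction(int width, int height) {
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    long covered = 0;
    for (int i = 0; i < width * height; ++i) {
        // Any non-black pixel counts as occupied.
        if (pixels[4 * i] | pixels[4 * i + 1] | pixels[4 * i + 2])
            ++covered;
    }
    return double(covered) / (width * height);
}

Multiply the returned fraction by the screen area to get the union area in pixels.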

OpenGL GL_SELECT or manual collision detection?

As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse, to delete it, move it, etc., in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by using a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach to selection:
1. Enable and set the scissor test to a 1x1 window at the cursor position.
2. Draw the screen with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity; this colour will become the object ID for picking.
3. Call glReadPixels and retrieve the colour, which then serves to identify the picked object.
4. Clear the buffers, reset the scissor to the normal size and draw the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operations won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have a really large number of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
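A compact sketch of that colour-picking pass in legacy OpenGL (the drawSceneWithIdColours callback, the packing of the ID into RGBA, and the bottom-left origin fix-up are my illustrative choices, not part of the original answer):

#include <GL/gl.h>

void drawSceneWithIdColours();  // renders each entity in its unique solid colour

// Returns the 32-bit ID of the object under the cursor.
unsigned int pick_object(int mouseX, int mouseY, int windowHeight) {
    glEnable(GL_SCISSOR_TEST);
    // Mouse coords are usually top-left based; GL's origin is bottom-left.
    glScissor(mouseX, windowHeight - mouseY - 1, 1, 1);
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawSceneWithIdColours();

    unsigned char rgba[4];
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glDisable(GL_SCISSOR_TEST);

    // Reassemble the ID from the four 8-bit channels.
    return (unsigned(rgba[0]) << 24) | (unsigned(rgba[1]) << 16) |
           (unsigned(rgba[2]) << 8)  |  unsigned(rgba[3]);
}

After this you would clear again and draw the scene normally, as step 4 above describes.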
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test; however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the backbuffer in a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algorithm; I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in it in pairs (0-1, 1-2, 2-3, ..., n-0) and make a vector out of each pair as before. I.e. walk the contour.
Now take the cross product of those two (contour edge with the mouse vector) instead of between pairs like I said before; do that for all the pairs and vector-add them all up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking into account rounding errors) then you're outside the shape, regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out, because the contour is closed. If you're inside then they all add up. It's actually Gauss's law of electromagnetic fields in physics...
See http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed": i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels, not the space inside the lines bounding them, which is what I read you as wanting to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex types, which is painful. So I think that would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and do a dot product with the mouse pointer direction vector. If it's positive then the mouse is inside the contour, if it's negative then it's not, and if it's zero then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones off the mouse vector by doing the same math as for the full contour but only on the 4 sides, and if the mouse vector isn't inside the box then the contour cannot be either.
The first is easy to implement and widely used.