I have been looking for an example of terrain collision in C++, but I haven't found anything. I found one example that uses a .RAW file on toymaker.info, so don't mention that one.
How do you detect collision between an object and terrain? Do you test against every triangle? I know it will be slow. How do you detect collision between terrain and a .X file?
Here is one method:
Draw a line from the origin through the center of the sphere you wish to test. Now do a ray-triangle intersection using the line, or ray, you have just created. You now know whether the ray has passed through the triangle. If it has, get the point where it intersected the triangle and do a simple distance check against the sphere's center. If the distance is greater than the radius of the sphere, a collision has not occurred. If it is less, the sphere is intersecting the triangle.
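A minimal C++ sketch of that idea (ray-triangle intersection via the Möller-Trumbore method, then a distance check against the sphere's radius); the Vec3 type and helpers below are made up for the example:

#include <cmath>

struct Vec3 { float x, y, z; };
Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
float length(Vec3 v)        { return std::sqrt(dot(v, v)); }

// Moller-Trumbore ray/triangle intersection: returns true and the hit point if the
// ray origin + t * dir (t >= 0) passes through the triangle (v0, v1, v2).
bool rayTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, Vec3 &hit)
{
    const float eps = 1e-6f;
    Vec3 e1 = v1 - v0, e2 = v2 - v0;
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = origin - v0;
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    if (t < 0.0f) return false;                  // intersection is behind the origin
    hit = origin + t * dir;
    return true;
}

// The test described above: cast a ray from some origin through the sphere's center,
// intersect it with the triangle, and compare the hit point's distance to the center
// against the sphere's radius.
bool sphereHitsTriangle(Vec3 rayOrigin, Vec3 center, float radius,
                        Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 dir = center - rayOrigin;               // line from the origin through the center
    Vec3 hit;
    if (!rayTriangle(rayOrigin, dir, v0, v1, v2, hit)) return false;
    return length(hit - center) <= radius;       // within the radius -> intersecting
}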
It's also worth checking which side of the triangle the sphere's center is on. This is a simple dot product between the triangle's face normal and the vector from one of the triangle's vertices to the sphere's center. A value of 0 indicates the sphere's center lies in the triangle's plane, a positive result indicates it is on one side, and a negative result the other.
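That side test, reusing the Vec3 helpers from the sketch above:

// Signed "side" test: 0 means the center lies in the triangle's plane,
// positive and negative results mean opposite sides of it.
float sideOfTriangle(Vec3 center, Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 normal = cross(v1 - v0, v2 - v0);   // face normal (not normalized)
    return dot(normal, center - v0);
}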
You can also perform this calculation over a range of times. If you know you are simulating one second and you know the velocity the sphere is moving at, then you can calculate the point within that second at which the sphere intersected the triangle. This is simple to work out: say at the start the sphere is 4 units in front of the triangle and after a second it is 1 unit through it. In that second it has therefore travelled 5 units, so the intersection must have happened 4/5 = 0.8 seconds into the movement. This way you can react to collisions of particles even when they are travelling so fast they would otherwise pass straight through the item they are colliding with.
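The worked example (4 units in front, 1 unit through after a second) as a tiny helper, assuming signed distances are positive in front of the triangle and negative behind it:

// Fraction of the time step at which the crossing happened, given signed distances
// to the triangle's plane at the start and end of the step.
// e.g. timeOfCrossing(4.0f, -1.0f) == 0.8f
float timeOfCrossing(float distStart, float distEnd)
{
    return distStart / (distStart - distEnd);
}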
If your .X file means the DirectX mesh format, use the DirectX library to load .X meshes. Collision helpers such as D3DXIntersect are also provided.
Additional information: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172882(v=vs.85).aspx
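Assuming the .X file has already been loaded into an ID3DXMesh (e.g. with D3DXLoadMeshFromX), a rough sketch of a downward ray test with D3DXIntersect might look like this; the positions used here are placeholders and the ray must be given in the mesh's model space:

// pMesh: ID3DXMesh* loaded from the .X file (e.g. via D3DXLoadMeshFromX).
D3DXVECTOR3 rayPos(0.0f, 10.0f, 0.0f);   // start above the terrain
D3DXVECTOR3 rayDir(0.0f, -1.0f, 0.0f);   // cast straight down

BOOL  hit       = FALSE;
DWORD faceIndex = 0;
FLOAT u = 0.0f, v = 0.0f, dist = 0.0f;

HRESULT hr = D3DXIntersect(pMesh, &rayPos, &rayDir,
                           &hit, &faceIndex, &u, &v, &dist,
                           NULL, NULL);   // not collecting all hits

if (SUCCEEDED(hr) && hit)
{
    // dist is the distance along rayDir to the hit point,
    // u/v are barycentric coordinates within the hit triangle.
    float terrainHeight = rayPos.y - dist;
}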
I implemented frustum culling in my system, it tests the frustum planes on every object's bounding sphere, and it works great. (I find the PlaneVsAabb check unneeded)
However, the bounding sphere of the mesh is fitted to its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can move outside the sphere.
This often results in the mesh getting culled even though some of its vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
1. For every mesh in every frame, calculate its new bounding sphere based on the bone changes. (I have no idea how to start with this...) Could this be too inefficient?
2. Add a fixed offset to every sphere radius (based on the entire mesh size, maybe?), so there is no chance of the mesh getting culled even when animated.
(1) would indeed be inefficient in real time. However, you can do a mixture of both by computing the largest possible bounding sphere statically, i.e. when you load the mesh. Using that in (2) would guarantee a better result than some arbitrary offset you make up.
(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then calculate a bounding box or bounding sphere. Or you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
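A rough sketch of the per-update version in C++, assuming you have a handful of locator rest positions and their bones' current world matrices (Vec3, Mat4 and transformPoint below are stand-ins for whatever your math library provides):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };           // row-major: rotation/scale + translation

// Transform a point by a bone's world matrix (w assumed to be 1).
Vec3 transformPoint(const Mat4 &m, const Vec3 &p)
{
    return { m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3],
             m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3],
             m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3] };
}

struct BoundingSphere { Vec3 center; float radius; };

// locatorLocalPos[i] is the rest-pose position of a locator (e.g. a dummy bone on the
// sword tip) and boneWorld[i] is that bone's current world matrix for this frame.
BoundingSphere updateBounds(const std::vector<Vec3> &locatorLocalPos,
                            const std::vector<Mat4> &boneWorld)
{
    std::vector<Vec3> worldPos(locatorLocalPos.size());
    for (size_t i = 0; i < locatorLocalPos.size(); ++i)
        worldPos[i] = transformPoint(boneWorld[i], locatorLocalPos[i]);

    // Simple (non-minimal) sphere: center on the average position,
    // radius out to the farthest locator.
    Vec3 c = { 0.0f, 0.0f, 0.0f };
    for (const Vec3 &p : worldPos) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= worldPos.size(); c.y /= worldPos.size(); c.z /= worldPos.size();

    float r = 0.0f;
    for (const Vec3 &p : worldPos)
    {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        r = std::max(r, std::sqrt(dx*dx + dy*dy + dz*dz));
    }
    return { c, r };
}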
I'm working on a physics simulation of projectiles, and I'm stuck on the ground collision. I have a ground made of quads.
I have the points stored in an array, so I thought that if I take the quad where the collision happens and calculate the angles of the quad (in the x and z directions), I can then use that to change the velocity of the projectile.
This is where I'm stuck. I thought I should find the lowest and the highest point, then find the vector between them, but this will not give the angles in all directions, which is what I want. I know there must be a way of doing this, but how?
What you want is a normal of a quad.
Here's an answer that shows you how to get a quad's normal
After you have the normal, you need to calculate the collision response force. Its direction is the quad's normal, and its strength is the force the projectile exerts in the direction of the quad. The exerted force is calculated using the dot product of the projectile's velocity and the reversed quad normal (here's a wiki link for the dot product).
The response vector should be this:
Vector3 responseForce = dot(projectile.vel, -1 * quad.normal) * quad.normal; // push back along the quad's normal
projectile.vel += responseForce;
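Putting the two steps together as a small C++ sketch; the Vector3 helpers and the Projectile/Quad types here are made up for the example:

#include <cmath>

struct Vector3 { float x, y, z; };
Vector3 operator-(Vector3 a, Vector3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vector3 operator*(float s, Vector3 v)   { return {s * v.x, s * v.y, s * v.z}; }
Vector3 &operator+=(Vector3 &a, Vector3 b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
float dot(Vector3 a, Vector3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vector3 cross(Vector3 a, Vector3 b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vector3 normalize(Vector3 v) { return (1.0f / std::sqrt(dot(v, v))) * v; }

struct Projectile { Vector3 pos, vel; };
struct Quad       { Vector3 p0, p1, p2, p3, normal; };

// Face normal of the quad from three of its corners (assumes the quad is planar).
Vector3 quadNormal(const Quad &q)
{
    return normalize(cross(q.p1 - q.p0, q.p2 - q.p0));
}

// The response described above: the strength is how hard the projectile pushes
// against the quad, the direction is the quad's normal.
void respondToGround(Projectile &projectile, const Quad &quad)
{
    float strength = dot(projectile.vel, -1.0f * quad.normal);
    if (strength > 0.0f)                       // only react if moving into the quad
        projectile.vel += strength * quad.normal;
}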
I am looking for an efficient algorithm for intersecting a mesh (a set of triangles) with a cone (given by an origin, a direction, and an angle around that direction). More precisely, I want to find the intersection point that is closest to the cone's origin. For now, all I can think of is to intersect the mesh with several rays from the cone's origin and take the closest point. (Of course, some spatial structure would be built over the mesh to reject unnecessary intersection tests.)
Also, I found the following algorithm with a brief description:
"Cone to mesh intersection is computed on the GPU by drawing the cone geometry with the mesh and reading the minimum depth value marking the intersection point".
Unfortunately its implementation isn't obvious to me.
So can anyone suggest something more efficient than what I have, or explain in more detail how it can be done on the GPU using OpenGL?
On the GPU I would do it like this:

1. set the view
   - at the cone's origin
   - directed outwards
   - covering the biggest circle slice
   - for an infinite cone, use the max Z value of the mesh vertices in the view coordinate system
2. clear buffers
3. draw the mesh
   - but in the fragment shader draw only pixels intersecting the cone:
     |fragment.xyz - screen_middle| = tan(cone_ang/2) * fragment.z
4. read the Z-buffer
   - read the fragments and, from the valid (filled) ones, select the one closest to the cone's origin
[notes]
If your gfx engine can also handle output values from your fragment shader, then you can skip bullet 4 and do the min-distance search inside bullet 3 instead of rendering. That will speed up the process considerably (you need just a single xyz vector).
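If you end up testing on the CPU instead (for example per candidate vertex or per ray hit), the basic inside-the-cone check the fragment test corresponds to looks roughly like this; the Vec3 helpers are made up for the example:

#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// True if point p lies inside the (infinite) cone given by its origin, unit axis
// direction, and full opening angle cone_ang (in radians).
bool insideCone(Vec3 p, Vec3 origin, Vec3 axis, float cone_ang)
{
    Vec3 toP = normalize(p - origin);
    // angle between the axis and the direction to p must be at most cone_ang / 2
    return dot(toP, axis) >= std::cos(cone_ang * 0.5f);
}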
Here is another geometric problem:
I have created a 3-dimensional triangulated iso-surface of a point cloud using the marching cubes algorithm. Then I intersect this iso-surface with a plane and get a number of line segments that represent the contour lines of the intersection.
Is there any way to sort the vertices of these line segments clockwise so that I can draw them as a closed path and do a flood fill?
Thanks in advance!
It depends on how complex your isosurface is, but the simplest thing I can think of that might work is:
1. For each point, project onto the plane. This will give you a set of points in 2D.
2. Make sure these are centered, via a translation to the centroid or the center of the bounding box.
3. For each 2D point, run atan2 and get an angle. atan2 just puts things in the correct quadrant.
4. Order by that angle.
If your isosurface/plane intersection is monotonically increasing in angle around the centroid, then this will work fine. If not, then you might need to find the 2 nearest neighbours of each point in the plane and hope that that makes a simple loop. In fact, the simple-loop idea might be simpler, because you don't need to project and you don't need to compute angles - just do everything in 3D.
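A sketch of steps 1-4 in C++, assuming the cutting plane is described by an origin plus two orthonormal in-plane axes (planeU, planeV); the Vec3 helpers are made up for the example:

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Sort the contour vertices by angle around their centroid within the cutting plane.
void sortContour(std::vector<Vec3> &points, Vec3 planeOrigin, Vec3 planeU, Vec3 planeV)
{
    // 1. project each point onto the plane -> 2D coordinates
    std::vector<std::pair<float, float>> pts2d;
    for (const Vec3 &p : points)
        pts2d.push_back({dot(p - planeOrigin, planeU), dot(p - planeOrigin, planeV)});

    // 2. center on the centroid
    float cx = 0.0f, cy = 0.0f;
    for (const auto &q : pts2d) { cx += q.first; cy += q.second; }
    cx /= pts2d.size();
    cy /= pts2d.size();

    // 3. + 4. order the original 3D points by the atan2 angle of their projection
    std::vector<size_t> order(points.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return std::atan2(pts2d[a].second - cy, pts2d[a].first - cx)
             < std::atan2(pts2d[b].second - cy, pts2d[b].first - cx);
    });

    std::vector<Vec3> sorted;
    sorted.reserve(points.size());
    for (size_t i : order) sorted.push_back(points[i]);
    points = sorted;
}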
What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, if the polygon projected on the screen has a counter-clockwise winding it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that point towards you (or, at least, any face that points away from you is always occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it will have two consequences :
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will be drawn on top of a "culled" one)
So you basically get x2 perf for free.
In order to know whether a triangle is front- or back-facing, you take v0-v1 and v0-v2 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw.
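A minimal sketch of that test; the Vec3 helpers are made up for the example:

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// True if the triangle (v0, v1, v2) faces the viewer and should be drawn.
// viewVector points from the camera towards the triangle.
bool isFrontFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 viewVector)
{
    Vec3 normal = cross(v1 - v0, v2 - v0);   // same direction as cross(v0-v1, v0-v2)
    return dot(normal, viewVector) < 0.0f;   // pointing towards the camera -> draw
}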
Triangles have their coordinates specified in a specific order, clockwise IIRC.
When the graphics engine looks at a triangle from a specific direction and the coordinates are counter-clockwise, it knows that it is looking at the back side of the triangle through the object. As the front side of the object covers that triangle, it doesn't have to be drawn.