Angles of quad in 3D space - C++

I'm working on a physics simulation of projectiles, and I'm stuck on the ground collision. The ground is made of quads, and I have their points stored in an array. My idea was to take the quad where the collision happens, calculate the quad's angles (in the x and z directions), and use those to change the velocity of the projectile.
This is where I'm stuck. I thought I could find the lowest and highest points and take the vector between them, but that does not give the angles in all directions, which is what I want. I know there must be a way of doing this, but how?

What you want is the normal of the quad.
Here's an answer that shows you how to get a quad's normal.
Once you have the normal, you need to calculate the collision response force. Its direction is the quad's normal, and its magnitude is the force the projectile exerts in the direction of the quad. That exerted force is calculated with the dot product of the projectile's velocity and the reversed quad normal (here's a wiki link for the dot product).
The response vector should be this:
Vector3 responseForce = dot(projectile.vel, -1 * quad.normal) * quad.normal;
projectile.vel += responseForce;
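
For reference, here is a minimal, self-contained sketch of both steps (computing the quad's normal from a cross product, then applying the response above), assuming a simple hand-rolled Vector3 type; everything except the response formula itself is an illustrative assumption:

#include <cmath>

struct Vector3 {
    float x, y, z;
};

Vector3 operator-(Vector3 a, Vector3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vector3 operator+(Vector3 a, Vector3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vector3 operator*(float s, Vector3 v)   { return {s * v.x, s * v.y, s * v.z}; }

float dot(Vector3 a, Vector3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vector3 cross(Vector3 a, Vector3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vector3 normalize(Vector3 v) {
    float len = std::sqrt(dot(v, v));
    return (1.0f / len) * v;
}

// Normal of a planar quad from three of its corners: cross the two edges
// that share the first corner, then normalize.
Vector3 quadNormal(Vector3 p0, Vector3 p1, Vector3 p2) {
    return normalize(cross(p1 - p0, p2 - p0));
}

// The response described above: project the velocity onto the reversed
// normal and push the projectile back along the normal.
void applyResponse(Vector3& velocity, Vector3 normal) {
    Vector3 responseForce = dot(velocity, -1.0f * normal) * normal;
    velocity = velocity + responseForce;
}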

Related

Modifying a texture on a mesh at given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap that I want to modify at a certain texture coordinate, which I pick with my mouse. To do this I first go from screen coordinates to a world position - I have done that. The next step, going from that world position to the right texture coordinate, puzzles me. How do I do that?
If you are using a simple heightmap as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y = 0).
You can discard the y coordinate from the world coordinate you have calculated, and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse's world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your heightmap is mapped to), e.g. mouse (50, 25) - origin (-100, -100) = (150, 125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad:
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
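
A minimal sketch of that mapping, assuming a flat rectangular terrain; the names (worldToUV, the terrain origin/size, the 1024-pixel texture) are illustrative, not from the question:

#include <cstdio>

struct Vec2 { float x, y; };

// Map a world-space position on a flat rectangular terrain to texture coordinates.
Vec2 worldToUV(Vec2 worldPos, Vec2 terrainOrigin, float terrainWidth, float terrainHeight) {
    Vec2 uv;
    uv.x = (worldPos.x - terrainOrigin.x) / terrainWidth;   // 150 / 200 = 0.75
    uv.y = (worldPos.y - terrainOrigin.y) / terrainHeight;  // 125 / 200 = 0.625
    return uv;
}

int main() {
    Vec2 uv = worldToUV({50.0f, 25.0f}, {-100.0f, -100.0f}, 200.0f, 200.0f);
    // Multiply by the texture size if you want pixel coordinates instead.
    int px = static_cast<int>(uv.x * 1024);
    int py = static_cast<int>(uv.y * 1024);
    std::printf("uv = (%.3f, %.3f), pixel = (%d, %d)\n", uv.x, uv.y, px, py);
    return 0;
}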
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I'll name them mouseCoord.
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle point has texture coordinates, and those are interpolated by barycentric coordinates
If so, the solution goes like this:
use camCoord as origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; a more sophisticated approach would rule out most triangles first with some other structure, like partitioning the world into cubes, tracing the ray through the cubes and only testing the triangles that overlap those cubes. Intersection with a triangle can be computed as on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle points. The result is the texture coordinates of the intersection point, aka what you want.
If I misunderstood what you wanted, please edit your question with additional information.
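
A minimal sketch of steps 2-4 under those assumptions, using the Möller-Trumbore intersection (which yields the barycentric weights directly); the vector types and function names are illustrative:

#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray/triangle intersection. Returns the interpolated
// texture coordinate at the hit point, or nothing if the ray misses.
std::optional<Vec2> intersectTriangleUV(Vec3 origin, Vec3 dir,
                                        Vec3 p0, Vec3 p1, Vec3 p2,
                                        Vec2 uv0, Vec2 uv1, Vec2 uv2)
{
    const float eps = 1e-6f;
    Vec3 e1 = sub(p1, p0);
    Vec3 e2 = sub(p2, p0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to triangle

    float invDet = 1.0f / det;
    Vec3 t = sub(origin, p0);
    float u = dot(t, p) * invDet;                    // barycentric weight of p1
    if (u < 0.0f || u > 1.0f) return std::nullopt;

    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * invDet;                  // barycentric weight of p2
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;

    float dist = dot(e2, q) * invDet;                // distance along the ray
    if (dist < 0.0f) return std::nullopt;            // triangle is behind the origin

    // Step 4: weight the corner texture coordinates by the barycentrics.
    float w = 1.0f - u - v;                          // weight of p0
    return Vec2{ w*uv0.u + u*uv1.u + v*uv2.u,
                 w*uv0.v + u*uv1.v + v*uv2.v };
}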
Another variant, specific to a height map:
Assume the assumptions are changed like this:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within the tile, the height value is interpolated somehow, like by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible (approximate) algorithm for that:
Again, compute origin and direction.
Without loss of generality, assume the direction has a larger change in the x direction. If not, exchange x and y in the algorithm.
Trace the ray with a given step length for x, that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. it has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then perhaps forwards in even finer steps, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use that as weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks. Choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides.
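
A rough sketch of that marching loop, assuming a hypothetical heightAt(x, y) lookup that does the bilinear interpolation (the step size, search range and all names are assumptions):

#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

// Hypothetical bilinear heightmap lookup - not shown here.
float heightAt(float x, float y);

// March the ray in fixed x steps until its z coordinate drops below the
// terrain height, then refine the hit with a few bisection steps.
std::optional<Vec3> traceHeightmap(Vec3 origin, Vec3 dir,
                                   float stepX, float maxDistanceX, int refineSteps = 16)
{
    // Scale the direction so each step advances x by stepX (assumes dir.x != 0,
    // as in the write-up above; otherwise swap the roles of x and y).
    float scale = stepX / std::fabs(dir.x);
    Vec3 step { dir.x * scale, dir.y * scale, dir.z * scale };

    Vec3 pos = origin;
    for (float travelled = 0.0f; travelled < maxDistanceX; travelled += stepX) {
        Vec3 next { pos.x + step.x, pos.y + step.y, pos.z + step.z };
        if (next.z < heightAt(next.x, next.y)) {
            // We just crossed the surface between pos and next: bisect the segment.
            Vec3 lo = pos, hi = next;
            for (int i = 0; i < refineSteps; ++i) {
                Vec3 mid { (lo.x + hi.x) * 0.5f, (lo.y + hi.y) * 0.5f, (lo.z + hi.z) * 0.5f };
                if (mid.z < heightAt(mid.x, mid.y)) hi = mid; else lo = mid;
            }
            return hi;  // approximate collision point; its x and y identify the tile
        }
        pos = next;
    }
    return std::nullopt;  // no hit within the search range
}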
Another variant would be to trace the ray over the points at which its x/y coordinates cross the tile grid and then check whether the z coordinate has gone below the height map. Then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbouring tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the grid is crossed by the ray. Compute the heights there to get two (x,y,z) coordinates. Create a line out of them and compute the intersection of that line with the ray. That intersection is the intersection with the tile's height map.
The simplest way is to render the mesh in a pre-pass with the UVs as the colour. No screen-to-world conversion is needed: the UV is simply the value at the mouse position. Just be careful with mips/filtering etc.

C++ raytracer and normalizing vectors

So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made
Pixel color is then added based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, lets say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized versions point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the two normals' angles in relation to each other, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
If u.v < 0 then the angle between them is obtuse.
If u.v == 0 they are at exactly a 90 degree angle.
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the normal of the sphere's surface at your collision point xyz and the vector from that xyz to your light source.
So if the sphere has center at xs, ys, zs, and the light source is at xl, yl, zl, and the collision is at xyz then
vector 1 is x-xs, y-ys, z-zs and
vector 2 is xl-x, yl-y, zl-z
if the dot product between these is < 0 then the light ray hit the opposite side of the sphere and can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (which will then be in shadow and thus only get ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinate with the eye point coordinate; once again, if that dot product is < 0 the point faces away from the eye and is not visible.
Then compute the shading based on the dot product of the eye-to-surface vector and the surface-to-light vector.
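
A condensed sketch of those checks plus a simple Lambertian term, assuming hand-rolled vectors; sphereCenter, hitPoint, lightPos and eyePos are placeholder names:

#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Returns a simple diffuse factor in [0,1], or 0 if the hit point faces away
// from the light or away from the eye.
float diffuseAt(Vec3 sphereCenter, Vec3 hitPoint, Vec3 lightPos, Vec3 eyePos)
{
    Vec3 normal  = normalize(sub(hitPoint, sphereCenter));  // vector 1 above
    Vec3 toLight = normalize(sub(lightPos, hitPoint));      // vector 2 above
    Vec3 toEye   = normalize(sub(eyePos, hitPoint));

    if (dot(normal, toLight) < 0.0f) return 0.0f;  // light hits the far side
    if (dot(normal, toEye)   < 0.0f) return 0.0f;  // surface faces away from the eye

    // Lambertian term; a real raytracer would also cast a shadow ray here
    // to check for occluders between hitPoint and the light.
    return std::max(0.0f, dot(normal, toLight));
}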
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.

Relative rotation of OpenGL Camera

I am currently struggling to find a formula to rotate my OpenGL "Camera" (I tried to do it via a scene rotation, but I have the same issue).
Basically my camera is at a given position, looking at a given point (both passed to gluLookAt), and I would like to rotate the camera upwards, for example, while still looking at the same point.
What would be the right process?
What input data should I use to decide the amount of movement: the change in 2D mouse coordinates, or the change in the 3D unprojected mouse coordinates?
The trick is to see that a camera-rotation is the same as a scene rotation if you do it at the correct position. Move the camera into the point around which you want to rotate, then rotate the camera, then move back out by the same distance you moved in.
The amount by which you rotate depends on your application. Take G-Earth as an example: if you are close to the surface, the (absolute) rotation is small; if you are far from the surface, it is large.
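
In legacy fixed-function OpenGL, that trick is the usual translate-rotate-translate sandwich; the pivot and angle parameters below are placeholders:

#include <GL/gl.h>

// Rotate the scene around an arbitrary pivot point: move the pivot to the
// origin, rotate there, then move back out by the same amount.
void rotateAroundPoint(float pivotX, float pivotY, float pivotZ,
                       float angleDegrees, float axisX, float axisY, float axisZ)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(pivotX, pivotY, pivotZ);
    glRotatef(angleDegrees, axisX, axisY, axisZ);
    glTranslatef(-pivotX, -pivotY, -pivotZ);
}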
If you're creating an orbiting camera (orbiting around the LookAt point) for OpenGL, I suggest you build it from this data:
LookAtPosition - 3D vector
CamUp - 3D unit vector
RelativeCamPosition - 3D unit vector
CamDistance - decimal number
LookAtPosition is the point you'll be looking at. CamUp is the vector that points up from the camera; you can see it on this image. It's best to initialize the camera at no rotation, so that CamUp = [0,1,0]. Note that it's a unit vector, so its magnitude/size/length is always 1. RelativeCamPosition is again a unit vector. You get it by taking the LookAt-to-camera vector and dividing it by its magnitude, which you'll save in CamDistance. In the initialized state it might look like this:
LookAtPosition = [0,0,0]
CamUp = [0,1,0]
RelativeCamPosition = [1,0,0]
CamDistance = 10
You can now get camera position by
CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
But you need to rotate that camera around, right? Well, there's a reason for unit vectors - they are easy to use in calculations. I believe you use angles for rotating, so you only need sine and cosine. The Rotate function might look like this:
Rotate(angleX, angleY) {
    // Spherical coordinates: angleX is the absolute horizontal (yaw) angle,
    // angleY the absolute vertical (pitch) angle.
    RelativeCamPosition.x = sin(angleX) * cos(angleY);
    RelativeCamPosition.z = cos(angleX) * cos(angleY);
    RelativeCamPosition.y = sin(angleY);
}
where angleX and angleY are absolute (NOT relative) rotations in the horizontal and vertical directions. You should always use absolute rotations because floating-point errors can accumulate while adding. Anyway, I just made those calculations on a scrap of paper, so I hope they're all right.
Edit: I've just noticed that this will work only if your initial state is as I wrote, RelativeCamPosition = [1,0,0]. However, it shouldn't be hard to adapt it so it works for an arbitrary initial state.
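
Put together, a hedged sketch of such an orbit camera (struct and member names are illustrative; gluLookAt takes the eye position, the look-at point and the up vector):

#include <cmath>
#include <GL/glu.h>

struct OrbitCamera {
    float lookAt[3]   = {0.0f, 0.0f, 0.0f};   // LookAtPosition
    float up[3]       = {0.0f, 1.0f, 0.0f};   // CamUp
    float relative[3] = {1.0f, 0.0f, 0.0f};   // RelativeCamPosition (unit vector)
    float distance    = 10.0f;                // CamDistance

    // Absolute horizontal/vertical angles, as recommended above.
    void rotate(float angleX, float angleY) {
        relative[0] = std::sin(angleX) * std::cos(angleY);
        relative[2] = std::cos(angleX) * std::cos(angleY);
        relative[1] = std::sin(angleY);
    }

    void apply() const {
        // CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
        float eyeX = lookAt[0] + relative[0] * distance;
        float eyeY = lookAt[1] + relative[1] * distance;
        float eyeZ = lookAt[2] + relative[2] * distance;
        gluLookAt(eyeX, eyeY, eyeZ,
                  lookAt[0], lookAt[1], lookAt[2],
                  up[0], up[1], up[2]);
    }
};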

OpenGL find distance to a point

I have a virtual landscape with the ability to walk around in first-person. I want to be able to walk up any slope if it is 45 degrees or less. As far as I know, this involves translating your current position out x units then finding the distance between the translated point and the ground. If that distance is x units or more, the user can walk there. If not, the user cannot. I have no idea how to find the distance between one point and the nearest point in the negative y direction. I have programmed this in Java3D, but I do not know how to program this in OpenGL.
Barking this problem at OpenGL is barking up the wrong tree: OpenGL's sole purpose is drawing nice pictures to the screen. It's not a math library!
Depending on your demands there are several solutions. This is how I'd tackle this problem: the normals you calculate for proper shading give you the slope at each point. Say your heightmap (= terrain) is in the XY plane and your gravity vector is g = -Z; then the normal force is terrain_normal(x,y) · g. The normal force is what "pushes" your feet against the ground. Without sufficient normal force, there's not enough friction to convert your muscles' force into a movement perpendicular to the ground. If you look at the normal force formula, you can see that the more the angle between g and terrain_normal(x,y) deviates, the smaller the normal force.
So in your program you could simply test whether the normal force exceeds some threshold; to do it properly, you'd project the exerted friction force onto the terrain and use that as the acceleration vector.
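
For the 45 degree rule specifically, a minimal threshold check might look like this (assuming a unit-length terrain normal and, as in the answer above, +Z as the up direction; isWalkable is a made-up name):

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Walkable if the angle between the terrain normal and the up direction is
// 45 degrees or less, i.e. their cosine is at least cos(45 deg).
bool isWalkable(Vec3 terrainNormal /* assumed unit length */)
{
    const Vec3 up = {0.0f, 0.0f, 1.0f};
    const float cos45 = std::cos(45.0f * 3.14159265f / 180.0f);
    return dot(terrainNormal, up) >= cos45;
}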
If you just have a regular triangulated heightmap you can use barycentric coordinates to interpolate Z values at a given (X,Y) position.
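
A sketch of that interpolation for a single triangle, computing 2D barycentric weights from the X/Y positions (types and names are illustrative):

struct Vertex { float x, y, z; };

// Interpolate the Z value at (px, py) inside the triangle (a, b, c) using
// 2D barycentric coordinates computed from the X/Y positions only.
float interpolateHeight(Vertex a, Vertex b, Vertex c, float px, float py)
{
    float denom = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    float wA = ((b.y - c.y) * (px - c.x) + (c.x - b.x) * (py - c.y)) / denom;
    float wB = ((c.y - a.y) * (px - c.x) + (a.x - c.x) * (py - c.y)) / denom;
    float wC = 1.0f - wA - wB;
    return wA * a.z + wB * b.z + wC * c.z;
}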

point - plane collision without the glutLookAt* functions

As I understand it, it is recommended to use glTranslate / glRotate in favour of gluLookAt. I am not going to seek the reasons beyond the obvious HW vs SW computation mode, but just go with the wave. However, this is giving me some headaches as I do not know exactly how to efficiently stop the camera from breaking through walls. I am only interested in point-plane intersections, not AABBs or anything else.
So, using glTranslates and glRotates means that the viewpoint stays still (at (0,0,0) for simplicity) while the world revolves around it. This means to me that in order to check for any intersection points, I now need to recompute the world's vertex coordinates (which was not needed with the gluLookAt approach) for every camera movement.
As there is no way of obtaining the needed new coordinates from GPU-land, they need to be calculated by hand in CPU-land. For every camera movement... :(
It seems I need to retain the current rotations around each of the 3 axes, and likewise for translations. There is no scaling used in my program. My questions:
1 - is the above reasoning flawed ? How ?
2 - if not, there has to be a way to avoid such recalculations.
The way I see it (and by looking at http://www.glprogramming.com/red/appendixf.html) it needs one matrix multiplication for translations and another one for rotation (only rotation around the y axis is needed). However, having to compute so many additions / multiplications, and especially the sines / cosines, will certainly kill the FPS. There are going to be thousands or even tens of thousands of vertices to compute on. Every frame... all the maths... After having computed the new coordinates of the world, things seem to be very easy - just see if there is any plane that changed the sign of its 'd' (from the plane equation ax + by + cz + d = 0). If it did, use a lightweight cross-product approach to test whether the point lies inside each 'moving' triangle of that plane.
Thanks
edit: I have found out about glGet and I think it is the way to go, but I do not know how to use it properly:
// Retains the current modelview matrix
//glPushMatrix();
glGetFloatv(GL_MODELVIEW_MATRIX, m_vt16CurrentMatrixVerts);
//glPopMatrix();
m_vt16CurrentMatrixVerts is a float[16] which gets filled with 0.f or 8.67453e-13 or something similar. Where am I screwing up?
gluLookAt is a very handy function with absolutely no performance penalty. There is no reason not to use it, and, above all, no "HW vs SW" consideration about that. As Mk12 stated, glRotatef is also done on the CPU. The GPU part is : gl_Position = ProjectionMatrix x ViewMatrix x ModelMatrix x VertexPosition.
"using glTranslates and glRotates means that the viewpoint stays still" -> same thing for gluLookAt
"at (0,0,0) for simplicity" -> not for simplicity, it's a fact. However, this (0,0,0) is in the Camera coordinate system. It makes sense : relatively to the camera, the camera is at the origin...
Now, if you want to prevent the camera from going through the walls, the usual method is to trace a ray from the camera. I suspect this is what you're talking about ("to check for any intersection points"). But there is no need to do this in camera space. You can do this in world space. Here's a comparison :
Tracing rays in camera space : ray always starts from (0,0,0) and goes to (0,0,-1). Geometry must be transformed from Model space to World space, and then to Camera space, which is what annoys you
Tracing rays in world space : ray starts from camera position (in world space) and goes to (eyeCenter - eyePos).normalize(). Geometry must be transformed from Model space to World space.
Note that there is no third option (Tracing rays in Model space) which would avoid to transform the geometry from Model space to World space. However, you have a pair of workarounds :
First, your game's world is probably still : the Model matrix is probably always identity. So transforming its geometry from Model to World space is equivalent to doing nothing at all.
Secondly, for all other objets, you can take the opposite approach. Intead of transforming the entire geometry in one direction, transform only the ray the other way around : Take your Model matrix, inverse it, and you've got a matrix which goes from world space to model space. Multiply your ray's origin and direction by this matrix : your ray is now in model space. Intersect the normal way. Done.
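
A sketch of that second workaround, assuming GLM for the matrix math; the Ray struct and worldRayToModelSpace name are placeholders:

#include <glm/glm.hpp>

struct Ray {
    glm::vec3 origin;
    glm::vec3 direction; // normalized
};

// Bring a world-space ray into an object's model space so the intersection
// test can run against the untransformed geometry.
Ray worldRayToModelSpace(const Ray& worldRay, const glm::mat4& modelMatrix)
{
    glm::mat4 invModel = glm::inverse(modelMatrix);
    Ray modelRay;
    // Points get the full transform (w = 1); directions ignore translation (w = 0).
    modelRay.origin    = glm::vec3(invModel * glm::vec4(worldRay.origin, 1.0f));
    modelRay.direction = glm::normalize(glm::vec3(invModel * glm::vec4(worldRay.direction, 0.0f)));
    return modelRay;
}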
Note that all I've said is standard techniques. No hacks or other weird stuff, just math :)