OpenGL find distance to a point

I have a virtual landscape with the ability to walk around in first-person. I want to be able to walk up any slope if it is 45 degrees or less. As far as I know, this involves translating your current position out x units, then finding the distance between the translated point and the ground. If that distance is x units or more, the user can walk there; if not, the user cannot. I have no idea how to find the distance between one point and the nearest point in the negative y direction. I have programmed this in Java3D, but I do not know how to do it in OpenGL.

Barking this problem at OpenGL is barking up the wrong tree: OpenGL's sole purpose is drawing nice pictures to the screen. It's not a math library!
Depending on your demands there are several solutions. This is how I'd tackle the problem: the normals you calculate for proper shading give you the slope at each point. Say your heightmap (= terrain) is in the XY plane and your gravity vector is g = -Z; then the normal force is terrain_normal(x,y) · g. The normal force is what "pushes" your feet against the ground. Without sufficient normal force, there's not enough friction to convert your muscles' force into a movement along the ground. If you look at the normal force formula, you can see that the more the angle between g and terrain_normal(x,y) deviates, the smaller the normal force.
So in your program you could simply test whether the normal force exceeds some threshold; to do it properly, you'd project the exerted friction force onto the terrain and use that as the acceleration vector.
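A minimal sketch of that threshold test in C++ (the Vec3 type and the 45-degree limit are assumptions from the question; the terrain normal is assumed to be unit length):
#include <cmath>
struct Vec3 { float x, y, z; };
float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
// cos(angle between normal and up) is 1 on flat ground, 0 on a vertical wall.
bool walkable(const Vec3 &terrainNormal) {
    const Vec3 up = {0.0f, 0.0f, 1.0f};  // opposite of the gravity vector g = -Z
    const float cosLimit = std::cos(45.0f * 3.14159265f / 180.0f);
    return dot(terrainNormal, up) >= cosLimit;
}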

If you just have a regular triangulated heightmap, you can use barycentric coordinates to interpolate Z values from a given (X,Y) position.
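For example, a sketch of that interpolation in C++ (triangle vertices a, b, c are placeholders; Vec3 as in the sketch above):
// Height at (x, y) inside triangle (a, b, c), via barycentric weights.
float heightAt(float x, float y, const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    float det = (b.y - c.y)*(a.x - c.x) + (c.x - b.x)*(a.y - c.y);
    float w0 = ((b.y - c.y)*(x - c.x) + (c.x - b.x)*(y - c.y)) / det;
    float w1 = ((c.y - a.y)*(x - c.x) + (a.x - c.x)*(y - c.y)) / det;
    float w2 = 1.0f - w0 - w1;   // the three weights sum to 1
    return w0*a.z + w1*b.z + w2*c.z;
}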

Related

Rendering an atmosphere around a planet with shading

I have made a planet and wanted to make an atmosphere around it, so I was following an article on atmospheric scattering.
I don't understand this:
As with the lookup table proposed in Nishita et al. 1993, we can get the optical depth for the ray to the sun from any sample point in the atmosphere. All we need is the height of the sample point (x) and the angle from vertical to the sun (y), and we look up (x, y) in the table. This eliminates the need to calculate one of the out-scattering integrals. In addition, the optical depth for the ray to the camera can be figured out in the same way, right? Well, almost. It works the same way when the camera is in space, but not when the camera is in the atmosphere. That's because the sample rays used in the lookup table go from some point at height x all the way to the top of the atmosphere. They don't stop at some point in the middle of the atmosphere, as they would need to when the camera is inside the atmosphere.
Fortunately, the solution to this is very simple. First we do a lookup from sample point P to the camera to get the optical depth of the ray passing through the camera to the top of the atmosphere. Then we do a second lookup for the same ray, but starting at the camera instead of starting at P. This will give us the optical depth for the part of the ray that we don't want, and we can subtract it from the result of the first lookup. Examine the rays starting from the ground vertex (B 1) in Figure 16-3 for a graphical representation of this.
First Question - isn't optical depth dependent on how you see it, that is, on the viewing angle? If yes, the table just gives me the optical depth of rays going from the ground to the top of the atmosphere in a straight line. So what about the case where the rays pierce the atmosphere to reach the camera? How do I get the optical depth in that case?
Second Question - what is the vertical angle it is talking about? Is it the same as the angle with the z-axis that we use in polar coordinates?
Third Question - the article talks about scattering of the rays going to the sun. Shouldn't it be the other way around, i.e. rays coming from the sun to a point?
Any explanation on the article or on my questions will help a lot.
Thanks in advance!
I am no expert in the matter, but I have played with atmospheric scattering and various physical and optical simulations. I strongly recommend looking at this:
my VEEERRRYYY Simplified version of atmospheric scattering in GLSL
It does not do the full volume integration, just a linear path integration along the ray, and it handles only Rayleigh scattering with isotropic coefficients. As you can see, it's still good enough.
In real scattering the viewing angle does impact the scattering equation, as the scattering coefficients differ with the angle (against the main light source and the viewer). So the answer to your first question is: yes, it does.
Not sure what you are referring to in your second question. The scattering itself depends on the angle between light source, particle and camera, which lies in an arbitrary plane. However, if the Earth's surface is accounted for in the equation too, then it depends on the horizontal and vertical angles (against the terrain), i.e. azimuth and elevation, as usually more light is reflected when the camera is facing the sun (azimuth) and the reflected rays are close to your elevation. So my guess is that's what the horizontal angle is about: accounting for light reflected from the surface.
Your third question is about what is called backward ray tracing. You can cast rays both ways (from the camera or from the sun); however, if you start from the light source you do not know which way to go to hit a pixel on the camera screen, so you need to cast a lot of rays to raise the probability of a hit enough to fill the screen, which is far too slow and inaccurate (it produces holes). If you start from a screen pixel instead, you cast just a single ray (or one per wavelength), which is much, much faster. The resulting color is the same.
[Edit1] vertical angle
OK, I read the linked topic a bit, and this is how I understand it:
So it's just the angle between the surface normal and the cast ray, scaled so that vert.angle = 0 means the ray and the normal point the same way, and vert.angle = 1 means they point in opposite directions.
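In code that would be something like this (my guess, in C++; both vectors assumed unit length):
#include <cmath>
// 0 = ray and normal point the same way, 1 = opposite directions
float verticalAngle(float nx, float ny, float nz, float rx, float ry, float rz) {
    float c = nx*rx + ny*ry + nz*rz;      // dot product = cos(angle)
    return std::acos(c) / 3.14159265f;    // map 0..pi onto 0..1
}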

Angles of quad in 3D space

I'm working on a physics simulation of projectiles, and I'm stuck on the ground collision. I have a ground made of quads, and
I have the quads' points stored in an array, so I thought that if I take the quad where the collision happens and calculate the angles of the quad (in the x and z directions), I can then use that to change the velocity of the projectile.
This is where I'm stuck. I thought I should find the lowest and the highest point, then find the vector between them, but this will not give the angles in all directions, which is what I want. I know there must be a way of doing this, but how?
What you want is the normal of the quad.
Here's an answer that shows you how to get a quad's normal
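In case the link dies, here is the idea as a rough C++ sketch (Vec3 stands in for whatever vector type you use; the quad is assumed planar, so three corners suffice):
#include <cmath>
struct Vec3 { float x, y, z; };
Vec3 quadNormal(const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    Vec3 e1 = {b.x - a.x, b.y - a.y, b.z - a.z};   // edge a -> b
    Vec3 e2 = {c.x - a.x, c.y - a.y, c.z - a.z};   // edge a -> c
    Vec3 n = {e1.y*e2.z - e1.z*e2.y,               // cross product e1 x e2
              e1.z*e2.x - e1.x*e2.z,
              e1.x*e2.y - e1.y*e2.x};
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return {n.x/len, n.y/len, n.z/len};            // normalize
}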
Once you have the normal, you need to calculate the collision response force. Its direction is the quad's normal, and its magnitude is the force the projectile exerts in the direction of the quad, calculated via the dot product of the projectile's velocity and the reversed quad normal (here's a wiki link for the dot product).
The response vector should be this:
// Impact strength along the reversed quad normal
Vector3 responseForce = dot(projectile.vel, -1 * quad.normal) * quad.normal;
// Cancel the velocity component pointing into the quad
projectile.vel += responseForce;

How to preserve the perception of angle between walls when the camera is rotating in OpenGL?

I am building a number of rooms with different shapes: parallelogram, rectangle, rhombus, square, etc. The viewer is supposed to look at the rooms from different corners, turn his head right and left, and guess the shape of the room. So here "the perception of the angles between walls" is very important. My problems are these:
1) most of the acute angles seem to be 90 degrees when seen from a distance,
2) the angles between walls, as well as the lengths of the walls, seem to change when the viewer turns his head left or right.
From what I have read so far, this is a consequence of using perspective projection; however, with orthographic projection I would have no perception of depth on the screen, and since I am inside the room, the room would be bigger than the clipped area, which produces a quite rubbish image.
I just want to know: is there any way to avoid, or at least minimize, this deformation effect? Should I build my own projection (something between glOrtho and gluPerspective)?
It is also worth mentioning that I use gluLookAt for positioning the camera and the look-at point. The eye position is always one of the room's corners, and the initial look-at point is the corner opposite the viewer. By pressing the right and left arrow keys, the look-at point moves on an imaginary circle surrounding the room, just like in most of the OpenGL programs I have seen so far. Do you think it would be better if I moved the look-at point along the walls? Or rotated the room instead of changing the look-at point?
I added some pictures for a better illustration of my problem:
This is my parallelogram room:
parallelogram.png
As you can see, the acute angle, which is supposed to be 60 degrees, seems to be at least 90 degrees. And this is my rectangle room, which doesn't give you the sense of being in a rectangular room at all:
Rectangle.png
I'd say, without seeing the problem in action, that your camera FOV is too large.
From your comment, your camera has a 112.5° horizontal FOV, which I believe introduces 'unnatural distortions' at the edges of the screen: simple OpenGL perspective projection is linear and suffers from too-wide FOVs.
You may want to take a look at this article comparing cameras to human eyes, as you want to gauge the perception of your users.
You should try to reduce the horizontal FOV of your camera, perhaps in the 80-90° range to see if it helps.
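Note that gluPerspective takes the vertical FOV, so you have to convert through the aspect ratio. A sketch (the 90° target, the 16:9 aspect and the clip planes are placeholder values; this goes wherever you set up your projection):
#include <cmath>
#include <GL/glu.h>
double horizontalDeg = 90.0;               // desired horizontal FOV
double aspect = 16.0 / 9.0;                // viewport width / height
// tan(vFov/2) = tan(hFov/2) / aspect
double verticalRad = 2.0 * std::atan(std::tan(horizontalDeg * M_PI / 360.0) / aspect);
gluPerspective(verticalRad * 180.0 / M_PI, aspect, 0.1, 1000.0);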
As a last resort, you could switch to non-linear projections, but you should make an educated choice before doing so; there should be psychophysics research available (such as this) that may help you.

convert 2D screen coordinates to 3D space coordinates in C++?

I'm writing a 2D game using a wrapper of OpenGL ES. There is a camera aiming at a bunch of textures, which are the sprites for the game. The user should be able to move the view around by moving their fingers around on the screen. The problem is, the camera is about 100 units away from the textures, so when the finger is slid across the screen to pan the camera, the sprites move faster than the finger due to the parallax effect.
So basically, I need to convert 2D screen coordinates to 3D coordinates at a specific z distance away (in my case 100, because that's how far away the textures are).
There are some "Unproject" functions in C#, but I'm using C++, so I need the math behind this function. I'm extremely new to 3D stuff and I'm very bad at math, so if you can explain like you are explaining to a 10-year-old, that would be much appreciated.
If I can do this, I can pan the camera at such a speed that it looks like the distant sprites are panning with the user's finger.
For picking purposes, there are better ways than doing a reverse projection. See this answer: https://stackoverflow.com/a/1114023/252687
In general, you will have to scale your finger-movement distance to use it on a plane that is far away (z units away).
I.e., if l is the amount of finger movement and you want to find the effect z units away, the length is l' = l/z.
But please check the effect and adjust l' (double/halve it, etc.) to get the desired result.
Found the answer at:
Wikipedia
It has the following formula:
To determine which screen x-coordinate corresponds to a point at (Ax, Az), multiply the point coordinates as follows:
Bx = Ax * Bz / Az
where
Bx is the screen x coordinate,
Ax is the model x coordinate,
Bz is the focal length (the axial distance from the camera center to the image plane),
Az is the subject distance.
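Applied to the panning problem, the same idea gives the world-space size of one pixel at the sprite plane; scaling the finger delta by it makes sprites at z = 100 track the finger exactly. A sketch in C++ (the FOV and screen height arguments are placeholders for whatever your projection setup uses):
#include <cmath>
// World units covered by one screen pixel at distance z from the camera,
// for a symmetric perspective projection with vertical FOV fovyDeg.
float worldUnitsPerPixel(float z, float fovyDeg, float screenHeightPx) {
    float visibleHeight = 2.0f * z * std::tan(fovyDeg * 3.14159265f / 360.0f);
    return visibleHeight / screenHeightPx;
}
// Usage: cameraX -= fingerDeltaXPx * worldUnitsPerPixel(100.0f, 60.0f, 768.0f);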

point-plane collision without the gluLookAt* functions

As I have understood, it is recommended to use glTranslate/glRotate in favour of gluLookAt. I am not going to seek the reasons beyond the obvious HW vs SW computation mode, but will just go with the wave. However, this is giving me some headaches, as I do not know exactly how to efficiently stop the camera from breaking through walls. I am only interested in point-plane intersections, not AABB or anything else.
So, using glTranslate and glRotate means that the viewpoint stays still (at (0,0,0) for simplicity) while the world revolves around it. This means that in order to check for any intersection points, I now need to recompute the world's vertex coordinates (which was not needed with the gluLookAt approach) for every camera movement.
As there is no way of obtaining the needed new coordinates from GPU-land, they need to be calculated by hand in CPU-land. For every camera movement... :(
It seems there is a need to retain the current rotations about each of the 3 axes, and the same for the translations. There is no scaling used in my program. My questions:
1 - is the above reasoning flawed? How?
2 - if not, there has to be a way to avoid such recalculations.
The way I see it (and by looking at http://www.glprogramming.com/red/appendixf.html), it needs one matrix multiplication for the translations and another one for the rotation (only the one about the y axis is needed). However, having to compute so many additions/multiplications, and especially the sines/cosines, will certainly kill the FPS. There are going to be thousands or even tens of thousands of vertices to compute on. Every frame... all the maths... After having computed the new coordinates of the world, things seem to be very easy: just see if there is any plane that changed the sign of its 'd' (from the plane equation ax + by + cz + d = 0). If it did, use a lightweight cross-product approach to test whether the point is inside each 'moving' triangle of that plane, as sketched below.
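For concreteness, the cross-product test I mean would look roughly like this (types are placeholders; p is assumed to already lie in the triangle's plane):
struct Vec3 { float x, y, z; };
Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
// p is inside triangle (a, b, c) iff it is on the same side of all three edges.
bool inTriangle(const Vec3 &p, const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    Vec3 n1 = cross(sub(b, a), sub(p, a));
    Vec3 n2 = cross(sub(c, b), sub(p, b));
    Vec3 n3 = cross(sub(a, c), sub(p, c));
    return dot(n1, n2) >= 0.0f && dot(n2, n3) >= 0.0f;
}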
Thanks
edit: I have found out about glGet and I think it is the way to go, but I do not know how to use it properly:
// Retrieve the current modelview matrix (column-major float[16])
//glPushMatrix();
glGetFloatv(GL_MODELVIEW_MATRIX, m_vt16CurrentMatrixVerts);
//glPopMatrix();
m_vt16CurrentMatrixVerts is a float[16] which gets filled with 0.f, 8.67453e-13, or similar values. Where am I screwing up?
gluLookAt is a very handy function with absolutely no performance penalty. There is no reason not to use it, and, above all, no "HW vs SW" consideration about that. As Mk12 stated, glRotatef is also done on the CPU. The GPU part is : gl_Position = ProjectionMatrix x ViewMatrix x ModelMatrix x VertexPosition.
"using glTranslates and glRotates means that the viewpoint stays still" -> same thing for gluLookAt
"at (0,0,0) for simplicity" -> not for simplicity, it's a fact. However, this (0,0,0) is in the Camera coordinate system. It makes sense : relatively to the camera, the camera is at the origin...
Now, if you want to prevent the camera from going through the walls, the usual method is to trace a ray from the camera. I suspect this is what you're talking about ("to check for any intersection points"). But there is no need to do this in camera space. You can do this in world space. Here's a comparison:
Tracing rays in camera space: the ray always starts from (0,0,0) and goes towards (0,0,-1). Geometry must be transformed from Model space to World space, and then to Camera space, which is what annoys you.
Tracing rays in world space: the ray starts from the camera position (in world space) and goes towards (eyeCenter - eyePos).normalize(). Geometry must be transformed from Model space to World space.
Note that there is no third option (tracing rays in Model space) which would avoid transforming the geometry from Model space to World space. However, you have a pair of workarounds:
First, your game's world is probably static: the Model matrix is probably always the identity. So transforming its geometry from Model to World space is equivalent to doing nothing at all.
Secondly, for all other objects, you can take the opposite approach. Instead of transforming the entire geometry in one direction, transform only the ray the other way around: take your Model matrix, invert it, and you've got a matrix which goes from world space to model space. Multiply your ray's origin and direction by this matrix: your ray is now in model space. Intersect the normal way. Done.
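A sketch of that inverse-transform trick, here written with the GLM math library (an assumption; any 4x4 matrix type works the same way):
#include <glm/glm.hpp>
struct Ray { glm::vec3 origin, dir; };
// Move a world-space ray into an object's model space instead of
// transforming the object's geometry into world space.
Ray toModelSpace(const Ray &worldRay, const glm::mat4 &model) {
    glm::mat4 inv = glm::inverse(model);
    Ray r;
    r.origin = glm::vec3(inv * glm::vec4(worldRay.origin, 1.0f));           // point: w = 1
    r.dir = glm::normalize(glm::vec3(inv * glm::vec4(worldRay.dir, 0.0f))); // direction: w = 0
    return r;
}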
Note that everything I've described is a standard technique. No hacks or other weird stuff, just math :)