When working with OpenGL perspectives, I am not quite sure how to compute the fovx from the fovy. I have read different things in different places, and I don't get the right behavior with either method I found. So can someone please tell me: given the fovy of the camera and the aspect ratio, how do I calculate the fovx of the camera? If any other data is needed, that's fine; just let me know what I need. Thank you so much in advance!
Correct:
fieldOfViewX = 2 * atan(tan(fieldOfViewY * 0.5) * aspect)
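For illustration, here is a minimal C++ sketch of that conversion (the function and parameter names are my own):
#include <cmath>

// Horizontal field of view (degrees) from the vertical field of view
// (degrees) and the aspect ratio (width / height).
double fovyToFovx(double fovyDegrees, double aspect)
{
    const double PI = 3.14159265358979;
    double fovyRadians = fovyDegrees * PI / 180.0;
    return 2.0 * std::atan(std::tan(fovyRadians * 0.5) * aspect) * 180.0 / PI;
}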
Wrong, especially for large aspect ratios (see @gman's comments below):
Aspect ratio === width/height
fovy ~= "height"
==> fovx = fovy * aspect
Test:
Fovy = 60 degs
Aspect = 4:3 ~= 1.33
Fovx = 60*1.33 = 80
80/60 = 4:3 (fovs match, yay)
For "reasonable" fovs/aspects, simple method is "reasonably" near the truth, but if you have extreme aspects you will get fovx > 180 degrees, which you don't want.
Here is a good link:
http://wiki.panotools.org/Field_of_View
Note that the aspect ratio is not the same thing as the field of view ratio, and the proper relationship given on this page should be used for relating field of view angles.
Java:
// Distance from the eye to the viewport plane, derived from the vertical FOV.
double d = (viewportHeight * 0.5) / Math.tan(Math.toRadians(fovY * 0.5));
// Horizontal FOV from the same eye distance and half the viewport width.
fovX = (float) (2 * Math.toDegrees(Math.atan((viewportWidth * 0.5) / d)));
Have you looked at the formulas here?
http://en.wikipedia.org/wiki/Angle_of_view
I'm having some issues with my camera: the near plane seems to be too far away even when I set it to 0.1 or lower. It seems like there is already some offset value, so you can't really get close enough to an arbitrary object in the scene.
Below is a screenshot of the bug; the black triangle shown is the clipping.
Currently I'm using a perspective matrix, and here is the code for that:
Matrix4x4 Matrix4x4::Perspective(Float fov, Float aspectRatio, Float near, Float far)
{
    Matrix4x4 result(1.0f);
    // Cotangent of half the vertical field of view.
    Float q = 1.0f / tan(toRadians(0.5f * fov));
    Float a = q / aspectRatio;
    // Depth terms of the standard OpenGL perspective matrix.
    Float b = (near + far) / (near - far);
    Float c = (2.0f * near * far) / (near - far);
    result.elements[0 + 0 * 4] = a;     // x scale
    result.elements[1 + 1 * 4] = q;     // y scale
    result.elements[2 + 2 * 4] = b;     // depth mapping
    result.elements[2 + 3 * 4] = -1.0f; // w = -z for the perspective divide
    result.elements[3 + 2 * 4] = c;     // depth offset
    return result;
}
I don't think the bug is in the maths class, because the maths code is mostly from another project I've been working on, where it works perfectly fine.
I also don't suspect the way I render it (a forward renderer). I believe my pointers for that are good, since I am able to move and rotate the camera via mouse and keyboard.
What I do suspect is the buffers: the OpenGL buffers. But I'm not entirely sure.
I hope someone can give me some advice on how to hunt down and tackle this bug.
Thank you in advance.
Well, one image is worth more than 1000 words, so:
As I mentioned in my comments, if your cube has half-size r = 1.0 and you set up your camera at distance r from the cube's center, you will still get cut by z_near, because the cube's edges lie at distances between sqrt(2)·r ≈ 1.41 and sqrt(3)·r ≈ 1.73 from the center, so all edges turned towards the camera will get cut off...
But, as also mentioned, this is just a guess, because you did not provide any test data relevant to this (matrices and mesh content)...
PS: on the right side is a side view of the cube with the camera settings I am guessing you have. The green (+/-)Z is your viewing direction.
"glEnable(GL_DEPTH_CLAMP)" Solved The Issue
Rails 4 + Postgres. New to geospatial. Happy to accept solutions that involve RGeo, Geokit, Geocoder, or any other gem that helps solve this issue.
My model contains two fields, latitude and longitude.
I have an offset attribute that contains a distance in meters and an orientation attribute that contains one of the 4 cardinal directions (N, E, W, S).
Example:
offset: 525.5 orientation: W
What's a standard way of adding the offset distance to the lat-long position, to get a new lat-long pair as the result?
For small offsets such as a few hundred metres:
You can handle the N&S orientations knowing that:
R * (lat1-lat2)= NorthSouth distance
where R is the Earth radius (6335km).
You can handle the E&W orientations knowing that:
R * cos(lat)* (lon1-lon2) = EastWest distance.
I'm sorry, I don't speak Ruby, but it should be pretty easy to translate this C-style code:
double R = 6335000.0;    // Earth radius, in metres
double PI = 3.14159265;  // your compiler may have a better constant/macro
if (orientation == 'N' || orientation == 'S')
{
    double x = offset * 180.0 / (PI * R);
    if (orientation == 'S')
        x = -x;
    newLatitude = latitude + x;
}
else
{
    // cos() expects radians, so convert the latitude first.
    double x = offset * 180.0 / (PI * R * cos(latitude * PI / 180.0));
    if (orientation == 'W')
        x = -x;
    newLongitude = longitude + x;
}
It took a little bit of digging, but it turns out there is a single library function (accounting for curvature, geometry, projection, location and mathematical assumptions) that adds a distance to a specific surface position.
Function: ST_Project
Used below:
SELECT ST_AsGeoJSON(ST_Project('POINT(longitude latitude)'::geography, offset, radians(orientation)))
It's from PostGIS and is therefore usable from Ruby/Rails, though not yet as native Ruby objects (the gems haven't wrapped it yet), but as a PostGIS query instead.
offset is in meters. orientation is in degrees ('N' = 0, 'E' = 90, etc.); the radians() call is there because ST_Project expects the azimuth in radians.
Hopefully, this solution helps others looking for the same thing.
I am trying to optimize the simulation function in my experiment so I can have more artificial brain-controlled agents running at a time. I profiled my code and found that the big bottleneck right now is computing the relative angle from every agent to every other agent, which is O(n²), minus some small optimizations I have done. Here is the current code snippet I have for computing the angle:
[C++]
double calcAngle(double fromX, double fromY, double fromAngle, double toX, double toY)
{
    // Unit vector U from the "from" agent towards the "to" agent
    // (assumes the two positions are distinct, otherwise d is zero).
    double d = sqrt( calcDistanceSquared(fromX, fromY, toX, toY) );
    double Ux = (toX - fromX) / d;
    double Uy = (toY - fromY) / d;
    // Unit vector V for the "from" agent's facing direction.
    double Vx = cos(fromAngle * (cPI / 180.0));
    double Vy = sin(fromAngle * (cPI / 180.0));
    // Signed relative angle via atan2(cross, dot), converted to degrees.
    return atan2(((Ux * Vy) - (Uy * Vx)), ((Ux * Vx) + (Uy * Vy))) * 180.0 / cPI;
}
I have two 2D points, (x1, y1) and (x2, y2), and the facing of the "from" point (fromAngle in the code above). I want to compute the angle that agent x needs to turn (relative to its current facing) to face agent y.
According to the profiler, the most expensive part is the atan2. I have Googled for hours, and the above solution is the best one I could find. Does anyone know of a more efficient way to compute the angle between two points? I am willing to sacrifice a little accuracy (+/- 1-2 degrees) for speed, if that helps.
As has been mentioned in the comments, there are probably high-level approaches to reduce your computational load.
But to the question in hand, you can just use the dot-product relationship:
theta = acos ( a . b / ||a|| ||b|| )
where a and b are your vectors, . denotes "dot product" and || || denotes "vector magnitude".
Essentially, this will replace your {sqrt, cos, sin, atan2} with {sqrt, acos}.
I would also suggest sticking to radians for all internal calculations, only converting to and from degrees for human-readable I/O.
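A minimal C++ sketch of that substitution (function and parameter names are my own; note that acos returns an unsigned angle in [0, pi], so the left/right sign of the turn is lost):
#include <cmath>

// Unsigned angle (radians) between the unit facing vector (vx, vy) and
// the direction from (fromX, fromY) to (toX, toY).
double calcAngleAcos(double fromX, double fromY, double vx, double vy,
                     double toX, double toY)
{
    double ux = toX - fromX;
    double uy = toY - fromY;
    double lenU = std::sqrt(ux * ux + uy * uy);
    double c = (ux * vx + uy * vy) / lenU;
    // Guard against floating-point drift outside [-1, 1].
    if (c > 1.0) c = 1.0;
    if (c < -1.0) c = -1.0;
    return std::acos(c);
}
Precomputing each agent's unit facing vector once per tick also removes the per-pair sin/cos calls from the original code.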
Your comment tells a lot: "I am simulating a 180 degree frontal retina for every agent, so I need the angle". No, you don't. You just need to know whether the angle between the position vector and the vision vector is more or less than 90 degrees.
That's very easy: the dot product A·B is > 0 if the angle between A and B is less than 90 degrees, 0 if the angle is exactly 90 degrees, and < 0 if the angle is more than 90 degrees. For these 2D vectors, calculating it takes two multiplications and one addition.
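A minimal C++ sketch of that test (function and parameter names are my own):
// True if the "to" point lies in the 180-degree frontal field of an agent
// at (fromX, fromY) with unit facing vector (vx, vy).
bool inFrontalField(double fromX, double fromY, double vx, double vy,
                    double toX, double toY)
{
    return (toX - fromX) * vx + (toY - fromY) * vy > 0.0;
}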
I think it's more of a mathematical problem. Try:
abs(arctan((y1-yfrom)/(x1-xfrom)) - arctan(1/((y2-yfrom2)/(x2-xfrom2))))
Use the dot product of these two vectors and at worst you need to do an inverse cosine instead:
A = Facing direction. B = Direction of Agent Y from Agent X
Calculating the dot product is simple multiplication and addition; from that you have the cosine of the angle.
For starters, you should realize that there are a couple of simplifications that can reduce the calculations a bit (see the sketch after this list):
You need not calculate the angle from an agent to itself.
If you have the angle from agent i to agent j, you already know something about the angle from agent j back to agent i, since the direction vector is simply reversed.
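Here is a rough C++ sketch of how those two observations shrink the work (the Agent struct and pre-sized angle matrix are my own hypothetical structure; angles come out in radians):
#include <cmath>
#include <vector>

struct Agent { double x, y, fx, fy; }; // position and unit facing vector

// Fills an n-by-n matrix of signed relative angles (radians), visiting
// each unordered pair only once and skipping an agent's angle to itself.
void computeAngles(const std::vector<Agent>& agents,
                   std::vector<std::vector<double>>& angle)
{
    const std::size_t n = agents.size();
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i + 1; j < n; ++j) {
            double ux = agents[j].x - agents[i].x;
            double uy = agents[j].y - agents[i].y;
            double inv = 1.0 / std::sqrt(ux * ux + uy * uy);
            ux *= inv; uy *= inv;
            // i -> j uses (ux, uy); j -> i reuses the negated direction.
            angle[i][j] = std::atan2(ux * agents[i].fy - uy * agents[i].fx,
                                     ux * agents[i].fx + uy * agents[i].fy);
            angle[j][i] = std::atan2(-ux * agents[j].fy + uy * agents[j].fx,
                                     -ux * agents[j].fx - uy * agents[j].fy);
        }
    }
}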
I have to ask: what does "agent i turns to face agent j" mean? If the two agents are looking right at each other, do you have to do a calculation? What tolerance do you have on "looking right at each other"?
It would be easier to recommend what to do if you stopped focusing on the mathematics and described the problem more fully.
So, I've got an imposter (the real geometry is a cube, possibly clipped, and the imposter geometry is a Menger sponge) and I need to calculate its depth.
I can calculate the amount to offset in world space fairly easily. Unfortunately, I've spent hours failing to perturb the depth with it.
The only correct results I can get are when I go:
gl_FragDepth = gl_FragCoord.z
Basically, I need to know how gl_FragCoord.z is calculated so that I can:
1. Take the inverse transformation from gl_FragCoord.z to eye space,
2. Add the depth perturbation, and
3. Transform this perturbed depth back into the same space as the original gl_FragCoord.z.
I apologize if this seems like a duplicate question; there are a number of other posts here that address similar things. However, after implementing all of them, none works correctly. Rather than trying to pick one to get help with, at this point I'm asking for complete code that does it. It should only be a few lines.
For future reference, the key code is:
float near = gl_DepthRange.near;
float far  = gl_DepthRange.far;
// Transform into clip space...
vec4 eye_space_pos  = gl_ModelViewMatrix * /*something*/
vec4 clip_space_pos = gl_ProjectionMatrix * eye_space_pos;
// ...then perspective-divide to get NDC depth in [-1, 1]...
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
// ...and apply the viewport depth transform to map it into [near, far].
float depth = (((far - near) * ndc_depth) + near + far) / 2.0;
gl_FragDepth = depth;
For another future reference, this is the same formula as given by imallett, which was working for me in an OpenGL 4.0 application:
vec4 v_clip_coord = modelview_projection * vec4(v_position, 1.0);
float f_ndc_depth = v_clip_coord.z / v_clip_coord.w;
gl_FragDepth = (1.0 - 0.0) * 0.5 * f_ndc_depth + (1.0 + 0.0) * 0.5;
Here, modelview_projection is the 4x4 modelview-projection matrix and v_position is the object-space position of the pixel being rendered (in my case calculated by a raymarcher).
The equation comes from the window-coordinates section of this manual. Note that in my code, near is 0.0 and far is 1.0, which are the default values of gl_DepthRange. Note that gl_DepthRange is not the same thing as the near/far distances in the formula for the perspective projection matrix! The only trick is using 0.0 and 1.0 (or gl_DepthRange, in case you actually need to change it); I was struggling for an hour with the other depth range, but that one is already "baked" into my (perspective) projection matrix.
Note that this way, the equation really contains just a single multiply by a constant ((far - near) / 2) and a single addition of another constant ((far + near) / 2). Compare that to the multiply, add, and divide (the divide possibly converted to a multiply by an optimizing compiler) required in imallett's code.
I need to get an up vector for a camera (to get the right look) from roll, pitch, and yaw angles (in degrees). I've been trying different things for a couple of hours and have had no luck :(. Any help here would be appreciated!
Roll, pitch and yaw define a rotation about three axes. From these angles you can construct a 3x3 transformation matrix which expresses this rotation (see here how).
After you have this matrix, you take your regular up vector, say (0,1,0) if 'up' is the Y axis, and multiply it with the matrix. What you'll get is the transformed up vector.
Edit:
Applying the transformation to (0,1,0) is the same thing as taking the middle row of the matrix. The three rows of the matrix make up an orthogonal basis of the rotated system. Mind you that 3D graphics APIs use 4x4 matrices, so to make a 4x4 matrix out of the 3x3 rotation matrix, you need to add a '1' at M[3][3] (the corner) and zeros in the rest, like so:
r r r 0
r r r 0
r r r 0
0 0 0 1
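A minimal C++ sketch of that idea (the helper is my own; it assumes column vectors and the rotation order R = Ry(yaw) * Rx(pitch) * Rz(roll), under which the result is the middle column of R rather than the middle row, so verify against your engine's conventions):
#include <cmath>

struct Vec3 { double x, y, z; };

// Up vector for Euler angles applied as R = Ry(yaw) * Rx(pitch) * Rz(roll)
// (roll first, then pitch, then yaw), i.e. R * (0,1,0) written out by hand.
// Angles are in degrees.
Vec3 upFromEuler(double yawDeg, double pitchDeg, double rollDeg)
{
    const double d2r = 3.14159265358979 / 180.0;
    double cy = cos(yawDeg * d2r),   sy = sin(yawDeg * d2r);
    double cp = cos(pitchDeg * d2r), sp = sin(pitchDeg * d2r);
    double cr = cos(rollDeg * d2r),  sr = sin(rollDeg * d2r);
    Vec3 up;
    up.x = -cy * sr + sy * sp * cr;
    up.y =  cp * cr;
    up.z =  sy * sr + cy * sp * cr;
    return up;
}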
This may not directly answer your question, although it may still help. I have a free open-source project for XNA that creates a debug terminal that overlays your game while it is running. You can use this for looking up values, invoking methods, or whatever. So if you have a transformation matrix and you wanted to extract various parts of it while the game is running, you can do that. The project can be found here:
http://www.protohacks.net/xna_debug_terminal
I don't have much expertise in the kind of math you are using, but hopefully Shoosh's post helps with that. Maybe the debug terminal can help you when trying out his idea, or with other problems you encounter in the future.
12 years later...
In case anyone is still interested in the answer to this question, here is the solution (even though it's in Java, it should be pretty easy to translate into other languages):
private Vector3f getRayFromCamera() {
    // Same math as before, with the radian conversions pulled out;
    // the pitch is shifted by 90 degrees as in the original.
    double yaw = Math.toRadians(getYaw());
    double pitch = Math.toRadians(getPitch()) - Math.toRadians(90);
    float vertical = (float) -Math.cos(pitch);
    float horizontal = 1.0f - Math.abs(vertical);
    float rx = (float) (-Math.sin(yaw) * horizontal);
    float ry = vertical;
    float rz = (float) (-Math.cos(yaw) * horizontal);
    return new Vector3f(rx, ry, rz);
}
Note: this calculates the front vector, but by applying the same rotation to the vector (0,1,0) you can change that and get the up vector instead!