Packing the normal vector and tangent vector - opengl

In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this is probably not the best solution, but I had no time to deal with it.
Recently I read "A Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
    return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}
// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
    // Project the sphere onto the octahedron, and then onto the xy plane
    vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
    // Reflect the folds of the lower hemisphere over the diagonals
    return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
    vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
    if (v.z < 0.0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
Since I have now implemented an anisotropic lighting model, I need to store the tangent vector as well as the normal vector. I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient compromise for packing a unit normal vector and a unit tangent vector into a buffer?
Of course it would be easy with the algorithms from the paper to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way that either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that the two vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I at least improve the accuracy when packing them into a GL_RGBA16_SNORM buffer?

The following considerations are purely mathematical, and I have no experience with their practicality. However, I think that Option 2 in particular might be a viable candidate.
Both of the following options state the problem in the same way: given a normal (which you can reconstruct using ONV), how can the tangent be encoded with a single number?
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) by which you need to rotate this vector about the normal to match the tangent direction (after projecting it onto the tangent plane, of course). You will need two (or even three) different reference vectors and must choose the correct one depending on the normal, because you don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option, but that would need measuring. In return, you get a uniform error distribution.
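A minimal GLSL sketch of this idea might look as follows; the reference-vector selection, the threshold and the function names here are my own choices, and the encoder and decoder must of course make the same reference choice:
// Pick a reference vector that is guaranteed not to be parallel to the normal
// (the 0.9 threshold is an arbitrary choice).
vec3 pickReference( vec3 n )
{
    return (abs(n.z) < 0.9) ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
}
// Encode the tangent as the signed rotation angle about the normal, normalized to [-1, 1].
float encodeTangentAngle( vec3 n, vec3 t )
{
    vec3 r = normalize(pickReference(n) - n * dot(n, pickReference(n))); // reference projected onto the tangent plane
    vec3 b = cross(n, r);                                                // second axis in the tangent plane
    return atan(dot(t, b), dot(t, r)) / 3.14159265;
}
// Decode by rebuilding the same in-plane basis and rotating it by the stored angle.
vec3 decodeTangentAngle( vec3 n, float a )
{
    vec3 r = normalize(pickReference(n) - n * dot(n, pickReference(n)));
    vec3 b = cross(n, r);
    float angle = a * 3.14159265;
    return cos(angle) * r + sin(angle) * b;
}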
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
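To make the "simple arithmetic" concrete: cross((1, 1, x), normal) is parallel to the tangent exactly when (1, 1, x) is orthogonal to the tangent, i.e. when t.x + t.y + x * t.z = 0. A minimal GLSL sketch for the single template (1, 1, x) could then look like this; handling of the degenerate case t.z ≈ 0 and of a possibly flipped sign of the reconstructed tangent is left to the additional vector templates mentioned above:
float encodeTangentX( vec3 n, vec3 t )
{
    // Solve t.x + t.y + x * t.z = 0 so that (1, 1, x) is orthogonal to the tangent.
    return -(t.x + t.y) / t.z;
}
vec3 decodeTangentX( vec3 n, float x )
{
    return normalize(cross(n, vec3(1.0, 1.0, x)));
}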

Related

Project a 3D vertex to screen coordinates independently from OpenGL?

I have a vertex (x, y, z) and I want to calculate the screen location where this point would be rendered on my viewport. Something like Ray Picking, just more or less the other way around. I don't think I can use gluProject because at the time I need the projected point my matrices are restored to identities.
I would like to stay independent from OpenGL, so no extra render pass. This way I'm sure it would only be some math like the ray picking thing. I've implemented that one and it works well, so I want to project a vertex the same way.
Of course I have camera pos, up and lookAt vectors and fovy. Is there any source of information about this? Or does anyone know how to work this out?
If you know your matrices (or at least know how to construct them), you can compute the screen location of a vertex by multiplying its position with the matrices and then performing the viewport transformation:
vProjected = modelViewProjectionMatrix * v;
if (
    // check that vertex shouldn't be clipped.
    -vProjected.w <= vProjected.x && vProjected.x <= vProjected.w &&
    -vProjected.w <= vProjected.y && vProjected.y <= vProjected.w &&
    -vProjected.w <= vProjected.z && vProjected.z <= vProjected.w
) {
    vProjected /= vProjected.w;
    vScreen.x = VIEWPORT_W * vProjected.x / 2 + VIEWPORT_CENTER_X;
    vScreen.y = VIEWPORT_H * vProjected.y / 2 + VIEWPORT_CENTER_Y;
}
Note that, as per OpenGL convention, (0, 0) is the lower-left corner, not the upper-left one.
Any math library with vector and matrix operations can help you with that. For example, mathfu or glm.
UPD. How can you construct modelViewProjectionMatrix given the camera position and orientation and the projection parameters? We need two matrices (let's assume the model matrix is just an identity, i.e. vertex positions are already given in the world coordinate system). The first one is the view matrix, which takes into account camera position and orientation. Here I'll be using mathfu since I'm more familiar with it, but almost every math library designed with 3D graphics in mind has the same functions:
viewMatrix = mathfu::mat4::LookAt(
    cameraLookAtPosition,
    cameraPosition,
    cameraUpVector
);
The second one would be projection matrix:
projectionMatrix = mathfu::mat4::Perspective(fovy, aspect, zNear, zFar);
Now modelViewProjectionMatrix is just a product of those two:
modelViewProjectionMatrix = projectionMatrix * viewMatrix;
Note that matrix multiplication is not commutative; in other words, A * B != B * A. So the order in which the matrices are multiplied is important.

Getting the Tangent for a Object Space to Texture Space

A university assignment requires me to use the vertex coordinates I have to calculate the normals, and then the tangent from those normal values, so that I can create an object-space-to-texture-space matrix.
I have the code needed to make the matrix and the binormal, but I don't have the code for calculating the tangent. I tried to look online, but the answers usually confuse me. Can you explain clearly how it works?
EDIT: I have corrected what I wrote previously as clearly I misunderstood the assignment. Thank you everyone for helping me see that.
A tangent in the mathematical sense is a property of a geometric object, not of the normal map. In the case of normal mapping, we are additionally looking for one very specific tangent (there are infinitely many at each point; basically every vector in the plane defined by the normal is a tangent).
But let's go one step back: we want a space where the u-direction of the texture is mapped onto the tangent direction, the v-direction onto the bitangent/binormal, and the up-vector of the normal map onto the normal of the object. Thus the tangent for a triangle (v0, v1, v2) with uv-coordinates (uv0, uv1, uv2) can be calculated as:
dv1 = v1-v0
dv2 = v2-v0
duv1 = uv1-uv0
duv2 = uv2-uv0
r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x);
tangent = (dv1 * duv2.y - dv2 * duv1.y) * r;
bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
Once this is done for all triangles, we have to smooth the tangents at shared vertices (quite similar to what happens with the normal). There are several algorithms for doing this, depending on what you need. One can, for example, weight the tangents by the surface area of the adjacent triangles or by their incident angles.
An implementation of this whole calculation, along with a more detailed explanation, can be found here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/

C++ opengl get new vertex position after glTranslatef

I have a plane in my 3d space and I want to move it somewhere else, so I use glTranslate to do so.
The plane's vertex data is: (0,0,0), (1,0,0), (1,1,0) and (0,1,0).
I translate the object to the position of (2,0,0) through the use of glTranslatef(2.0, 0.0, 0.0).
After the translation the vertex data is unchanged, so if I want to collide with my plane, its visual position is not its actual position.
Is there a way to get the point data from the MODELVIEW_MATRIX or at least a way to find out what the new values are after the glTranslate?
Don't just respond with "add 2.0 to the actual values to move it", because what if I then want to use glRotate etc.? I still want the points' locations.
If you really don't want to maintain your own transformation matrix, you can get the current modelview matrix with:
GLfloat mat[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mat);
You can then apply this matrix to your vertices with a standard matrix multiplication. Keep in mind that the matrix is arranged in column-major order. With an input vector xIn, the transformed vector xOut is:
xOut[0] = mat[0] * xIn[0] + mat[4] * xIn[1] + mat[8] * xIn[2] + mat[12];
xOut[1] = mat[1] * xIn[0] + mat[5] * xIn[1] + mat[9] * xIn[2] + mat[13];
xOut[2] = mat[2] * xIn[0] + mat[6] * xIn[1] + mat[10] * xIn[2] + mat[14];
Keeping track of the current transformation matrix in your own code is really a better approach, IMHO. Aside from eliminating glGet() calls, which can be harmful to performance, it gets you on a path to using modern OpenGL (Core Profile), where the matrix stack and all related calls do not exist anymore.
You can create a matrix from your translation and rotation yourself, and then use that matrix to transform the coordinates.
There are many libraries to help you create such a matrix and transform coordinates.

Calculate clipspace.w from clipspace.xyz and (inv) projection matrix

I'm using a logarithmic depth algorithm, which results in someFunc(clipspace.z) being written to the depth buffer and no implicit perspective divide.
I'm doing RTT / postprocessing so later on in a fragment shader I want to recompute eyespace.xyz, given ndc.xy (from the fragment coordinates) and clipspace.z (from someFuncInv() on the value stored in the depth buffer).
Note that I do not have clipspace.w, and my stored value is not clipspace.z / clipspace.w (as it would be when using fixed function depth) - so something along the lines of ...
float clip_z = ...; /* [-1 .. +1] */
vec2 ndc = vec2(FragCoord.xy / viewport * 2.0 - 1.0);
vec4 clipspace = InvProjMatrix * vec4(ndc, clip_z, 1.0);
clipspace /= clipspace.w;
... does not work here.
So is there a way to calculate clipspace.w out of clipspace.xyz, given the projection matrix or its inverse?
clipspace.xy = FragCoord.xy / viewport * 2.0 - 1.0;
This is wrong in terms of nomenclature. "Clip space" is the space that the vertex shader (or whatever the last Vertex Processing stage is) outputs. Between clip space and window space is normalized device coordinate (NDC) space. NDC space is clip space divided by the clip space W coordinate:
vec3 ndcspace = clipspace.xyz / clipspace.w;
So the first step is to take our window space coordinates and get NDC space coordinates. Which is easy:
vec3 ndcspace = vec3(FragCoord.xy / viewport * 2.0 - 1.0, depth);
Now, I'm going to assume that your depth value is the proper NDC-space depth. I'm assuming that you fetch the value from a depth texture, then use the depth range near/far values it was rendered with to map it into the [-1, 1] range. If you didn't, you should.
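For example, the mapping back from window-space depth to NDC depth is just a scale and bias; a small sketch, where n and f stand for the glDepthRange values the scene was rendered with (0.0 and 1.0 by default):
float windowToNdcDepth( float windowZ, float n, float f )
{
    return (2.0 * windowZ - n - f) / (f - n);   // reduces to windowZ * 2.0 - 1.0 for the default range
}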
So, now that we have ndcspace, how do we compute clipspace? Well, that's obvious:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obvious and... not helpful, since we don't have clipspace.w. So how do we get it?
To get this, we need to look at how clipspace was computed the first time:
vec4 clipspace = Proj * cameraspace;
This means that clipspace.w is computed by taking cameraspace and dot-producting it by the fourth row of Proj.
Well, that's not very helpful. It gets more helpful if we actually look at the fourth row of Proj. Granted, you could be using any projection matrix, and if you're not using the typical projection matrix, this computation becomes more difficult (potentially impossible).
The fourth row of Proj, using the typical projection matrix, is really just this:
[0, 0, -1, 0]
This means that the clipspace.w is really just -cameraspace.z. How does that help us?
It helps by remembering this:
ndcspace.z = clipspace.z / clipspace.w;
ndcspace.z = clipspace.z / -cameraspace.z;
Well, that's nice, but it just trades one unknown for another; we still have an equation with two unknowns (clipspace.z and cameraspace.z). However, we do know something else: clipspace.z comes from dot-producting cameraspace with the third row of our projection matrix. The traditional projection matrix's third row looks like this:
[0, 0, T1, T2]
Where T1 and T2 are non-zero numbers. We'll ignore what these numbers are for the time being. Therefore, clipspace.z is really just T1 * cameraspace.z + T2 * cameraspace.w. And if we know cameraspace.w is 1.0 (as it usually is), then we can remove it:
ndcspace.z = (T1 * cameraspace.z + T2) / -cameraspace.z;
So, we still have a problem. Actually, we don't. Why? Because there is only one unknown in this equation. Remember: we already know ndcspace.z. We can therefore use ndcspace.z to compute cameraspace.z:
ndcspace.z = -T1 + (-T2 / cameraspace.z);
ndcspace.z + T1 = -T2 / cameraspace.z;
cameraspace.z = -T2 / (ndcspace.z + T1);
T1 and T2 come right out of our projection matrix (the one the scene was originally rendered with). And we already have ndcspace.z. So we can compute cameraspace.z. And we know that:
clipspace.w = -cameraspace.z;
Therefore, we can do this:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obviously you'll need a float for clipspace.w rather than the literal code, but you get my point. Once you have clipspace, to get camera space, you multiply by the inverse projection matrix:
vec4 cameraspace = InvProj * clipspace;
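Putting the whole chain together, a fragment-shader sketch might look like this. The uniform names and the assumption of the typical perspective matrix are mine; since GLSL matrices are column-major, Proj[2][2] and Proj[3][2] are the third-row entries called T1 and T2 above:
uniform mat4 Proj;      // projection matrix the scene was rendered with
uniform mat4 InvProj;   // its inverse
uniform vec2 viewport;  // viewport size in pixels

vec3 reconstructCameraspace( vec2 fragCoord, float ndcDepth )
{
    vec3 ndcspace = vec3(fragCoord / viewport * 2.0 - 1.0, ndcDepth);

    float T1 = Proj[2][2];                   // third row, third column
    float T2 = Proj[3][2];                   // third row, fourth column

    float cameraZ = -T2 / (ndcspace.z + T1); // cameraspace.z
    float clipW   = -cameraZ;                // clipspace.w for the typical projection matrix

    vec4 clipspace   = vec4(ndcspace * clipW, clipW);
    vec4 cameraspace = InvProj * clipspace;
    return cameraspace.xyz;
}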

Tangent of a parametric discrete curve

I have a parametric curve, say two vectors of doubles where the parameter is the index, and I have to calculate the angle of the tangent to this curve at any given point (index).
Any suggestion or link about how to do that?
Thanks.
Here's a short formula, equivalent (I think) to pau.estalella's answer:
m[i] = (y[i+1] - y[i-1]) / (x[i+1] - x[i-1])
this approximates, reasonably well, the slope at the point (x[i], y[i]).
Your question mentions the "angle of the tangent". The tangent line, having slope m[i], makes angle arctangent(m[i]) with the positive x axis. If this is what you're after, you might use the two-argument arctangent, if it's available:
angle[i] = atan2(y[i+1] - y[i-1], x[i+1] - x[i-1])
this will work correctly, even when x[i+1] == x[i-1].
I suggest you check out the Wikipedia article on numerical differentiation for a start. Before you go much further than that, decide what purposes you want the tangent for and decide whether or not you need to try more complex schemes than the simple ones in the article.
The first problem you run into is how to even define the tangent at one of the vertices of the curve. Consider, e.g., that you have the two arrays:
x = { 1.0, 2.0, 2.0 };
y = { 1.0, 1.0, 2.0 };
Then at the second vertex you have a 90-degree change of direction of the line. In that place the tangent isn't even defined mathematically.
Answer to gregseth's comment below
I guess in your example the "tangent" at the second point would be the line parallel to (P0, P2) passing through P1... which kind of gives me the answer: for any point of index N, the parallel to (N-1, N+1) passing through N. Would that be a not-too-bad approximation?
It depends on what you are using it for. Consider for example:
x = { 1.0, 2.0, 2.0 };
y = { 1.0, 1000000, 1000000 };
That is basically an L shape with a very high vertical line. With your suggestion it would give you an almost vertical tangent. Is that what you want, or would you rather have a 45-degree tangent in that case? How you should define it also depends on your input data.
One solution is to take the two vectors connecting to the vertex, normalize them, and then use your algorithm. That way you would get a 45-degree tangent in the above example.
Compute the first derivative: dy/dx. That gives you the tangent.
The tangent to a smooth curve at a point P is the parametric straight line P + tV, where V is the derivative of the curve with respect to "the parameter". But here the parameter is just the index of an array, and numerical differentiation is a difficult problem, so to approximate the tangent I would use a (weighted) least-squares approximation.
In other words, choose three or five points of the curve around your point of interest P (i.e. P[i-2], P[i-1], P[i], P[i+1], and P[i+2], if P == P[i]), and approximate them with a straight line in the least-squares sense. The more weight you assign to the middle point P, the closer the line will be to P; on the other hand, the more weight you assign to the extremal points, the more "tangent" the straight line will be, that is, the more nicely it will approximate your curve in the neighborhood of P.
For example, with respect to the following points:
x = [-1, 0, 1]
y = [ 0, 1, 0]
for which the tangent is not defined (as in Anders Abel's answer),
this approach should yield a horizontal straight line close to the point (0,1).
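As an illustration, here is a small sketch of that weighted fit in GLSL-style vector notation, for three points around the point of interest; the weights are an arbitrary choice of mine:
vec2 leastSquaresTangent( vec2 p[3] )
{
    float w[3] = float[3](1.0, 2.0, 1.0);   // more weight on the middle point

    // Weighted mean of the points.
    vec2  mean = vec2(0.0);
    float wsum = 0.0;
    for (int i = 0; i < 3; ++i) { mean += w[i] * p[i]; wsum += w[i]; }
    mean /= wsum;

    // Weighted covariance of x and y, and variance of x; the fitted line
    // has slope covXY / varX, i.e. direction (varX, covXY).
    float covXY = 0.0;
    float varX  = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        vec2 d = p[i] - mean;
        covXY += w[i] * d.x * d.y;
        varX  += w[i] * d.x * d.x;
    }
    return normalize(vec2(varX, covXY));
}
For the three example points above, covXY comes out as zero, so the returned direction is horizontal, as claimed.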
You can try to compute the tangent of an interpolating curve that passes through the given points (I'm thinking of a cubic spline, which is pretty easy to derive) or compute the tangent directly from the data points.
You can find a rough approximation of the derivative in the following manner:
Let a curve C pass through points p1, p2 and p3. At point p2 you have two possible tangents: t1 = p2-p1 and t2 = p3-p2. You can combine them by simply computing their average: 0.5*(t1+t2),
or you can weight them according to their lengths (or their reciprocals, 1/length).
Remember to normalize the resulting tangent.
In order to compute the angle between the tangent and the curve segment, remember that the dot product of two unit vectors gives the cosine of the angle between them. Take the resulting tangent t and the unit vector v2 = (p3-p2)/|p3-p2|; then acos(dot(t, v2)) gives the angle you need.
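A compact sketch of this combination, written in GLSL-style vector notation and with the edge directions normalized first (as suggested above for the L-shaped example):
// Averaged, normalized tangent at the middle point p2.
vec2 curveTangent( vec2 p1, vec2 p2, vec2 p3 )
{
    vec2 t1 = normalize(p2 - p1);   // direction of the incoming segment
    vec2 t2 = normalize(p3 - p2);   // direction of the outgoing segment
    return normalize(t1 + t2);
}

// Angle between the tangent and the outgoing segment.
float angleToSegment( vec2 p1, vec2 p2, vec2 p3 )
{
    vec2 t  = curveTangent(p1, p2, p3);
    vec2 v2 = normalize(p3 - p2);
    return acos(dot(t, v2));
}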