Is it possible to calculate my mesh normal vector when I have just the TANGENT and BINORMAL vectors?
float4 Binormal : BINORMAL ;
float4 Tangent : TANGENT ;
float4 Position : POSITION ;
As far as I understand it, the binormal vector is defined from the normal and tangent vectors through a cross product: binormal = cross(tangent, normal).
Thus normal = cross(binormal, tangent), that is, what you wrote is correct.
Since, according to the documentation, the cross product is only defined for vectors of size 3, you can do the following:
normal = float4(cross(binormal.xyz, tangent.xyz), 1.0);
This uses the cross intrinsic from HLSL, which I recommend. Note that, strictly speaking, there is no cross product for 4-component vectors; the intrinsic only works on the xyz parts, which is why the result is rebuilt into a float4 by hand.
The underlying formula, where u is the binormal, v is the tangent and s is the resulting normal, is:
s = u × v, i.e. s1 = u2·v3 − u3·v2, s2 = u3·v1 − u1·v3, s3 = u1·v2 − u2·v1
Written out component by component, the code for the cross product is:
normal.x = binormal.y*tangent.z - binormal.z*tangent.y;
normal.y = binormal.z*tangent.x - binormal.x*tangent.z;
normal.z = binormal.x*tangent.y - binormal.y*tangent.x;
And an alternative, swizzled version (it returns a vector of size 3; use float4(..., 1.0) if you want a 4-component vector):
normal = binormal.yzx*tangent.zxy - binormal.zxy*tangent.yzx;
In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this was probably not the best solution, but I had no time to deal with it.
Recently I read "Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}
// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
// Project the sphere onto the octahedron, and then onto the xy plane
vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
// Reflect the folds of the lower hemisphere over the diagonals
return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
if (v.z < 0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
return normalize(v);
}
Since I have now implemented an anisotropic lighting model, I need to store the tangent vector as well as the normal vector. I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient compromise for packing a unit normal vector and a tangent vector into a buffer?
Of course it would be easy with the algorithms from the paper to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
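For illustration, the packing just described looks roughly like this in the geometry pass (a minimal sketch; the output variable name is made up):
layout(location = 0) out vec4 outNormalTangent;

void writeNormalAndTangent(vec3 normal, vec3 tangent)
{
    // Normal in RG, tangent in BA of the GL_RGBA16_SNORM attachment.
    outNormalTangent = vec4(float32x3_to_oct(normalize(normal)),
                            float32x3_to_oct(normalize(tangent)));
}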
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way that either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that the 2 vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I at least improve the accuracy when packing them into a GL_RGBA16_SNORM buffer?
The following considerations are purely mathematical and I have no experience with their practicality. However, I think that especially Option 2 might be a viable candidate.
Both of the following options have in common how they state the problem: Given a normal (that you can reconstruct using ONV), how can one encode the tangent with a single number.
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) that you need to rotate this vector about the normal to match the tangent direction (after projecting on the tangent plane, of course). You will need two different reference vectors (or even three) and choose the correct one depending on the normal. You don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option but that would need measuring. But you would get a uniform error distribution in return.
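A minimal GLSL sketch of Option 1 (the names are illustrative; it assumes n and t are unit length and orthogonal, and that two reference vectors are enough):
const float PI = 3.14159265358979;

// Reference vector that is guaranteed not to be (nearly) parallel to n.
vec3 referenceVector(vec3 n)
{
    return (abs(n.z) < 0.9) ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
}

// Encode the tangent as a signed rotation angle around the normal, normalized to [-1, 1].
float encodeTangentAngle(vec3 n, vec3 t)
{
    vec3 r  = referenceVector(n);
    vec3 rp = normalize(r - n * dot(n, r));   // project the reference onto the tangent plane
    float angle = atan(dot(cross(rp, t), n), dot(rp, t));
    return angle / PI;
}

// Decode by rotating the projected reference vector around the normal.
vec3 decodeTangentAngle(vec3 n, float encoded)
{
    float angle = encoded * PI;
    vec3 r  = referenceVector(n);
    vec3 rp = normalize(r - n * dot(n, r));
    return rp * cos(angle) + cross(n, rp) * sin(angle);
}
The encoder and decoder must pick the reference vector with exactly the same rule, otherwise the reconstructed tangent comes out rotated.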
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
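A minimal GLSL sketch of the Option 2 idea with the single template (1, 1, x) (again, names are illustrative; it assumes |t.z| is not close to zero, only recovers the tangent up to sign, and leaves open how the unbounded value x would be remapped into the [-1, 1] range of a SNORM channel — handling those cases is what the additional vector templates are for):
// Encode: solve dot(vec3(1.0, 1.0, x), t) = 0 for x, so that (1, 1, x)
// lies in the plane orthogonal to the tangent.
float encodeTangentScalar(vec3 t)
{
    return -(t.x + t.y) / t.z;
}

// Decode: a single cross product recovers the tangent direction (up to sign).
vec3 decodeTangentScalar(vec3 n, float x)
{
    return normalize(cross(n, vec3(1.0, 1.0, x)));
}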
A university assignment requires me to use the vertex coordinates I have to calculate the normals, and then the tangent from the normal values, so that I can create an object-space-to-texture-space matrix.
I have the code needed to build the matrix and the binormal, but I don't have the code for calculating the tangent. I tried to look online, but the answers usually confuse me. Can you explain clearly how it works?
EDIT: I have corrected what I wrote previously as clearly I misunderstood the assignment. Thank you everyone for helping me see that.
A tangent in the mathematical sense is a property of a geometric object, not of the normal map. In the case of normal mapping, we are additionally searching for a very specific tangent (there are infinitely many at each point; basically, every vector in the plane defined by the normal is a tangent).
But let's take one step back: we want a space where the u-direction of the texture is mapped to the tangent direction, the v-direction to the bitangent/binormal, and the up-vector of the normal map to the normal of the object. Thus the tangent for a triangle (v0, v1, v2) with uv-coordinates (uv0, uv1, uv2) can be calculated as:
vec3 dv1  = v1 - v0;
vec3 dv2  = v2 - v0;
vec2 duv1 = uv1 - uv0;
vec2 duv2 = uv2 - uv0;
float r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x);
vec3 tangent   = (dv1 * duv2.y - dv2 * duv1.y) * r;
vec3 bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
Once this has been done for all triangles, we have to smooth the tangents at shared vertices (quite similar to what happens with the normals). There are several algorithms for doing this, depending on what you need. One can, for example, weight the tangents by the surface area of the adjacent triangles or by their incident angles.
An implementation of this whole calculation, along with a more detailed explanation, can be found here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
I'm working on a toy raytracer using vertex based triangles, similar to OpenGL. Each vertex has its own color and the coloring of a triangle at each point should be based on a weighted average of the colors of the vertex, weighted by how close the point is to each vertex.
I can't figure out how to calculate the weight of each color at a given point on the triangle to mimic the color shading done by OpenGL, as shown by many examples here. I have several thoughts, but I'm not sure which one is correct (V is a vertex, U and W are the other two vertices, P is the point to color, C is the centroid of the triangle, and |PQ| is the distance from point P to point Q):
Have the weight equal to 1-(|VP|/|VC|), but this would leave black at the centroid (all colors are weighted 0), which is not correct.
Weight is equal to 1-(|VP|/max(|VU|,|VW|)), so V has non-zero weight at the closer of the two vertices, which I don't think is correct.
Weight is equal to 1-(|VP|/min(|VU|,|VW|)), so V has zero weight at the closer of the two vertices, and negative weight (which would saturate to 0) at the further of the two. I'm not sure if this is right or not.
Line segment L extends from V through P to the opposite side of the triangle (UW): weight is the ratio of |VP| to |L|. So the weight of V would be 0 all along the opposite side.
The last one seems like the most likely, but I'm having trouble implementing it, so I'm not sure if it's correct.
OpenGL uses barycentric coordinates (linear interpolation, to be precise, although you can change that using interpolation qualifiers such as centroid or noperspective in recent versions).
In case you don't know, barycentric coordinates work like this: for a location P in a triangle made of vertices V1, V2 and V3, with respective coefficients C1, C2, C3 such that C1+C2+C3=1 (these coefficients describe the influence of each vertex on the color of P), OpenGL must calculate them such that the result is equivalent to
C1 = (AreaOfTriangle PV2V3) / (AreaOfTriangle V1V2V3)
C2 = (AreaOfTriangle PV3V1) / (AreaOfTriangle V1V2V3)
C3 = (AreaOfTriangle PV1V2) / (AreaOfTriangle V1V2V3)
and the area of a triangle can be calculated as half the length of the cross product of two vectors defining it (taken in a consistent winding order), for example AreaOfTriangle V1V2V3 = length(cross(V2-V1, V3-V1)) / 2. We then have something like:
float areaOfTriangle = length(cross(V2-V1, V3-V1)); //Two times the area of the triangle
float C1 = length(cross(V2-P, V3-P)) / areaOfTriangle; //Because A1*2/A*2 = A1/A
float C2 = length(cross(V3-P, V1-P)) / areaOfTriangle; //Because A2*2/A*2 = A2/A
float C3 = 1.0f - C1 - C2; //Because C1 + C2 + C3 = 1
But after some math (and a little bit of web research :D), the most efficient way I found of doing this was:
YOURVECTYPE sideVec1 = V2 - V1, sideVec2 = V3 - V1, sideVec3 = P - V1;
float dot11 = dot(sideVec1, sideVec1);
float dot12 = dot(sideVec1, sideVec2);
float dot22 = dot(sideVec2, sideVec2);
float dot31 = dot(sideVec3, sideVec1);
float dot32 = dot(sideVec3, sideVec2);
float denom = dot11 * dot22 - dot12 * dot12;
float C2 = (dot22 * dot31 - dot12 * dot32) / denom; // weight of V2
float C3 = (dot11 * dot32 - dot12 * dot31) / denom; // weight of V3
float C1 = 1.0f - C2 - C3;                          // weight of V1, because C1 + C2 + C3 = 1
Then, to interpolate things like colors, color1, color2 and color3 being the colors of your vertices, you do:
float color = C1*color1 + C2*color2 + C3*color3;
But beware that this doesn't work properly if you're using perspective transformations (or any transformation of the vertices involving the w component), so in that case you'll have to use:
float color = (C1*color1/w1 + C2*color2/w2 + C3*color3/w3)/(C1/w1 + C2/w2 + C3/w3);
w1, w2, and w3 are respectively the fourth components of the original vertices that made V1, V2 and V3.
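As a small illustration, the perspective-correct version can be wrapped into a helper like this (a sketch only; the names are made up):
// Perspective-correct interpolation of a per-vertex attribute, given the
// barycentric coefficients C1..C3 and the clip-space w of each vertex.
vec3 interpolatePerspective(vec3 a1, vec3 a2, vec3 a3,
                            float C1, float C2, float C3,
                            float w1, float w2, float w3)
{
    vec3  numerator   = C1 * a1 / w1 + C2 * a2 / w2 + C3 * a3 / w3;
    float denominator = C1 / w1 + C2 / w2 + C3 / w3;
    return numerator / denominator;
}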
V1, V2 and V3 in the first calculation must be 3-dimensional because of the cross product. In the second (more efficient) calculation they can be either 2-dimensional or 3-dimensional; the results will be the same (and, as you probably guessed, 2D is faster there). In both cases, if you're doing perspective transformations, don't forget to divide them by the fourth component of their original vectors and to use the second formula for the interpolation. (And in case it wasn't clear: none of the vectors in these calculations should include a fourth component!)
And one last thing: I strongly advise you to use OpenGL by just rendering a big quad on the screen and putting all your code in the shaders (although you'll need solid OpenGL knowledge for advanced use), because you'll benefit from parallelism (even on a weak video card), unless you're writing this on a 30-year-old computer or you're just doing it to see how it works.
IIRC, for this you don't really need to do anything in GLSL -- the interpolated color will already be the input color to your fragment shader if you just pass on the vertex color in the vertex shader.
Edit: Yes, this doesn't answer the question -- the correct answer is in the first comment above already: use barycentric coordinates (which is what GL does).
I have a situation in GLSL where I need to calculate the divergence of a vector in a fragment shader:
vec3 posVector;
Divergence is mathematically given by
div F = ∇ · F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z
i.e. it's the dot product between the gradient operator and the vector field.
Does anyone know how to compute this?
The divergence of the position vector is the divergence of the identity vector field
F: ℝ³ → ℝ³, F(r) = r
and the divergence of that field is both constant and known:
div(r) = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3.
I'm using a logarithmic depth algorithm, which results in someFunc(clipspace.z) being written to the depth buffer and no implicit perspective divide.
I'm doing RTT / postprocessing so later on in a fragment shader I want to recompute eyespace.xyz, given ndc.xy (from the fragment coordinates) and clipspace.z (from someFuncInv() on the value stored in the depth buffer).
Note that I do not have clipspace.w, and my stored value is not clipspace.z / clipspace.w (as it would be when using fixed function depth) - so something along the lines of ...
float clip_z = ...; /* [-1 .. +1] */
vec2 ndc = vec2(FragCoord.xy / viewport * 2.0 - 1.0);
vec4 clipspace = InvProjMatrix * vec4(ndc, clip_z, 1.0);
clipspace /= clipspace.w;
... does not work here.
So is there a way to calculate clipspace.w out of clipspace.xyz, given the projection matrix or its inverse?
clipspace.xy = FragCoord.xy / viewport * 2.0 - 1.0;
This is wrong in terms of nomenclature. "Clip space" is the space that the vertex shader (or whatever the last Vertex Processing stage is) outputs. Between clip space and window space is normalized device coordinate (NDC) space. NDC space is clip space divided by the clip space W coordinate:
vec3 ndcspace = clipspace.xyz / clipspace.w;
So the first step is to take our window space coordinates and get NDC space coordinates. Which is easy:
vec3 ndcspace = vec3(FragCoord.xy / viewport * 2.0 - 1.0, depth);
Now, I'm going to assume that your depth value is the proper NDC-space depth. I'm assuming that you fetch the value from a depth texture, then used the depth range near/far values it was rendered with to map it into a [-1, 1] range. If you didn't, you should.
So, now that we have ndcspace, how do we compute clipspace? Well, that's obvious:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obvious and... not helpful, since we don't have clipspace.w. So how do we get it?
To get this, we need to look at how clipspace was computed the first time:
vec4 clipspace = Proj * cameraspace;
This means that clipspace.w is computed by taking cameraspace and dot-producting it by the fourth row of Proj.
Well, that's not very helpful. It gets more helpful if we actually look at the fourth row of Proj. Granted, you could be using any projection matrix, and if you're not using the typical projection matrix, this computation becomes more difficult (potentially impossible).
The fourth row of Proj, using the typical projection matrix, is really just this:
[0, 0, -1, 0]
This means that the clipspace.w is really just -cameraspace.z. How does that help us?
It helps by remembering this:
ndcspace.z = clipspace.z / clipspace.w;
ndcspace.z = clipspace.z / -cameraspace.z;
Well, that's nice, but it just trades one unknown for another; we still have an equation with two unknowns (clipspace.z and cameraspace.z). However, we do know something else: clipspace.z comes from dot-producting cameraspace with the third row of our projection matrix. The traditional projection matrix's third row looks like this:
[0, 0, T1, T2]
Where T1 and T2 are non-zero numbers. We'll ignore what these numbers are for the time being. Therefore, clipspace.z is really just T1 * cameraspace.z + T2 * cameraspace.w. And if we know cameraspace.w is 1.0 (as it usually is), then we can remove it:
ndcspace.z = (T1 * cameraspace.z + T2) / -cameraspace.z;
So, we still have a problem. Actually, we don't. Why? Because there is only one unknown in this equation. Remember: we already know ndcspace.z. We can therefore use ndcspace.z to compute cameraspace.z:
ndcspace.z = -T1 + (-T2 / cameraspace.z);
ndcspace.z + T1 = -T2 / cameraspace.z;
cameraspace.z = -T2 / (ndcspace.z + T1);
T1 and T2 come right out of our projection matrix (the one the scene was originally rendered with). And we already have ndcspace.z. So we can compute cameraspace.z. And we know that:
clipspace.w = -cameraspace.z;
Therefore, we can do this:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obviously you'll need a float for clipspace.w rather than the literal code, but you get my point. Once you have clipspace, to get camera space, you multiply by the inverse projection matrix:
vec4 cameraspace = InvProj * clipspace;
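Putting it all together, a sketch of the whole reconstruction might look like this (it assumes the typical projection matrix with fourth row [0, 0, -1, 0] and that you already have the NDC-space depth in [-1, 1]; all names here are illustrative):
uniform mat4 Proj;     // projection matrix the scene was originally rendered with
uniform mat4 InvProj;  // its inverse
uniform vec2 viewport; // viewport size in pixels

vec3 reconstructCameraspace(vec2 fragCoord, float ndcDepth)
{
    // Window space -> NDC space.
    vec3 ndcspace = vec3(fragCoord / viewport * 2.0 - 1.0, ndcDepth);

    // T1 and T2 from the third row of the projection matrix
    // (GLSL matrices are column-major, so Proj[column][row]).
    float T1 = Proj[2][2];
    float T2 = Proj[3][2];

    // cameraspace.z from the derivation above, and clipspace.w = -cameraspace.z.
    float cameraZ = -T2 / (ndcspace.z + T1);
    float clipW   = -cameraZ;

    // Undo the perspective divide, then the projection.
    vec4 clipspace   = vec4(ndcspace * clipW, clipW);
    vec4 cameraspace = InvProj * clipspace;
    return cameraspace.xyz;
}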