I am trying to calculate a normal map from subdivisions of a mesh. There are two meshes: a UV-unwrapped base mesh that contains quads and triangles, and a subdivision mesh that contains only quads.
Suppose I have a quad with the coordinates of all its vertices in both object space and UV space (the quad is not flat), the quad's face normal, and a pixel with its position in UV space.
Can I calculate the TBN matrix for the given quad and write colors to the pixel? If so, is the calculation different for quads?
I ask because I couldn't find any examples of calculating a TBN matrix for quads, only for triangles.
Before answering your question, let me start by explaining what the tangents and bitangents that you need actually are.
Let's forget about triangles, quads, or polygons for a minute. We just have a surface (given in whatever representation) and a parameterization in form of texture coordinates that are defined at every point on the surface. We could then define the surface as: xyz = s(uv). uv are some 2D texture coordinates and the function s turns these texture coordinates into 3D world positions xyz. Now, the tangent is the direction in which the u-coordinate increases. I.e., it is the derivative of the 3D position with respect to the u-coordinate: T = d s(uv) / du. Similarly, the bitangent is the derivative with respect to the v-coordinate. The normal is a vector that is perpendicular to both of them and usually points outwards. Remember that the three vectors are usually different at every point on the surface.
Now let's go over to discrete computer graphics, where we approximate our continuous surface s with a polygon mesh. The problem is that there is no way to get the exact tangents and bitangents anymore. We just lost too much information in our discrete approximation. So, there are three common ways to approximate the tangents anyway:
Store the vectors with the model (this is usually not done).
Estimate the vectors at the vertices and interpolate them in the faces.
Calculate the vectors for each face separately. This will give you a discontinuous tangent space, which produces artifacts when the dihedral angle between two neighboring faces is too big. Still, this is apparently what most people are doing. And it is apparently also what you want to do.
Let's focus on the third method. For triangles, this is especially simple because the texture coordinates are interpolated linearly (barycentric interpolation) across the triangle. Hence, the derivatives are all constant (it's just a linear function). This is why you can calculate tangents/bitangents per triangle.
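For reference, here is a minimal GLSL-style sketch of that per-triangle calculation (function and variable names are illustrative): the two position edges are expressed in terms of the two UV edges, and the resulting 2x2 system is solved for the tangent and bitangent.
void triangleTangents(vec3 p0, vec3 p1, vec3 p2,
                      vec2 uv0, vec2 uv1, vec2 uv2,
                      out vec3 T, out vec3 B)
{
    // position and uv edges relative to the first vertex
    vec3 e1 = p1 - p0;
    vec3 e2 = p2 - p0;
    vec2 d1 = uv1 - uv0;
    vec2 d2 = uv2 - uv0;
    // invert the 2x2 uv edge matrix (assumes the uv triangle is not degenerate)
    float r = 1.0 / (d1.x * d2.y - d2.x * d1.y);
    T = (e1 * d2.y - e2 * d1.y) * r;   // d xyz / du
    B = (e2 * d1.x - e1 * d2.x) * r;   // d xyz / dv
}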
For quads, this is not so simple. First, you must agree on a way to interpolate positions and texture coordinates from the vertices of the quad to its inside. Oftentimes, bilinear interpolation is used. However, this is not a linear interpolation, i.e. the tangents and bitangents will not be constant anymore. They are only constant in special cases (if the quad is planar and the quad in uv space is a parallelogram). In general, these assumptions do not hold and you end up with different tangents/bitangents/normals for every point on the quad.
One way to calculate the required derivatives is by introducing an auxiliary coordinate system. Let's define a coordinate system st, where the first corner of the quad has coordinates (0, 0) and the diagonally opposite corner has (1, 1) (the other corners have (0, 1) and (1, 0)). These are actually our interpolation coordinates. Therefore, given an arbitrary interpolation scheme, it is relatively simple to calculate the derivatives d xyz / d st and d uv / d st. The first one will be a 3x2 matrix and the second one will be a 2x2 matrix (these matrices are called Jacobians of the interpolation). Then, given these matrices, you can calculate:
d xyz / d uv = (d xyz / d st) * (d st / d uv) = (d xyz / d st) * (d uv / d st)^-1
This will give you a 3x2 matrix where the first column is the tangent and the second column is the bitangent.
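As a sketch of what this looks like for bilinear interpolation (GLSL-style, with illustrative names; p00..p11 and uv00..uv11 are the corner positions and texture coordinates, with p00 at st = (0,0) and p11 at st = (1,1)):
void quadTangents(vec3 p00, vec3 p10, vec3 p01, vec3 p11,
                  vec2 uv00, vec2 uv10, vec2 uv01, vec2 uv11,
                  vec2 st, out vec3 T, out vec3 B)
{
    // Jacobian d xyz / d st of the bilinear interpolation
    // (note: GLSL mat2x3 means 2 columns of 3 rows, i.e. a 3x2 matrix)
    mat2x3 dxyz_dst = mat2x3(mix(p10 - p00, p11 - p01, st.y),
                             mix(p01 - p00, p11 - p10, st.x));
    // Jacobian d uv / d st (a 2x2 matrix, two columns)
    mat2 duv_dst = mat2(mix(uv10 - uv00, uv11 - uv01, st.y),
                        mix(uv01 - uv00, uv11 - uv10, st.x));
    // chain rule: d xyz / d uv = (d xyz / d st) * (d uv / d st)^-1
    mat2x3 dxyz_duv = dxyz_dst * inverse(duv_dst);
    T = dxyz_duv[0];   // first column: tangent
    B = dxyz_duv[1];   // second column: bitangent
}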
Unfortunately, many tutorials describe the TBN matrix as a de facto must for any type of normal mapping without going into much detail on why that's the case, which left me confused about one particular scenario.
Let's assume I need to apply bump/normal mapping to a simple quad on screen, which could later be transformed by its normal matrix.
If the quad's surface normal in "rest position", before any transformation, is pointing exactly in the positive-Z direction (OpenGL), isn't it sufficient to just transform the vector you read from the normal texture map with the model matrix?
vec3 bumpnormal = texture2D(texture, Coord.xy).xyz * 2.0 - 1.0; // unpack from [0,1] to [-1,1]
bumpnormal = mat3(model) * bumpnormal; // assuming no scaling occurred
I do understand how things would change if we were computing the bumpnormal on a cube without taking into account how different faces with the same texture coordinates actually have different orientations, which leads me to the next question.
Assuming that an entire model uses only a single normal-map texture, without any repetition of said texture coordinates in different parts of the model, is it possible to skip the 6 floats of tangent/bitangent vectors stored for each vertex and the computation of the TBN matrix altogether, and still get the same results by simply transforming the bumpnormal with the model's matrix?
If that's the case, why isn't it the preferred solution?
If the quad's surface normal in "rest position", before any transformation, is pointing exactly in the positive-Z direction (OpenGL), isn't it sufficient to just transform the vector you read from the normal texture map with the model matrix?
No.
Let's say the value you get from the normal map is (1, 0, 0). So that means the normal in the map points right.
So... where is that exactly? Or more to the point, what space are we in when we say "right"?
Now, you might immediately think that right is just +X in model space. But the thing is, it isn't. Why?
Because of your texture coordinates.
If your model-space matrix performs a 90 degree rotation, clockwise, around the model-space Z axis, and you transform your normal by that matrix, then the normal you get should go from (1, 0, 0) to (0, -1, 0). That is what is expected.
But if you have a square facing +Z, and you rotate it by 90 degrees around the Z axis, should that not produce the same result as rotating the texture coordinates? After all, it's the texture coordinates that define what U and V mean relative to model space.
If the top-right texture coordinate of your square is (1, 1), and the bottom left is (0, 0), then "right" in texture space means "right" in model space. But if you change the mapping, so that (1, 1) is at the bottom-right and (0, 0) is at the top-left, then "right" in texture space has become "down" (-Y) in model space.
If you ignore the texture coordinates (the mapping from model-space positions to locations on the texture), then your (1, 0, 0) normal will still be pointing "right" in model space. But your texture mapping says that it should be pointing down (0, -1, 0) in model space. Just like it would have if you rotated model space itself.
With a tangent-space normal map, normals stored in the texture are relative to how the texture is mapped onto a surface. Defining a mapping from model space into the tangent space (the space of the texture's mapping) is what the TBN matrix is for.
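To make that concrete, applying a tangent-space normal map with an interpolated per-vertex basis usually looks something like this sketch (varying, uniform, and function names are illustrative):
varying vec3 vTangent;     // interpolated per-vertex tangent (model space)
varying vec3 vBitangent;   // interpolated per-vertex bitangent (model space)
varying vec3 vNormal;      // interpolated per-vertex normal (model space)
varying vec2 vTexCoord;
uniform sampler2D normalMap;

vec3 mappedNormal()
{
    vec3 n = texture2D(normalMap, vTexCoord).xyz * 2.0 - 1.0;   // unpack [0,1] -> [-1,1]
    mat3 TBN = mat3(normalize(vTangent), normalize(vBitangent), normalize(vNormal));
    return normalize(TBN * n);   // tangent space -> the space T, B and N are expressed in
}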
This gets more complicated as the mapping between the object and the normals gets more complex. You could fake it for the case of a quad, but for a general figure, it needs to be algorithmic. The mapping is not constant, after all. It involves stretching and skewing as different triangles use different texture coordinates.
Now, there are object-space normal maps, which generate normals that are explicitly in model space. These avoid the need for a tangent-space basis matrix. But it intimately ties a normal map to the object it is used with. You can't even do basic texture coordinate animation, let alone allow a normal map to be used with two separate objects. And they're pretty much unworkable if you're doing bone-weight skinning, since triangles often change sizes.
http://www.thetenthplanet.de/archives/1180
vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
{
    // assume N, the interpolated vertex normal and
    // V, the view vector (vertex to eye)
    vec3 map = texture2D( mapBump, texcoord ).xyz;
#ifdef WITH_NORMALMAP_UNSIGNED
    map = map * 255./127. - 128./127.;
#endif
#ifdef WITH_NORMALMAP_2CHANNEL
    map.z = sqrt( 1. - dot( map.xy, map.xy ) );
#endif
#ifdef WITH_NORMALMAP_GREEN_UP
    map.y = -map.y;
#endif
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
}
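The cotangent_frame function that perturb_normal relies on is defined in the linked article; reproduced here as a sketch, it builds the basis per fragment from screen-space derivatives, so no per-vertex tangents are needed:
mat3 cotangent_frame( vec3 N, vec3 p, vec2 uv )
{
    // get edge vectors of the pixel triangle
    vec3 dp1 = dFdx( p );
    vec3 dp2 = dFdy( p );
    vec2 duv1 = dFdx( uv );
    vec2 duv2 = dFdy( uv );
    // solve the linear system
    vec3 dp2perp = cross( dp2, N );
    vec3 dp1perp = cross( N, dp1 );
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;
    // construct a scale-invariant frame
    float invmax = inversesqrt( max( dot(T,T), dot(B,B) ) );
    return mat3( T * invmax, B * invmax, N );
}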
Basically I think you are describing this method, which I agree is superior in most respects. It makes later calculations much cleaner instead of devolving into a mess of space transformations.
Instead of transforming everything into tangent space, you just find the correct world-space normal. That's what I am using in my projects, and I am very happy I found this method.
How can I calculate texture coordinates of such geometry?
The angle shown in the image (89.90 degrees) may vary, so the geometry changes and is not always this uniform (it may look like the geometry at the bottom of the image). The red dots are generated procedurally, depending on the degree of smoothness given.
I would solve it by basic trigonometry.
For simplicity and convenience, let's assume:
coordinates [0,0] are in the middle of the geometry (where all the lines intersect) and in the middle of the texture (and they map to each other: [0,0] in geometry is [0,0] in the texture),
the texture coordinates span from -1 to 1 (and also assume the geometry coordinates do too in the case of 90 degrees; in other cases it may get wider and shorter),
positive values of x span right and y up, and the x geometry axis is aligned with the u texture axis no matter the angle (which is 89.90 in your figures).
Then to transform from texture [u,v] to geometry [x,y] coordinates:
x = u + v*cos(angle)
y = v*sin(angle)
To illustrate: it is basically a shear transformation combined with a scale to preserve the length along y (or alternatively, similar to a rotation transform, but rotating only one axis, y, not both). If I reverse that transformation (to get the texture coordinates we want):
u = x - y*cotg(angle)
v = y/sin(angle)
With those equations I should be able to transform any geometry coordinates (a point) in the described situation into texture coordinates, for any angle in the (0, 180) degree range anyway.
(Hopefully I didn't make too many embarrassing errors in there)
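A minimal GLSL-style sketch of both mappings (the angle is in radians; function names are illustrative):
// texture [u,v] -> geometry [x,y]
vec2 textureToGeometry(vec2 uv, float angle)
{
    return vec2(uv.x + uv.y * cos(angle), uv.y * sin(angle));
}

// geometry [x,y] -> texture [u,v] (the inverse mapping)
vec2 geometryToTexture(vec2 xy, float angle)
{
    return vec2(xy.x - xy.y * cos(angle) / sin(angle), xy.y / sin(angle));
}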
I would take the easy way out and use either solid texturing or tri-planar mapping (http://gamedevelopment.tutsplus.com/articles/use-tri-planar-texture-mapping-for-better-terrain--gamedev-13821).
If you really need uv, one option is to start with primitives that have a mapping and carry that over for every operation.
Creating uv after the fact will not get good results.
I want to draw a frustum using GL_LINE_STRIP. What will the coordinates of these frustum vertices be? I have the model-view and projection matrices. Is it possible to calculate the coordinates in the shader itself using these matrices?
If you want the world-space coordinates of the frustum corners, all you need to do is project the 8 corner points from NDC space (which goes from -1 to 1 in every dimension, so the corner points are easy to enumerate) back to world space. But do not forget that you have to divide by w:
c_world = inverse(projection * view) * vec4(c_ndc, 1.0);
c_world = c_world / c_world.w; // perspective division
While I wrote this in GLSL syntax, this is meant as pseudocode only. You can do it in the shader, but that means that this has to be calculated many times (depending on which shader stage you put this into). It is typically much faster to at least pre-calculate that inverted matrix on the CPU.
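If you do want to do it in the shader, a vertex shader along these lines would work as a sketch (attribute and uniform names are illustrative); it unprojects the 8 corners of the frustum you want to visualize and re-projects them with the camera you are rendering with:
attribute vec3 ndcCorner;          // one of the 8 NDC cube corners, each component -1 or 1
uniform mat4 invFrustumViewProj;   // inverse(projection * view) of the frustum to draw
uniform mat4 cameraViewProj;       // view-projection of the camera doing the drawing

void main()
{
    vec4 world = invFrustumViewProj * vec4(ndcCorner, 1.0);
    world /= world.w;                       // perspective division back to world space
    gl_Position = cameraViewProj * world;
}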
After some help, I want to map a texture onto a circle, as you can see below.
I want to do it in such a way that the centre of the circle starts at the shared point of the triangles.
The triangles can change in size and number and will span varying angles, e.g. 45, 68, or 250 degrees, so only the part of the texture covered by the triangles can be seen.
It's basically a one-to-one mapping: shift the image to the left and you see only the part where there are triangles.
I'm not sure what this is called or what to google for; can anyone make some suggestions or point me to relevant information?
I was thinking I would have to generate the texture coordinates on the fly to select the relevant part, but it feels like I should be able to do a one-to-one mapping, which would be simpler than calculating triangles on the texture to map to the OpenGL triangles.
Generating texture coordinates for this isn't difficult. Each point of the polygon corresponds to a certain angle, so the i-th point's angle will be i*2*pi/N, where N is the order of the regular polygon (its number of sides). Then you can use the following to evaluate each point's texture coordinates:
texX = (cos(i*2*pi/N)+1)/2
texY = (sin(i*2*pi/N)+1)/2
Well, and the center point has (0.5, 0.5).
It may be even simpler to generate coordinates in the shader, if you have one specially for this:
I assume you get pos, the vertex position. It depends on how you store the polygon vertices, but let the center be (0,0) and the other points range from (-1,-1) to (1,1). Then pos can simply be used as texture coordinates with an offset:
vec2 texCoords = (pos + vec2(1,1))*0.5;
and pos itself should then be passed to the matrix-vector multiplication as usual.
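For example, a minimal vertex shader along those lines might look like this sketch (it assumes the fan is modelled with its centre at (0,0) and the rim points on the unit circle, as described; names are illustrative):
attribute vec2 pos;        // (0,0) for the centre, (cos a, sin a) for rim points
uniform mat4 mvp;          // model-view-projection matrix
varying vec2 texCoords;

void main()
{
    texCoords = (pos + vec2(1.0, 1.0)) * 0.5;   // map [-1,1] to [0,1]
    gl_Position = mvp * vec4(pos, 0.0, 1.0);
}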
Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped to [0.0, 1.0] in the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value that is convenient for the desired effect).
To illustrate here is a more complex version of the question using a sphere, the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL; I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially on more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find a planar polygon that makes up the shape, which the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world-coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection transformation matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this would correspond to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1,1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by .5 (see the sketch after this list).
Skip the ray tracing altogether: bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
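A GLSL-style pseudocode sketch of the first option (matrix and function names are illustrative), mirroring what gluProject does plus the shift into texture space:
// map a world-space point to [0,1] texture/screen space
vec2 worldToTexture(vec3 worldPoint, mat4 projection, mat4 modelView)
{
    vec4 clip = projection * modelView * vec4(worldPoint, 1.0);
    vec3 ndc = clip.xyz / clip.w;      // perspective division, NDC in [-1,1]
    return ndc.xy * 0.5 + 0.5;         // divide by two and shift by .5 -> [0,1]
}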
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected, and use this as the plane. The quad must be planar (obviously).
Create a plane at the identity transform with dimensions 1x1x0, and map the texture onto points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack.
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity matrix (it should have no depth).
Convert this point from 1x1 space into pixel space by multiplying the point by the number of pixels and rounding. Or start your 2D combining logic here.
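A GLSL-style pseudocode sketch of the last two steps (names are illustrative; the stored inverse matrices applied in order are folded into a single combined inverse here):
// take the world-space intersection point back into the plane's 1x1 identity
// space, then into rounded pixel coordinates on the texture
vec2 hitToPixel(vec3 hitPoint, mat4 combinedInverse, vec2 textureSize)
{
    vec4 local = combinedInverse * vec4(hitPoint, 1.0);  // back on the 1x1 plane, z ~ 0
    return floor(local.xy * textureSize + 0.5);          // 1x1 space -> rounded pixel coords
}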