In the book 3D Graphics for Game Programming by JungHyun Han, on pages 38-39, it is stated that
the basis transformation matrix from e_1, e_2, e_3 to u, v, n is
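(The book's figure is not reproduced here; from the discussion below it is presumably the matrix with u, v, n as its rows:)

$$
\begin{pmatrix} u_x & u_y & u_z \\ v_x & v_y & v_z \\ n_x & n_y & n_z \end{pmatrix}
$$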
However, this contradicts what I know from linear algebra: shouldn't the basis-transformation matrix be the transpose of that matrix?
Note that the author gives a derivation, but I couldn't find where what I know and what the author does diverge.
The code:
Vertex Shader:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform vec3 cameraPosition;
uniform vec3 AT;
uniform vec3 UP;
uniform mat4 worldTrans;
vec3 ep_1 = ( cameraPosition - AT )/ length(cameraPosition - AT);
vec3 ep_2 = ( cross(UP, ep_1) )/length( cross(UP, ep_1 ));
vec3 ep_3 = cross(ep_1, ep_2);
vec4 t_ep_1 = vec4(ep_1, -1.0f);
vec4 t_ep_2 = vec4(ep_2, cameraPosition.y);
vec4 t_ep_3 = vec4(ep_3, cameraPosition.z);
mat4 viewTransform = mat4(t_ep_1, t_ep_2, t_ep_3, vec4(0.0f, 0.0f, 0.0f, 1.0f));
smooth out vec4 fragColor;
void main()
{
gl_Position = transpose(viewTransform) * position;
fragColor = color;
}
Inputs:
GLuint transMat = glGetUniformLocation(m_Program.m_shaderProgram, "worldTrans");
GLfloat dArray[16] = {0.0};
dArray[0] = 1;
dArray[3] = 0.5;
dArray[5] = 1;
dArray[7] = 0.5;
dArray[10] = 1;
dArray[11] = 0;
dArray[15] = 1;
glUniformMatrix4fv(transMat, 1, GL_TRUE, &dArray[0]);
GLuint cameraPosId = glGetUniformLocation(m_Program.m_shaderProgram, "cameraPosition");
GLuint ATId = glGetUniformLocation(m_Program.m_shaderProgram, "AT");
GLuint UPId = glGetUniformLocation(m_Program.m_shaderProgram, "UP");
const float cameraPosition[3] = {2.0f, 0.0f, 0.0f};
const float AT[3] = {1.0f, 0.0f, 0.0f};
const float UP[3] = {0.0f, 0.0f, 1.0f};
glUniform3fv(cameraPosId, 1, cameraPosition);
glUniform3fv(ATId, 1, AT);
glUniform3fv(UPId, 1, UP);
While it's true that a rotation, scaling or deformation can be expressed by a 4x4 matrix in the form
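(The matrix image is missing in the original; presumably the generic affine form with a 3x3 linear part and a translation column is meant:)

$$
\begin{pmatrix} m_{11} & m_{12} & m_{13} & t_x \\ m_{21} & m_{22} & m_{23} & t_y \\ m_{31} & m_{32} & m_{33} & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
$$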
what you are reading about is the so-called "View Transformation".
To achieve this matrix we need two transformations: First, translate to the camera position, and then rotate the camera.
The data to do these transformations are:
Camera position C (Cx,Cy,Cz)
Target position T (Tx,Ty,Tz)
Camera-up normalized UP (Ux, Uy, Uz)
The translation can be expressed by
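(Reconstructing the missing figure; this is the translation that moves the camera position C to the origin:)

$$
T = \begin{pmatrix} 1 & 0 & 0 & -C_x \\ 0 & 1 & 0 & -C_y \\ 0 & 0 & 1 & -C_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
$$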
For the rotation we define:
F = T – C, and after normalizing it we get f = F / ||T-C||, also expressed as f = normalize(F)
s = normalize(cross (f, UP))
u = normalize(cross(s, f))
s, u, -f are the new axes expressed in the old coordinate system.
Thus we can build the rotation matrix for this transformation as
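(Reconstructed from the definitions above, with s, u and -f as the rows:)

$$
R = \begin{pmatrix} s_x & s_y & s_z & 0 \\ u_x & u_y & u_z & 0 \\ -f_x & -f_y & -f_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
$$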
Combining the two transformations into a single matrix we get:
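(Figure reconstructed from the definitions above; this is the matrix glm::lookAt builds:)

$$
V = R\,T = \begin{pmatrix} s_x & s_y & s_z & -s\cdot C \\ u_x & u_y & u_z & -u\cdot C \\ -f_x & -f_y & -f_z & f\cdot C \\ 0 & 0 & 0 & 1 \end{pmatrix}
$$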
Notice that the axis system is the one used by OpenGL, where -f= cross(s,u).
Now, comparing with your GLSL code I see:
Your f(ep_1) vector goes in the opposite direction.
The s(ep_2) vector is calculated as cross(UP, f) instead of cross(f, UP). This is right, because of 1).
Same for u(ep_3).
The building of V cell (0,0) is wrong. It tries to set the proper direction by using that -1.0f.
For the other cells (the fourth components of t_ep_J), the camera position is used. But you forgot to take the dot product of it with s, u, f.
The GLSL initializer mat4(c1,c2,c3,c4) requires column vectors as parameters. You passed row vectors. But afterwards, in main, the use of transpose corrects it.
On a side note, you are not going to calculate the matrix for each vertex, time and time and time... right? Better calculate it on the CPU side and pass it (in column order) as a uniform.
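For reference, a minimal sketch (not tested) of how those columns could be built with the dot products in place, using the names from your shader; the CPU-side equivalent is simply glm::lookAt(cameraPosition, AT, UP):
// e.g. inside main(); an illustration of the points above, not drop-in code
vec3 f = normalize(AT - cameraPosition);   // forward, from the camera towards the target
vec3 s = normalize(cross(f, UP));          // side
vec3 u = cross(s, f);                      // corrected up
// mat4() takes its arguments as columns, so each vec4 below is one column of the view matrix
mat4 viewTransform = mat4(
    vec4( s.x,  u.x, -f.x, 0.0),
    vec4( s.y,  u.y, -f.y, 0.0),
    vec4( s.z,  u.z, -f.z, 0.0),
    vec4(-dot(s, cameraPosition), -dot(u, cameraPosition), dot(f, cameraPosition), 1.0));
gl_Position = viewTransform * position; // no transpose needed with this construction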
Apparently, the change of basis in a vector space does change the vectors in that vector space, and this is not what we want here.
Therefore, the mathematics I was applying is not valid here.
To understand more about why we use the matrix that I have given in the question, please see this question.
I am currently using this glsl file to handle lighting for a 3D object that I am trying to display. I am not sure what values I need to put in for light_position_world, Ls, Ld, La, Ks, Kd, Ka, Ia and fragment_colour. The scene I am trying to illuminate is centered at (427, 385, 89) roughly. I don't need it to be perfect, but I need some values that will let me see my design on screen so that I can manipulate them and learn how this all works. Any additional tips or explanation would be much appreciated. Thanks!
#version 410
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// fixed point light properties
vec3 light_position_world = vec3 (427.029, 385.888, 0);
vec3 Ls = vec3 (1.0f, 0.0f, 0.0f);
vec3 Ld = vec3 (1.0f, 0.0f, 0.0f);
vec3 La = vec3 (1.0f, 0.2f, 0.0f);
// surface reflectance
vec3 Ks = vec3 (1.0f, 1.0f, 1.0f);
vec3 Kd = vec3 (1.0f, 0.8f, 0.72f);
vec3 Ka = vec3 (1.0f, 1.0f, 1.0f);
float specular_exponent = 10.0; // specular 'power'
out vec4 fragment_colour; // final colour of surface
void main () {
// ambient intensity
vec3 Ia = vec3 (0, 0, 0);
// diffuse intensity
// raise light position to eye space
vec3 light_position_eye = light_position_world; //vec3 (view_mat * vec4 (light_position_world, 1.0));
vec3 distance_to_light_eye = light_position_eye - position_eye;
vec3 direction_to_light_eye = normalize (distance_to_light_eye);
float dot_prod = dot (direction_to_light_eye, normal_eye);
dot_prod = max (dot_prod, 0.0);
vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
// specular intensity
vec3 surface_to_viewer_eye = normalize (-position_eye);
// blinn
vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
float specular_factor = pow (dot_prod_specular, specular_exponent);
vec3 Is = Ls * Ks * specular_factor; // final specular intensity
// final colour
fragment_colour = vec4 (255, 25, 25, 0);
}
There are a few problems with your code.
1) Assuming light_position_world is the position of the light in world space, the light is below your scene. So the scene won't be illuminated from above.
2) Assuming *_eye means a coordinate in eye space and *_world is a coordinate in world space, you may not interchange between those spaces by simply assigning vectors. You have to use the view matrix to go from world space to eye space, and its inverse to go from eye space back to world space.
3) The output color of the shader, fragment_colour, is always set to a dark red-ish color. So the compiler will just leave out all the lighting calculations. You have to use something like this: fragment_colour = Ia + Id * material + Is * material, where material is the color of your material - e.g. gray for metal.
It seems like you don't understand the underlying basics. I suggest you read a few articles or tutorials about lighting and transformation/maths in OpenGL.
If you have consumed a fair bit of literature, experiment with your code. Try out what different calculations do and how they influence the end product. You won't get 100% physically accurate lighting anyway, so there's nothing to go wrong.
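To make that concrete, a minimal sketch of only the lines that would change (the light height used here is an arbitrary guess, just to put the light above the scene; note that Kd/Ks are already folded into Id/Is in your shader):
vec3 light_position_world = vec3 (427.0, 385.0, 300.0);                        // somewhere above the scene centre (guessed value)
vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));  // world -> eye space
vec3 Ia = La * Ka;                                                             // ambient term instead of vec3(0)
fragment_colour = vec4 (Ia + Id + Is, 1.0);                                    // combine the computed terms instead of a constant colour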
I'm implementing SSAO in OpenGL, following this tutorial: John Chapman SSAO
Basically, the technique described uses a hemispheric kernel which is oriented along the fragment's normal. The view-space z position of the sample is then compared to its screen-space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up inside geometry, so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation, with two small differences:
I'm not using a noise texture to rotate my kernel, so I have banding artifacts; that's fine for now.
I don't have access to a buffer with per-pixel normals, so I have to compute my normal and TBN matrix using only the depth buffer.
The algorithm seems to be working fine; I can see the fragments being occluded, BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK, but something must be wrong, as my kernel doesn't seem to be properly aligned, causing samples to end up in the faces.
Screenshots are with a kernel of 8 samples and a radius of .1. The first is only the result of the SSAO pass and the second one is the debug render of the generated normals.
Here is the code for the function that computes the Normal and TBN Matrix
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 x = vec3(uv.x, 0., ld);
vec3 y = vec3(0., uv.y, ld);
x = dFdx(x);
y = dFdy(y);
x = normalize(x);
y = normalize(y);
vec3 normal = normalize(cross(x, y));
return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"
in vec2 vertTexcoord;
uniform sampler2D depthTex;
const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];
// Kernel Radius in view space (meters)
const float KERNEL_RADIUS = .1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;
out vec4 FragColor;
void main()
{
// Get the current depth of the current pixel from the depth buffer (stored in the red channel)
float originDepth = texture(depthTex, vertTexcoord).r;
// Debug linear depth. Depth buffer is in the range [0, 1];
float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);
// Compute the view space position of this point from its depth value
vec4 viewport = vec4(0,0,1,1);
vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse, viewport, vertTexcoord, originDepth);
mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
vec3 normal = lookAt[2];
float occlusion = 0.;
for (int i=0; i<MAX_KERNEL_SIZE; i++)
{
// We align the Kernel Hemisphere on the fragment normal by multiplying all samples by the TBN
vec3 samplePosition = lookAt * gKernel[i].xyz;
// We want the sample position in View Space and we scale it with the kernel radius
samplePosition = originPosition + samplePosition * KERNEL_RADIUS;
// Now we need to get sample position in screen space
vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
sampleOffset = cameraProjectionMatrix * sampleOffset;
sampleOffset.xyz /= sampleOffset.w;
// Now to get the depth buffer value at the projected sample position
sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;
// Now can get the linear depth of the sample
float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);
// Now we need to do a range check to make sure that object
// outside of the kernel radius are not taken into account
float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;
// If the fragment depth is in front so it's occluding
occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 a = vec3(uv, ld);
vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
//x = dFdx(x);
//y = dFdy(y);
//x = normalize(x);
//y = normalize(y);
vec3 normal = normalize(cross(x - a, y - a));
vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, normal);
return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems. You are using texture coordinates in combination with the linear depth. You can imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But let's then imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference from the neighbouring pixel, the surface far away from the camera will have a greater linear depth difference over one pixel than the surface close to the camera. But the uv.x / uv.y derivative will have the same value. That means that you will get different normals depending on the distance from the camera.
The solution is to calculate the view coordinate and use the derivative of that to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
float ld = -getLinearDepth(depthTex, uv);
/// I assume ld is negative for fragments in front of the camera
/// not sure how getLinearDepth is implemented
vec3 z_scaled_view = (view / view.z) * ld;
return z_scaled_view;
}
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
vec3 view_pos = viewFromDepth(depthTex, uv, view);
vec3 view_normal = normalize(cross(dFdx(view_pos), dFdy(view_pos)));
vec3 first_axis = cross(view_normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, view_normal);
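// note: this puts the normal in the first column; the SSAO shader above reads it from lookAt[2], so adjust the column order or the lookup accordingly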
return mat3(view_normal, normalize(first_axis), normalize(second_axis));
}
I am trying to implement a simple projective texture mapping approach by using shaders in OpenGL 3+. While there are some examples on the web I am having trouble creating a working example with shaders.
I am actually planning on using two shaders, one which does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
I also don't think that my projective texture mapping shaders are correct, since the second time I render a cube it shows up black. Further, I tried to debug using colors, and only the t component in the shader seems to be non-zero (so the cube appears green). I am overriding the texColor in the fragment shader below just for debugging purposes.
VertexShader
#version 330
uniform mat4 TexGenMat;
uniform mat4 InvViewMat;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;
layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;
void main()
{
vNormal = (N * vec4(inNormal, 0.0)).xyz;
vec4 posEye = MV * vec4(inPosition, 1.0);
vec4 posWorld = InvViewMat * posEye;
projCoords = TexGenMat * posWorld;
// only needed for specular component
// currently not used
eyeVec = -posEye.xyz;
gl_Position = P * MV * vec4(inPosition, 1.0);
}
FragmentShader
#version 330
uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;
in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;
out vec4 outputColor;
struct DirectionalLight
{
vec3 vColor;
vec3 vDirection;
float fAmbientIntensity;
};
uniform DirectionalLight sunLight;
void main (void)
{
// supress the reverse projection
if (projCoords.q > 0.0)
{
vec2 finalCoords = projCoords.st / projCoords.q;
vec4 vTexColor = texture(gSampler, finalCoords);
// only t has non-zero values..why?
vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
//vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
}
}
Creation of TexGen Matrix
biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
0, 0.5f, 0, 0.5f,
0, 0, 0.5f, 0.5f,
0, 0, 0, 1);
// 4:3 perspective with 45 fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin, // projector origin
projectorTarget, // project on object at origin
glm::vec3(0.0f, 1.0f, 0.0f) // Y axis is up
);
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
Render Cube Again
It is also unclear to me what the modelview of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal camera view? Currently the cube is rendered black (or green if debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?
mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);
Switch between main scene camera and slide projector camera
if (useMainCam)
{
mCurrent = glm::mat4(1.0f);
mModelView = mModelView*mCurrent;
mProjection = *pipeline->getProjectionMatrix();
}
else
{
mModelView = projectorV;
mProjection = projectorP;
}
I have solved the problem. One issue I had is that I confused the matrices in the two camera systems (world and projective texture camera). Now when I set the uniforms for the projective texture mapping part I use the correct matrices for the MVP values - the same ones I use for the world scene.
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
Further, the invViewMatrix is just the inverse of the view matrix, not the model-view (this didn't change the behaviour in my case, since the model was identity, but it is wrong). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object, I must make sure that the current shader program is the one for projective textures, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.
For the fragment shader I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client side GL_CLAMP_TO_EDGE) and I am also using the default object texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture since it is not needed in my case):
void main (void)
{
vec2 finalCoords = projCoords.st / projCoords.q;
vec4 vTexColor = texture(gSampler, texCoord);
vec4 vProjTexColor = texture(projMap, finalCoords);
//vec4 vProjTexColor = textureProj(projMap, projCoords);
float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
// supress the reverse projection
if (projCoords.q > 0.0)
{
// CLAMP PROJECTIVE TEXTURE (for some reason gl_clamp did not work...)
if(projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
//outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
outputColor = vProjTexColor*vColor;
else
outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
}
else
{
outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
}
}
If you are stuck and for some reason cannot get the shaders to work, you can check out an example in "OpenGL 4.0 Shading Language Cookbook" (textures chapter) - I actually missed this until I got it working by myself.
In addition to all of the above, a great help for checking whether the algorithm is working correctly was to draw the frustum (as a wireframe) for the projector camera. I used a shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:
#version 330
// input vertex data
layout(location = 0) in vec3 vp;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
/*The transformed clip space position c of a
world space vertex v is obtained by transforming
v with the product of the projection matrix P
and the modelview matrix MV
c = P MV v
So, if we could solve for v, then we could
generate vertex positions by plugging in clip
space positions. For your frustum, one line
would be between the clip space positions
(-1,-1,near) and (-1,-1,far),
the lower left edge of the frustum, for example.
NB: If you would like to mix normalized device
coords (x,y) and eye space coords (near,far),
you need an additional step here. Modify your
clip position as follows
c' = (c.x * c.z, c.y * c.z, c.z, c.z)
otherwise you would need to supply both the z
and w for c, which might be inconvenient. Simply
use c' instead of c below.
To solve for v, multiply both sides of the equation above with (P MV)^-1.
This gives
(P MV)^-1 c = v
which is equivalent to
MV^-1 P^-1 c = v
P^-1 is given by
|(r-l)/(2n) 0 0 (r+l)/(2n) |
| 0 (t-b)/(2n) 0 (t+b)/(2n) |
| 0 0 0 -1 |
| 0 0 -(f-n)/(2fn) (f+n)/(2fn)|
where l, r, t, b, n, and f are the parameters in the glFrustum() call.
If you don't want to fool with inverting the
model matrix, the info you already have can be
used instead: the forward, right, and up
vectors, in addition to the eye position.
First, go from clip space to eye space:
e = P^-1 c
Next go from eye space to world space
v = eyePos - forward*e.z + right*e.x + up*e.y
assuming x = right, y = up, and -z = forward.
*/
vec4 fVp = invMV * invP * vec4(vp, 1.0);
gl_Position = P * MV * fVp;
}
The uniforms are used like this (make sure you use the right matrices):
// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
To get the input vertices needed for the frustum's vertex shader you can do the following to get the coordinates (then just add them to your vertex array):
glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right
glm::vec3 frustum_coords[36] = {
// near
ntl, nbl, ntr, // 1 triangle
ntr, nbl, nbr,
// right
nbr, ftr, ntr,
ftr, nbr, fbr,
// left
nbl, ftl, ntl,
ftl, nbl, fbl,
// far
ftl, fbl, fbr,
fbr, ftr, ftl,
//bottom
nbl, fbr, fbl,
fbr, nbl, nbr,
//top
ntl, ftr, ftl,
ftr, ntl, ntr
};
After all is said and done, it's nice to see how it looks:
As you can see I applied two projective textures, one of a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.
I'm working on an implementation of normal mapping, calculating the tangent vectors via the ASSIMP library.
The normal mapping seems to work perfectly on objects that have a model matrix close to the identity matrix. As soon as I start translating and scaling, my lighting seems off. As you can see in the picture, the normal mapping works perfectly on the container cube, but the lighting fails on the large floor (the direction of the specular light should be towards the player, not towards the container).
I get the feeling it somehow has something to do with the position of the light (currently traversing from x = -10 to x = 10 over time) not being properly included in the calculations once I start changing the model matrix (via translations/scaling). I'm posting all the relevant code and hope you guys can somehow see something I'm missing, since I've been staring at my code for days.
Vertex shader
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 color;
layout(location = 4) in vec2 texCoord;
// fragment pass through
out vec3 Position;
out vec3 Normal;
out vec3 Tangent;
out vec3 Color;
out vec2 TexCoord;
out vec3 TangentSurface2Light;
out vec3 TangentSurface2View;
uniform vec3 lightPos;
// vertex transformation
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
mat3 normalMatrix = transpose(mat3(inverse(view * model)));
Position = vec3((view * model) * vec4(position, 1.0));
Normal = normalMatrix * normal;
Tangent = tangent;
Color = color;
TexCoord = texCoord;
gl_Position = projection * view * model * vec4(position, 1.0);
vec3 light = vec3(view * vec4(lightPos, 1.0));
vec3 n = normalize(normalMatrix * normal);
vec3 t = normalize(normalMatrix * tangent);
vec3 b = cross(n, t);
mat3 mat = mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z);
vec3 vector = normalize(light - Position);
TangentSurface2Light = mat * vector;
vector = normalize(-Position);
TangentSurface2View = mat * vector;
}
Fragment shader
#version 330
in vec3 Position;
in vec3 Normal;
in vec3 Tangent;
in vec3 Color;
in vec2 TexCoord;
in vec3 TangentSurface2Light;
in vec3 TangentSurface2View;
out vec4 outColor;
uniform vec3 lightPos;
uniform mat4 view;
uniform sampler2D texture0;
uniform sampler2D texture_normal; // normal
uniform float repeatFactor = 1;
void main()
{
vec4 texColor = texture(texture0, TexCoord * repeatFactor);
vec3 light = vec3(view * vec4(lightPos, 1.0));
float dist = length(light - Position);
float att = 1.0 / (1.0 + 0.01 * dist + 0.001 * dist * dist);
// Ambient
vec4 ambient = vec4(0.2);
// Diffuse
vec3 surface2light = normalize(TangentSurface2Light);
vec3 norm = normalize(texture(texture_normal, TexCoord * repeatFactor).xyz * 2.0 - 1.0);
float contribution = max(dot(norm, surface2light), 0.0);
vec4 diffuse = contribution * vec4(0.8);
// Specular
vec3 surf2view = normalize(TangentSurface2View);
vec3 reflection = reflect(-surface2light, norm); // reflection vector
float specContribution = pow(max(dot(surf2view, reflection), 0.0), 32);
vec4 specular = vec4(0.6) * specContribution;
outColor = (ambient + (diffuse * att)+ (specular * pow(att, 3))) * texColor;
}
OpenGL Drawing Code
void Render()
{
...
glm::mat4 view, projection; // Model will be done via MatrixStack
view = glm::lookAt(position, position + direction, up); // cam pos, look at (eye pos), up vec
projection = glm::perspective(45.0f, (float)width/(float)height, 0.1f, 1000.0f);
glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
// Lighting
lightPos.x = 0.0 + sin(time / 125) * 10;
glUniform3f(glGetUniformLocation(basicShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
// Objects (use bump mapping on this cube)
bumpShader.Use();
glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniform3f(glGetUniformLocation(bumpShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
MatrixStack::LoadIdentity();
MatrixStack::Scale(2);
MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container"));
glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "img"), 0);
glActiveTexture(GL_TEXTURE1); // Normal map
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container_normal"));
glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "normalMap"), 1);
glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 1);
cubeNormal.Draw();
MatrixStack::LoadIdentity();
MatrixStack::Translate(glm::vec3(0.0f, -22.0f, 0.0f));
MatrixStack::Scale(glm::vec3(200.0f, 20.0f, 200.0f));
MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor"));
glActiveTexture(GL_TEXTURE1); // Normal map
glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor_normal"));
glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 100);
cubeNormal.Draw();
MatrixStack::LoadIdentity();
glActiveTexture(GL_TEXTURE0);
...
}
EDIT
I now loaded my objects using the ASSIMP library with the 'aiProcess_CalcTangentSpace' flag enabled and changed my shaders accordingly to adapt to the new changes. Since ASSIMP now automatically calculates the correct tangent vectors I should have valid tangent vectors and my problem should be solved (as noted by Nicol Bolas), but I still have the same issue with the specular lighting acting strange and the diffuse lighting not really showing up. I guess there is still something else that is not working correctly. I unmarked your answer as the correct answer Nicol Bolas (for now) and updated my code accordingly since there is still something I'm missing.
It probably has something to do with translating. As soon as I add a translate (-22.0f in y direction) to the Model matrix it reacts with strange lighting. As long as the floor (which is actually a cube) has no translation the lighting looks fine.
calculating the tangent vectors in the vertex shader
Well there's your problem. That's not possible for an arbitrary surface.
The tangent and bitangent are not arbitrary vectors that are perpendicular to one another. They are model-space direction vectors that point in the direction of the texture coordinates. The tangent points in the direction of the S texture coordinate, and the bitangent points in the direction of the T texture coordinate (or U and V for the tex coords, if you prefer).
This effectively computes the orientation of the texture relative to each vertex on the surface. You need this orientation, because the way the texture is mapped to the surface matters when you want to make sense of a tangent-space vector.
Remember: tangent-space is the space perpendicular to a surface. But you need to know how that surface is mapped to the object in order to know where "up" is, for example. Take a square surface. You could map a texture so that the +Y part of the square is oriented along the +T direction of the texture. Or it could be along the +X of the square. You could even map it so that the texture is distorted, or rotated at an arbitrary angle.
The tangent and bitangent vectors are intended to correct for this mapping. They point in the S and T directions in model space. So, combined with the normal, they form a transformation matrix to transform from tangent-space into whatever space the 3 vectors are in (you generally transform the NBT to camera space or whatever space you use for lighting before using them).
You cannot compute them by just taking the normal and crossing it with some arbitrary vector. That produces a vector perpendicular to the normal, but not the right one.
In order to correctly compute the tangent/bitangent, you need access to more than one vertex. You need to be able to see how the texture coordinates change over the surface of the mesh, which is how you compute the S and T directions relative to the mesh.
Vertex shaders cannot access more than one vertex. Geometry shaders can't (generally) access enough vertices to do this either. Compute the tangent/bitangent off-line on the CPU.
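For completeness, a sketch of the standard per-triangle formula (not part of the answer above): given a triangle with positions $P_0, P_1, P_2$, edges $E_1 = P_1 - P_0$, $E_2 = P_2 - P_0$, and texture-coordinate deltas $(\Delta u_1, \Delta v_1)$, $(\Delta u_2, \Delta v_2)$, the tangent and bitangent are

$$
T = \frac{\Delta v_2\,E_1 - \Delta v_1\,E_2}{\Delta u_1 \Delta v_2 - \Delta u_2 \Delta v_1},\qquad
B = \frac{\Delta u_1\,E_2 - \Delta u_2\,E_1}{\Delta u_1 \Delta v_2 - \Delta u_2 \Delta v_1}
$$

The per-triangle results are then accumulated per vertex and orthonormalized against the normal, which is roughly what ASSIMP's aiProcess_CalcTangentSpace does for you (as used in the question's edit).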
mat3 mat = mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z);
Is wrong. In order to use the TBN matrix correctly, you must transpose it, like so:
mat3 mat = transpose(mat3(t.x, b.x ,n.x, t.y, b.y ,n.y, t.z, b.z ,n.z));
then use it to transform your light and view vectors into tangent space. Alternatively (and less efficiently), pass the untransposed TBN matrix to the fragment shader, and use it to transform the sampled normal into view space. It's an easy thing to miss, but very important. See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/ for more info.
On a side note, a minor optimisation you can do for your vertex shader is to calculate the normal matrix on the CPU, per mesh, as it will be the same for all vertices of the mesh, and so reduce unnecessary calculations.
I'm following this tutorial to learn something more about OpenGL and in particular point sprites. But I'm stuck on one of the exercises at the end of the page:
Try to rotate the point sprites 45 degrees by changing the fragment shader.
There are no hints about this sort of thing in the chapter, nor in the previous ones. And I didn't find any documentation on how to do it. These are my vertex and fragment shaders:
Vertex Shader
#version 140
attribute vec2 coord2d;
varying vec4 f_color;
uniform float offset_x;
uniform float scale_x;
uniform float point_size;
void main(void) {
gl_Position = vec4((coord2d.x + offset_x) * scale_x, coord2d.y, 0.0, 1.0);
f_color = vec4(coord2d.xy / 2.0 + 0.5, 1.0, 1.0);
gl_PointSize = point_size;
}
Fragment Shader
#version 140
varying vec4 f_color;
uniform sampler2D texture;
void main(void) {
gl_FragColor = texture2D(texture, gl_PointCoord) * f_color;
}
I thought about using a 2x2 matrix in the FS to rotate the gl_PointCoord, but I have no idea how to fill the matrix to accomplish it. Should I pass it directly to the FS as a uniform?
The traditional method is to pass a matrix to the shader, whether vertex or fragment. If you don't know how to fill in a rotation matrix, Google and Wikipedia can help.
The main thing that you're going to run into is the simple fact that a 2D rotation is not enough. gl_PointCoord goes from [0, 1]. A pure rotation matrix rotates around the origin, which is the bottom-left in point-coord space. So you need more than a pure rotation matrix.
You need a 3x3 matrix, which has part rotation and part translation. This matrix should be generated as follows (using GLM for math stuff):
glm::mat4 currMat(1.0f);
currMat = glm::translate(currMat, glm::vec3(0.5f, 0.5f, 0.0f));
currMat = glm::rotate(currMat, angle, glm::vec3(0.0f, 0.0f, 1.0f));
currMat = glm::translate(currMat, glm::vec3(-0.5f, -0.5f, 0.0f));
You then pass currMat to the shader as a 4x4 matrix. Your shader does this:
vec2 texCoord = (rotMatrix * vec4(gl_PointCoord, 0, 1)).xy;
gl_FragColor = texture2D(texture, texCoord) * f_color;
I'll leave it as an exercise for you as to how to move the translation from the fourth column into the third, and how to pass it as a 3x3 matrix. Of course, in that case, you'll do vec3(gl_PointCoord, 1) for the matrix multiply.
I was stuck on the same problem too, but I found a tutorial that explains how to perform a 2D texture rotation in the fragment shader itself, passing only the rotation value (vRotation).
#version 130
uniform sampler2D tex;
varying float vRotation;
void main(void)
{
float mid = 0.5;
vec2 rotated = vec2(cos(vRotation) * (gl_PointCoord.x - mid) + sin(vRotation) * (gl_PointCoord.y - mid) + mid,
cos(vRotation) * (gl_PointCoord.y - mid) - sin(vRotation) * (gl_PointCoord.x - mid) + mid);
vec4 rotatedTexture=texture2D(tex, rotated);
gl_FragColor = gl_Color * rotatedTexture;
}
Maybe this method is slow, but it is only to show that you have an alternative for performing a 2D texture rotation inside the fragment shader instead of passing a matrix.
Note: vRotation should be in radians.
Cheers,
You're right - a 2x2 rotation matrix will do what you want.
This page: http://www.cg.info.hiroshima-cu.ac.jp/~miyazaki/knowledge/teche31.html shows how to compute the elements. Note that you will be rotating the texture coordinates, not the vertex positions - the result will probably not be what you're expecting - it will rotate around the 0,0 texture coordinate, for example.
You may also need to multiply the point_size by 2 and shrink the gl_PointCoord by 2 to ensure the whole texture fits into the point sprite when it's rotated. But do that as a second change. Note that a straight scale of texture coordinates moves them towards the texture coordinate origin, not the middle of the sprite.
If you use a higher dimension matrix (3x3) then you will be able to combine the offset, scale and rotation into one operation.
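For example, a rough, untested sketch of such a combined matrix built directly in the fragment shader from an assumed uniform float angle (texture and f_color are the names from the question's shader; a scale factor could be folded into c and s in the same way):
float c = cos(angle);
float s = sin(angle);
// translate(0.5, 0.5) * rotate(angle) * translate(-0.5, -0.5), written as a column-major mat3
mat3 m = mat3( c,                     s,                     0.0,
              -s,                     c,                     0.0,
               0.5 - 0.5*c + 0.5*s,   0.5 - 0.5*s - 0.5*c,   1.0);
vec2 texCoord = (m * vec3(gl_PointCoord, 1.0)).xy;
gl_FragColor = texture2D(texture, texCoord) * f_color;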