As a test, I created a simple quad. Here are its attributes:
Vertex vertices[] =
{
// Positions Normals
{vec3(-1,-1, 0), vec3(-1,-1, 1)}, // v0
{vec3( 1,-1, 0), vec3( 1,-1, 1)}, // v1
{vec3(-1, 1, 0), vec3(-1, 1, 1)}, // v2
{vec3( 1, 1, 0), vec3( 1, 1, 1)}, // v3
};
And I placed it in world space at (0.0, 0.0, -9.5). Then I put my point light at (0.0, 0.0, -8.0). My camera is at the origin (0.0, 0.0, 0.0). When I run my program, this works as expected:
But then, when I replace this quad with 9 scaled-down quads, all placed at -9.5 on Z (in other words, all coplanar), my diffuse lighting gets a little weird.
It looks like the corners are showing too much lighting, breaking the nice diffuse circle that we see on a regular quad.
Here is my shader program:
precision mediump int;
precision mediump float;
varying vec3 v_position;
varying vec3 v_normal;
#if defined(VERTEX)
uniform mat4 u_mvpMatrix;
uniform mat4 u_mvMatrix;
uniform mat3 u_normalMatrix;
attribute vec4 a_position;
attribute vec3 a_normal;
void main()
{
vec4 position = u_mvMatrix * a_position;
v_position = position.xyz / position.w;
v_normal = normalize(u_normalMatrix * a_normal);
gl_Position = u_mvpMatrix * a_position;
}
#endif // VERTEX
#if defined(FRAGMENT)
uniform vec3 u_pointLightPosition;
void main()"
{
vec3 viewDir = normalize(-v_position);
vec3 normal = normalize(v_normal);
vec3 lightPosition = u_pointLightPosition - v_position;
vec3 pointLightDir = normalize(lightPosition);
float distance = length(lightPosition);
float pointLightAttenuation = 1.0 / (1.0 + (0.25 * distance * distance));
float diffuseTerm = max(dot(pointLightDir, normal), 0.15);
gl_FragColor = vec4(diffuseTerm * pointLightAttenuation);
}
#endif // FRAGMENT
My uniforms are uploaded as follows (I'm using GLM):
const mat4 &view_matrix = getViewMatrix();
mat4 mv_matrix = view_matrix * getModelMatrix();
mat4 mvp_matrix = getProjectionMatrix() * mv_matrix;
mat3 normal_matrix = inverseTranspose(mat3(mv_matrix));
vec3 pointLightPos = vec3(view_matrix * vec4(getPointLightPos(), 1.0f));
glUniformMatrix4fv( mvpMatrixUniformID, 1, GL_FALSE, (GLfloat*)&mvp_matrix);
glUniformMatrix4fv( mvMatrixUniformID, 1, GL_FALSE, (GLfloat*)&mv_matrix);
glUniformMatrix3fv(normalMatrixUniformID, 1, GL_FALSE, (GLfloat*)&normal_matrix);
glUniform3f(pointLightPosUniformID, pointLightPos.x, pointLightPos.y, pointLightPos.z);
Am I doing anything wrong?
Thanks!
Without going too much into your code, I think everything is working just fine. I see a very similar result with a quick Blender setup:
The issue is that the interpolation of the normal doesn't produce a spherical bump.
It ends up being a patch like this (I simply subdivided a smooth-shaded cube)...
If you want a more spherical bump, you could generate the normals implicitly in a fragment shader (for example as is done here (bottom image)), use a normal map, or use more highly tessellated geometry such as an actual sphere.
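To illustrate the fragment-shader idea, here is a minimal sketch of my own (not the code from the linked page): it fakes a spherical normal from a hypothetical v_local varying that you would pass from the vertex shader, holding the quad-local xy position in [-1, 1]:
precision mediump float;
// Hypothetical varying: the quad-local xy position in [-1, 1],
// passed through from the vertex shader.
varying vec2 v_local;
void main()
{
    // Treat the quad as the front cap of a unit sphere: reconstruct z
    // from x and y inside the unit circle, use a flat normal outside it.
    float r2 = dot(v_local, v_local);
    vec3 normal = (r2 < 1.0) ? normalize(vec3(v_local, sqrt(1.0 - r2)))
                             : vec3(0.0, 0.0, 1.0);
    // Visualize the implicit normal; in practice you would feed it into
    // the diffuse term instead of the interpolated v_normal.
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}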
Related
I'm currently working on an OpenGL project and I'm trying to get shadow mapping to work properly. I got to a point where the shadow map gets rendered into a texture, but it doesn't seem to get applied to the scenery when rendered. Here are the most important bits of my code:
The shadow map vertex shader, basically a simple pass-through shader (it also does some additional stuff like normals, but that shouldn't distract you); it just transforms the vertices so they're seen from the perspective of the light (it's a directional light, but since we need to assume a position, it's basically a point far away):
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
uniform mat4 modelMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Shadow map vertex shader.
void main() {
mat4 mvp = depthProjectionMatrix * depthViewMatrix * modelMatrix;
gl_Position = mvp * vec4(v_position, 1.0);
// Passing attributes on to the fragment shader
f_texture = v_texture;
f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
f_color = v_color;
}
The shadow map fragment shader that writes the depth value to a texture:
#version 430 core
layout(location = 0) out float fragmentDepth;
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
uniform vec3 lightDirection;
uniform sampler2DArray texSampler;
// Shadow map fragment shader.
void main() {
fragmentDepth = gl_FragCoord.z;
}
The vertex shader that actually renders the scene, but also calculates the position of the current vertex from the light's point of view (shadowCoord) to compare against the depth texture; it also applies a bias matrix, since the coordinates aren't in the correct [0, 1] interval for sampling:
#version 430 core
layout(location = 0) in vec3 v_position;
layout(location = 1) in vec3 v_normal;
layout(location = 2) in vec3 v_texture;
layout(location = 3) in vec4 v_color;
out vec3 f_texture;
out vec3 f_normal;
out vec4 f_color;
out vec3 f_shadowCoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 depthViewMatrix;
uniform mat4 depthProjectionMatrix;
// Simple vertex shader.
void main() {
mat4 mvp = projectionMatrix * viewMatrix * modelMatrix;
gl_Position = mvp * vec4(v_position, 1.0);
// This bias matrix adjusts the projection of a given vertex on a texture to be within 0 and 1 for proper sampling
mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.5,
0.0, 0.5, 0.0, 0.5,
0.0, 0.0, 0.5, 0.5,
0.0, 0.0, 0.0, 1.0);
mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
mat4 biasedDMVP = depthBias * depthMVP;
// Passing attributes on to the fragment shader
f_shadowCoord = (biasedDMVP * vec4(v_position, 1.0)).xyz;
f_texture = v_texture;
f_normal = (transpose(inverse(modelMatrix)) * vec4(v_normal, 1.0)).xyz;
f_color = v_color;
}
The fragment shader that applies textures from a texture array and receives the depth texture (uniform sampler2D shadowMap) and checks if a fragment is behind something:
#version 430 core
in vec3 f_texture;
in vec3 f_normal;
in vec4 f_color;
in vec3 f_shadowCoord;
out vec4 color;
uniform vec3 lightDirection;
uniform sampler2D shadowMap;
uniform sampler2DArray tileTextureArray;
// Very basic fragment shader.
void main() {
float visibility = 1.0;
if (texture(shadowMap, f_shadowCoord.xy).z < f_shadowCoord.z) {
visibility = 0.5;
}
color = texture(tileTextureArray, f_texture) * visibility;
}
And finally: the function that renders multiple chunks to generate the shadow map and then renders the scene with the shadow map applied:
// Generating the shadow map
glBindFramebuffer(GL_FRAMEBUFFER, m_framebuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
m_shadowShader->bind();
glViewport(0, 0, 1024, 1024);
glDisable(GL_CULL_FACE);
glm::vec3 lightDir = glm::vec3(1.0f, -0.5f, 1.0f);
glm::vec3 sunPosition = FPSCamera::getPosition() - lightDir * 64.0f;
glm::mat4 depthViewMatrix = glm::lookAt(sunPosition, FPSCamera::getPosition(), glm::vec3(0, 1, 0));
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-100.0f, 100.0f, -100.0f, 100.0f, 0.1f, 800.0f);
m_shadowShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_shadowShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
for (Chunk *chunk : m_chunks) {
m_shadowShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
chunk->drawElements();
}
m_shadowShader->unbind();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Normal draw call
m_chunkShader->bind();
glEnable(GL_CULL_FACE);
glViewport(0, 0, Window::getWidth(), Window::getHeight());
glm::mat4 viewMatrix = FPSCamera::getViewMatrix();
glm::mat4 projectionMatrix = FPSCamera::getProjectionMatrix();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glActiveTexture(GL_TEXTURE1);
m_textures->bind();
m_chunkShader->setUniformMatrix("depthViewMatrix", depthViewMatrix);
m_chunkShader->setUniformMatrix("depthProjectionMatrix", depthProjectionMatrix);
m_chunkShader->setUniformMatrix("viewMatrix", viewMatrix);
m_chunkShader->setUniformMatrix("projectionMatrix", projectionMatrix);
m_chunkShader->setUniformVec3("lightDirection", lightDir);
m_chunkShader->setUniformInteger("shadowMap", 0);
m_chunkShader->setUniformInteger("tileTextureArray", 1);
for (Chunk *chunk : m_chunks) {
m_chunkShader->setUniformMatrix("modelMatrix", chunk->getModelMatrix());
chunk->drawElements();
}
Most of the code should be self-explanatory: I'm binding an FBO with a texture attached, we do a normal rendering call into the framebuffer, it gets rendered into a texture, and then I'm trying to pass it into the shader for normal rendering. I've tested whether the texture gets properly generated, and it does: see the generated shadow map here.
However, when rendering the scene, all I see is this.
No shadows applied, visibility is 1.0 everywhere. I also use a debug context, which works properly and logs errors when there are any, but it reports nothing, no warnings or errors, so I'm the one doing something terribly wrong here. I'm on OpenGL 4.3, by the way.
Hopefully one of you can help me out on this; I've never gotten shadow maps to work before, and this is the closest I've ever come, lol. Thanks in advance.
Commonly a mat4 OpenGL transformation matrix looks like this:
( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )
So your depthBias matrix, which you use to convert from normalized device coordinates (in range [-1, 1]) to texture coordinates (in range [0, 1]), should look like this:
mat4 depthBias = mat4(0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0);
or this:
mat4 depthBias = mat4(
vec4( 0.5, 0.0, 0.0, 0.0 ),
vec4( 0.0, 0.5, 0.0, 0.0 ),
vec4( 0.0, 0.0, 0.5, 0.0 ),
vec4( 0.5, 0.5, 0.5, 1.0 ) );
After you have transformed a vertex position by the model matrix, the view matrix and the projection matrix, the vertex position is in clip space (homogeneous coordinates). You have to convert from clip space to normalized device coordinates (Cartesian coordinates in range [-1, 1]). This can be done by dividing by the w component of the homogeneous coordinate:
mat4 depthMVP = depthProjectionMatrix * depthViewMatrix * modelMatrix;
vec4 clipPos = depthMVP * vec4(v_position, 1.0);
vec4 ndcPos = vec4(clipPos.xyz / clipPos.w, 1.0);
f_shadowCoord = (depthBias * ndcPos).xyz;
A depth texture has one channel only. If you read data from the depth texture, then the data is contained in the x (or r) component of the vector.
Adapt the fragment shader code like this:
if ( texture(shadowMap, f_shadowCoord.xy).x < f_shadowCoord.z)
visibility = 0.5;
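As a side note, once the comparison works you may still see self-shadowing artifacts ("shadow acne"); subtracting a small depth bias in the comparison is the usual remedy. A minimal sketch, where the 0.005 offset is an arbitrary starting value to tune, not something from your code:
// Sketch: a small bias before the comparison reduces shadow acne.
float bias = 0.005;
if (texture(shadowMap, f_shadowCoord.xy).x < f_shadowCoord.z - bias)
    visibility = 0.5;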
The Image Format specification of the Khronos Group says:
Image formats do not have to store each component. When the shader
samples such a texture, it will still resolve to a 4-value RGBA
vector. The components not stored by the image format are filled in
automatically. Zeros are used if R, G, or B is missing, while a
missing Alpha always resolves to 1.
see further:
Data Type (GLSL)
GLSL Programming/Vector and Matrix Operations
Transform the modelMatrix
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
OpenGL Shadow map problems
Addition to the solution:
The above was an important part of the solution, but another step was needed to properly render the shadow map. The second mistake was using the wrong component of the texture to compare against f_shadowCoord.z: it should have been
texture(shadowMap, f_shadowCoord.xy).r
instead of
texture(shadowMap, f_shadowCoord.xy).z
I'm a new OpenGL programmer. I am plotting a height map composed of thousands of triangles in a 3D graph format. They are scaled so that they are plotted between -1 and +1 on the three axes. Now I am able to zoom in the X axis only, and am able to translate in the X axis as well, by applying the appropriate scale and translation matrices. This effectively allows me to zoom right into the data and move it in the x direction as I choose.
The problem is, once I zoom, the data in the x direction now extends outside the -1 to +1 region that forms the boundaries of the graph. I want this data to not be shown.
How is this done in modern OpenGL?
Thank you
Edit:
The matrices are as follows:
plottingProgram["projection_matrix"].SetValue(Matrix4.CreatePerspectiveFieldOfView(0.45f, (float)width / height, 0.1f, 1000f));
plottingProgram["view_matrix"].SetValue(Matrix4.LookAt(new Vector3(0, 0, 10), Vector3.Zero, new Vector3(0, 1, 0)));
and the vertex shader is
public static string VertexShader = @"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
Here is a link to the graph:
http://va2fsq.com/wp-content/uploads/graph.jpg
Thanks
I solved this. After trying many things like glScissor, I happened upon gl_ClipDistance. So my initial try placed uniforms in the shader which were set to:
in vec3 vertexPosition;
uniform vec4 plane0 = vec4(-1.0, 0.0, 0.0, 1.0);
uniform vec4 plane1 = vec4(1.0, 0.0, 0.0, 1.0);
void main(void)
{
gl_ClipDistance[0] = dot(vec4(vertexPosition, 1.0), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition, 1.0), plane1);
Now the problem with this is that the vec4 planes are affected by any scaling or translation in the model matrix. So the solution is to move the clip vectors outside of the vertex shader and apply the opposite scaling to them, as follows:
Vector4 clip1 = new Vector4(-1, 0, 0, 1.0f / scale);
Vector4 clip2 = new Vector4(1, 0, 0, 1.0f / scale);
plottingProgram["model_matrix"].SetValue(Matrix4.CreateScaling(new Vector3(scale,1,1)) * Matrix4.CreateTranslation(new Vector3(0,0,0)) *Matrix4.CreateRotationY(yangle) * Matrix4.CreateRotationX(xangle));
plottingProgram["plane0"].SetValue(clip1);
plottingProgram["plane1"].SetValue(clip2);
and the complete vertex shader is given by
public static string VertexShader = @"
#version 130
in vec3 vertexPosition;
out vec2 textureCoordinate;
uniform mat4 projection_matrix;
uniform mat4 view_matrix;
uniform mat4 model_matrix;
uniform vec4 plane0;
uniform vec4 plane1;
void main(void)
{
textureCoordinate = vertexPosition.xy;
gl_ClipDistance[0] = dot(vec4(vertexPosition,1), plane0);
gl_ClipDistance[1] = dot(vec4(vertexPosition,1), plane1);
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
}
";
You can also translate in the same way. Note that gl_ClipDistance[i] only takes effect if the corresponding GL_CLIP_DISTANCE0 + i is enabled with glEnable on the application side.
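For comparison, a sketch of an alternative I have not tested against this setup: apply the model matrix in the shader first and clip the transformed position against fixed planes at x = ±1, which avoids rescaling the planes on the CPU. This assumes the clip box should stay axis-aligned in world space rather than rotate along with the model:
// Sketch: clip the post-model-transform position against fixed planes,
// so no CPU-side plane rescaling is needed. Only valid if the clip box
// should NOT rotate or scale with the model.
vec4 worldPos = model_matrix * vec4(vertexPosition, 1.0);
gl_ClipDistance[0] = worldPos.x + 1.0; // keep x >= -1
gl_ClipDistance[1] = 1.0 - worldPos.x; // keep x <= +1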
I ran into some problems when implementing diffuse lighting.
The correct result should look like the picture on the left (diffuse light picture).
The right side is my incorrect result.
The problems are below:
1. There is no lighting effect at the beginning. I have to rotate the object to some degree before the lighting effect appears.
2. The object has some triangles mixed into it. (I have already checked my file-reading part for the .obj and .mtl files. It's correct.)
3. I only turn on the diffuse light, but the lighting effect looks like ambient light.
My light source position is (0, 0, 5) and my eye position is (0, 0, 0).
My vertex shader:
attribute vec4 vertexPosition;
attribute vec3 vertexNormal_objectSpace;
varying vec4 vv4color;
uniform mat4 mvp; //model, viewing, projection transformation matrix
uniform mat4 NormalMatrix;
struct LightSourceParameters
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
vec4 position;
vec4 halfVector;
vec3 spotDirection;
float spotExponent;
float spotCutoff; // (range: [0.0,90.0], 180.0)
float spotCosCutoff; // (range: [1.0,0.0],-1.0)
float constantAttenuation;
float linearAttenuation;
float quadraticAttenuation;
};
struct MaterialParameters
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
uniform MaterialParameters Material;
uniform LightSourceParameters LightSource[3]; //because I have to implement three light sources(directional, point, specular light)
void main()
{
vec3 normal, TransformedNormal, lightDirection;
vec4 ambient, diffuse;
float NdotL;
ambient = LightSource[0].ambient * Material.ambient;
TransformedNormal = vec3(vec4(vertexNormal_objectSpace, 0.0) * NormalMatrix);
normal = normalize(TransformedNormal);
lightDirection = normalize(vec3(LightSource[0].position));
NdotL = max(dot(normal, lightDirection), 0.0);
diffuse = LightSource[0].diffuse * Material.diffuse * NdotL;
vv4color = ambient + diffuse; //vv4color pass to fragment shader
gl_Position = mvp * vertexPosition;
}
My display function:
void onDisplay(void)
{
Matrix4 MVP, modelView, NormalMatrix;
int i=0;
// clear canvas
glClearColor(0.5f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableVertexAttribArray(iLocPosition);
glEnableVertexAttribArray(iLocNormal);
geo_rotate = geo_rotate_x * geo_rotate_y * geo_rotate_z;
Geo = geo_rotate * geo_scale * geo_trans;
modelView = View * Geo;
modelView = modelView.transpose(); //row-major -> column-major
modelView = modelView.invert(); //normal transformation(transpose after inverse)
NormalMatrix = modelView.transpose();
MVP = Proj * View * Geo * Norm;
MVP = MVP.transpose();
glUniformMatrix3fv(iLocEyePosition, 1, GL_FALSE, &eye[0]);
glUniformMatrix4fv(iLocNormalMatrix, 1, GL_FALSE, &NormalMatrix[0]); //bind uniform matrix to shader
glUniformMatrix4fv(iLocMVP, 1, GL_FALSE, &MVP[0]);
group = OBJ->groups;
for(i=0; i<OBJ->numgroups-1; i++)
{
//pass model material value to the shader
glUniform4fv(iLocMAmbient, 1, material[i].ambient);
glUniform4fv(iLocMDiffuse, 1, material[i].diffuse);
glUniform4fv(iLocMSpecular, 1, material[i].specular);
glUniform1f(iLocMShininess, material[i].shininess);
glVertexAttribPointer(iLocPosition, 3, GL_FLOAT, GL_FALSE, 0, V[i]); //bind array pointers to shader
glVertexAttribPointer(iLocNormal, 3, GL_FLOAT, GL_FALSE, 0, N[i]);
glDrawArrays(GL_TRIANGLES, 0, group->numtriangles*3); //draw the array we just bound
group = group->next;
}
glutSwapBuffers();
}
Thank you, all of you.
Why does my light move with my camera? In my draw scene function I set my light source position, then I set up my matrix, translate the "camera", then draw a sphere, and after that two cubes. When I move the camera around along with the first cube, the light source moves with it...
function drawScene() {
gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);
//currentProgram = perFragmentProgram;
currentProgram = perVertexProgram;
gl.useProgram(currentProgram);
gl.uniform3f(
currentProgram.ambientColorUniform,
parseFloat(document.getElementById("ambientR").value),
parseFloat(document.getElementById("ambientG").value),
parseFloat(document.getElementById("ambientB").value)
);
gl.uniform3f(
currentProgram.pointLightingLocationUniform,
parseFloat(document.getElementById("lightPositionX").value),
parseFloat(document.getElementById("lightPositionY").value),
parseFloat(document.getElementById("lightPositionZ").value)
);
gl.uniform3f(
currentProgram.pointLightingColorUniform,
parseFloat(document.getElementById("pointR").value),
parseFloat(document.getElementById("pointG").value),
parseFloat(document.getElementById("pointB").value)
);
mat4.identity(mvMatrix);
//Camera
mat4.translate(mvMatrix, [-xPos, -yPos, -10]);
mat4.rotate(mvMatrix, degToRad(180), [0, 1, 0]);
//Sphere
mvPushMatrix();
mat4.rotate(mvMatrix, degToRad(moonAngle), [0, 1, 0]);
mat4.translate(mvMatrix, [5, 0, 0]);
gl.bindBuffer(gl.ARRAY_BUFFER, moonVertexPositionBuffer);
gl.vertexAttribPointer(currentProgram.vertexPositionAttribute, moonVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, moonVertexNormalBuffer);
gl.vertexAttribPointer(currentProgram.vertexNormalAttribute, moonVertexNormalBuffer.itemSize, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, moonVertexIndexBuffer);
setMatrixUniforms();
gl.drawElements(gl.TRIANGLES, moonVertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
mvPopMatrix();
//Cube 1
object.render(xPos, yPos);
//Cube 2
object2.render(0, 5);
}
And my shaders look like this.
<script id="per-vertex-lighting-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec3 vLightWeighting;
void main(void) {
vec4 fragmentColor;
fragmentColor = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragColor = vec4(fragmentColor.rgb * vLightWeighting, fragmentColor.a);
}
</script>
<script id="per-vertex-lighting-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
uniform vec3 uAmbientColor;
uniform vec3 uPointLightingLocation;
uniform vec3 uPointLightingColor;
uniform bool uUseLighting;
varying vec2 vTextureCoord;
varying vec3 vLightWeighting;
void main(void) {
vec4 mvPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
gl_Position = uPMatrix * mvPosition;
vec3 lightDirection = normalize(uPointLightingLocation - mvPosition.xyz);
vec3 transformedNormal = uNMatrix * aVertexNormal;
float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vLightWeighting = uAmbientColor + uPointLightingColor * directionalLightWeighting;
}
</script>
What can I do to stop the light from being moved around, so it's static?
uPointLightingLocation must be in eye space, matching transformedNormal which you're comparing it to with the dot product.
Multiply lightPosition (assuming it's in world space) by the view/camera matrix. It'll be cheaper to do this outside the shader as the value does not change during the render.
The view matrix already exists in your code mid-way through the model-view construction.
The //Camera block is the view and the //Sphere block multiplies in the model transform. To extract just the view, copy mvMatrix between your Camera and Sphere transform blocks (or just transform the light then and there).
//untested, but something along these lines
var worldSpaceLight = vec4.fromValues( // not sure which lib you're using
parseFloat(document.getElementById("lightPositionX").value),
parseFloat(document.getElementById("lightPositionY").value),
parseFloat(document.getElementById("lightPositionZ").value),
1.0
);
...
//Camera
...
var eyeSpaceLight = vec4.create();
vec4.transformMat4(eyeSpaceLight, worldSpaceLight, mvMatrix);
gl.uniform3f(currentProgram.pointLightingLocationUniform, eyeSpaceLight[0], eyeSpaceLight[1], eyeSpaceLight[2]);
//Sphere
...
I've been trying to implement morph target animation in OpenGL with facial blendshapes by following this tutorial. The vertex shader for the animation looks something like this:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
//float smile_l_w = 0.9;
float neutral_w = 1 - 2 * smile_w;
clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_w * vNeutral + smile_w * vSmile_L + smile_w * vSmile_R;
vec3 vNormal = neutral_w * nNeutral + smile_w * nSmile_L + smile_w * nSmile_R;
//vec3 vPosition = vNeutral + (vSmile_L - vNeutral) * smile_w;
//vec3 vNormal = nNeutral + (nSmile_L - nNeutral) * smile_w;
normalize(vPosition);
normalize(vNormal);
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Although the algorithm for morph target animation works, I get missing faces on the final calculated blend shape. The animation looks somewhat like the following gif.
The blendshapes are exported from FaceShift, a markerless facial animation software.
Then again, the algorithm works perfectly on a normal cuboid with its twisted blend shape created in Blender:
Could it be something wrong with the blendshapes I am using for the facial animation? Or am I doing something wrong in the vertex shader?
Update:
So as suggested, I made the changes required to the vertex shader, and made a new animation, and still I am getting the same results.
Here's the updated vertex shader code:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 nNeutral;
in vec3 nSmile_L;
in vec3 vSmile_R;
in vec3 nSmile_R;
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
uniform vec4 lightPosition;
out vec3 lPos;
out vec3 vPos;
out vec3 vNorm;
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
uniform float smile_w;
void main(){
float neutral_w = 1.0 - smile_w;
float neutral_f = clamp(neutral_w, 0.0, 1.0);
vec3 vPosition = neutral_f * vNeutral + smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
vec3 vNormal = neutral_f * nNeutral + smile_w/2 * nSmile_L + smile_w/2 * nSmile_R;
mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
pos.x, pos.y, pos.z, 1.0);
mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
0.0, size.y, 0.0, 0.0,
0.0, 0.0, size.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 model = translate * scale * quaternion;
vec3 n = normalize(cameraPosition - lookAtPosition);
vec3 u = normalize(cross(upVector, n));
vec3 v = cross(n, u);
mat4 view=mat4(u.x,v.x,n.x,0,
u.y,v.y,n.y,0,
u.z,v.z,n.z,0,
dot(-u,cameraPosition),dot(-v,cameraPosition),dot(-n,cameraPosition),1);
mat4 modelView = view * model;
float p11=((2.0*near)/(right-left));
float p31=((right+left)/(right-left));
float p22=((2.0*near)/(top-bottom));
float p32=((top+bottom)/(top-bottom));
float p33=-((far+near)/(far-near));
float p43=-((2.0*far*near)/(far-near));
mat4 projection = mat4(p11, 0, 0, 0,
0, p22, 0, 0,
p31, p32, p33, -1,
0, 0, p43, 0);
//lighting calculation
vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
vec4 lightInEye = view * lightPosition;
vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));
lPos = lightInEye.xyz;
vPos = vertexInEye.xyz;
vNorm = normalInEye.xyz;
gl_Position = projection * modelView * vec4(vPosition, 1.0);
}
Also, my fragment shader looks something like this (I just added new material settings compared to earlier):
#version 400 core
uniform vec4 lightColor;
uniform vec4 diffuseColor;
in vec3 lPos;
in vec3 vPos;
in vec3 vNorm;
void main(){
//copper like material light settings
vec4 ambient = vec4(0.19125, 0.0735, 0.0225, 1.0);
vec4 diff = vec4(0.7038, 0.27048, 0.0828, 1.0);
vec4 spec = vec4(0.256777, 0.137622, 0.086014, 1.0);
vec3 L = normalize (lPos - vPos);
vec3 N = normalize (vNorm);
vec3 Emissive = normalize(-vPos);
vec3 R = reflect(-L, N);
float dotProd = max(dot(R, Emissive), 0.0);
vec4 specColor = lightColor*spec*pow(dotProd,0.1 * 128);
vec4 diffuse = lightColor * diff * (dot(N, L));
gl_FragColor = ambient + diffuse + specColor;
}
And finally the animation I got from updating the code:
As you can see, I am still getting some missing triangles/faces in the morph target animation. Any more suggestions/comments regarding the issue would be really helpful. Thanks again in advance. :)
Update:
So as suggested, I flipped the normals if dot(vSmile_R, nSmile_R) < 0 and I got the following image result.
Also, instead of getting the normals from the obj files, I tried calculating my own (face and vertex normals) and still I got the same result.
Not an answer attempt; I just need more formatting than is available in comments.
I cannot tell which data was actually exported from FaceShift and how that was put into the custom ADTs of the app; my crystal ball is currently busy predicting the FIFA World Cup results.
But generally, a linear morph is a very simple thing:
There is one vector "I" of data for the initial mesh and a vector "F" of equal size for the position data of the final mesh; their count and ordering must match for the tessellation to remain intact.
Given j ∈ [0, count), corresponding vectors initial_j = I[j] and final_j = F[j], and a morph factor λ ∈ [0, 1], the j-th (zero-based) current vector current_j(λ) is given by
current_j(λ) = initial_j + λ · (final_j − initial_j) = (1 − λ) · initial_j + λ · final_j.
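In GLSL this is exactly what the built-in mix() computes, so a plain two-target morph reduces to one line per attribute; a sketch using the attribute names from the shader above:
// mix(a, b, w) computes (1 - w) * a + w * b, i.e. the linear morph.
vec3 vPosition = mix(vNeutral, vSmile_L, smile_w);
vec3 vNormal = normalize(mix(nNeutral, nSmile_L, smile_w));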
From this perspective, this
vec3 vPosition = neutral_w * vNeutral +
smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
looks dubious at best.
As I said, my crystal ball is currently defunct, but the naming would imply that, given the OpenGL standard reference frame,
vSmile_L = vSmile_R * (-1,1,1),
this "*" denoting component-wise multiplication, and that in turn would imply cancelling out the morph x-component by above addition.
But apparently, the face does not degenerate into a plane (a line from the projected pov), so the meaning of those attributes is unclear.
That's the reason why I want to look at the effective data, as stated in the comments.
Another thing, not related to the effect in question, but to the shading algorithm.
As stated in the answer to this
Can OpenGL shader compilers optimize expressions on uniforms?,
the shader optimizer could well optimize pure uniform expressions like the M/V/P calculations done with
uniform float left;
uniform float right;
uniform float top;
uniform float bottom;
uniform float near;
uniform float far;
uniform vec3 cameraPosition;
uniform vec3 lookAtPosition;
uniform vec3 upVector;
/* */
uniform vec3 pos;
uniform vec3 size;
uniform mat4 quaternion;
but I find it highly optimistic to rely on such assumed optimizations.
If it is not optimized accordingly, this means doing the work once per vertex per frame: for a human face with a LOD of 1000 vertices at 60 Hz, that is 60,000 times per second on the GPU, instead of once and for all on the CPU.
No modern CPU would give up its soul if these calculations were put on its shoulders once, so passing the common trinity of M/V/P matrices as uniforms seems appropriate, instead of constructing those matrices in the shader.
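As a sketch of what that could look like (the uniform names are mine, and the morph part is condensed), the vertex shader shrinks to something like:
#version 400 core
in vec3 vNeutral;
in vec3 vSmile_L;
in vec3 vSmile_R;
// Computed once per frame on the CPU instead of per vertex on the GPU.
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform float smile_w;
void main(){
    float neutral_w = clamp(1.0 - smile_w, 0.0, 1.0);
    vec3 vPosition = neutral_w * vNeutral + 0.5 * smile_w * (vSmile_L + vSmile_R);
    gl_Position = projection * view * model * vec4(vPosition, 1.0);
}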
For reusing the code from the shaders: glm provides a very GLSL-ish way to do GL-related maths in C++.
I had a very similar problem some time ago. As you eventually noticed, your problem most probably lies in the mesh itself. In my case, it was inconsistent mesh triangulation. Using the Triangulate Modifier in Blender solved the problem for me. Perhaps you should give it a try too.