How do I transform a vertex for skeletal animation? - c++

I am developing a new video game and I've been blocked for about 5 weeks on skeletal animation. I believe I've narrowed down the problem, but can't figure out what I'm actually doing wrong.
I have a simple 12-vertex rectangular object with four bones inside. This image shows what the object looks like in its bind pose, and what it should look like with the top bone rotated ~90 degrees about the Y-axis. This is the simple example I'm using to test bone weights in my application: I programmatically turn the top bone ~90 degrees and have the shader render it.
Unfortunately, my application does not produce the same result. The bind pose displays properly, but when the top bone transform is applied, the transform is exaggerated and the top part of the rectangle simply stretches in the direction I rotate the top bone.
I have verified the following:
Bones are sent to the shader uniform as relative transforms. This means that when I rotate the top bone by ~90 degrees, bones 1-3 are all identity, and bone 4 is a matrix that only rotates ~90 degrees about the Y-axis.
Bones are weighted properly for any given vertex (or at least, they are weighted identically in my application to what Blender has reported them as).
So I've reduced my problem to this single sanity check. Referring to the first screenshot above, I've chosen one vertex to transform using my bone method. One little vertex: -0.5, -0.5, 4.0. Assuming I apply everything properly, the bones should transform this vertex to -0.95638, -0.5, 2.63086. To make debugging easier, I've taken my vertex shader...
#version 330 core
layout (location = 0) in vec3 position; // The position variable has attribute position 0
layout (location = 1) in vec3 normal; // This is currently unused
layout (location = 2) in vec2 texture;
layout (location = 3) in ivec4 boneIDs;
layout (location = 4) in vec4 boneWeights;
out vec2 fragTexture;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 bones[ 16 ];
void main()
{
mat4 boneTransform =
( bones[ boneIDs[ 0 ] ] * boneWeights[ 0 ] ) +
( bones[ boneIDs[ 1 ] ] * boneWeights[ 1 ] ) +
( bones[ boneIDs[ 2 ] ] * boneWeights[ 2 ] ) +
( bones[ boneIDs[ 3 ] ] * boneWeights[ 3 ] );
mat4 mvp = projection * view * model;
gl_Position = mvp * boneTransform * vec4( position, 1.0f );
fragTexture = texture;
}
...and put it into this simple unit test-style function below, made just to transform my test vertex.
glm::mat4 id( 1.0f ); // ID 0
glm::mat4 bone( 1.0f ); // ID 1
glm::mat4 bone002( 1.0f ); // ID 2
glm::mat4 bone003( 1.0f ); // ID 3
// Keyframe is set to rotate bone003 -89.113 degrees along Y
bone003 *= glm::toMat4( glm::angleAxis( (float)glm::radians( -89.113 ), glm::vec3( 0.0f, 1.0f, 0.0f ) ) );
glm::mat4 xform =
( bone002 * 0.087f ) +
( bone003 * 0.911f ) +
( id * 0 ) +
( id * 0 );
glm::vec4 point = xform * glm::vec4( glm::vec3( -0.5f, -0.5f, 4.0f ), 1.0f );
This code simulates the state of my vertex shader, where the four mat4s above are bones[0] through bones[3]. bone003 is what is sent to my shader after transforming Bone.003 and removing its inverse bind. Despite being exactly in line with my current understanding of skeletal animation, and matching all relevant weights/values from Blender, the vertex (-0.5, -0.5, 4.0) is transformed to the nonsense value of (-3.694115, -0.499000, -0.051035). The math is right, the values match up, but the answer is all wrong.
So, here is where I come to my actual question: What am I doing wrong in transforming my mesh vertices by influence of bone transforms? Where is my understanding of skeletal animation incorrect here?

This seems wrong to me:
mat4 boneTransform =
( bones[ boneIDs[ 0 ] ] * boneWeights[ 0 ] ) +
( bones[ boneIDs[ 1 ] ] * boneWeights[ 1 ] ) +
( bones[ boneIDs[ 2 ] ] * boneWeights[ 2 ] ) +
( bones[ boneIDs[ 3 ] ] * boneWeights[ 3 ] );
You should multiply the vertex (in bone space) by every bone matrix, then add the resulting vectors together, weighting each one, like so:
vec4 temp = vec4(0.0f, 0.0f, 0.0f, 0.0f);
vec4 v = vec4(position, 1.0f);
temp += (bones[boneIDs[0]] * v) * boneWeights[0];
temp += (bones[boneIDs[1]] * v) * boneWeights[1];
temp += (bones[boneIDs[2]] * v) * boneWeights[2];
temp += (bones[boneIDs[3]] * v) * boneWeights[3];
// temp is now the vector in local space that you
// transform with MVP to clip space, or whatever
Let me know if this works!
EDIT: I guess not then. Alright:
the vertex (-0.5, -0.5, 4.0) is transformed to the nonsense value of (-3.694115, -0.499000, -0.051035)
Is that really nonsense? Rotating clockwise ~90 degrees around the Y-axis gives about that value if I just eye-ball it. Your test is "correct". At this point I'm starting to think that there's a problem with the bone hierarchy, or a problem with the interpolation of the keyframes.
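For what it's worth, here's a quick standalone GLM check (my own throwaway test, not code from your project) that reproduces both numbers:
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp> // glm::angleAxis, glm::mat4_cast
int main()
{
    glm::mat4 id( 1.0f );
    // Same keyframe as in the question: rotate ~-89.113 degrees about Y.
    // (glm::mat4_cast is what glm::toMat4 forwards to.)
    glm::mat4 bone003 = glm::mat4_cast(
        glm::angleAxis( glm::radians( -89.113f ), glm::vec3( 0.0f, 1.0f, 0.0f ) ) );
    glm::vec4 v( -0.5f, -0.5f, 4.0f, 1.0f );
    // Weighted matrix blend, exactly as the unit test in the question does it.
    glm::mat4 xform = ( bone003 * 0.911f ) + ( id * 0.087f );
    glm::vec4 blended = xform * v;
    // A plain, unweighted rotation of the same vertex, for comparison.
    glm::vec4 rotated = bone003 * v;
    std::printf( "blended: %f %f %f\n", blended.x, blended.y, blended.z ); // ~(-3.694, -0.499, -0.051)
    std::printf( "rotated: %f %f %f\n", rotated.x, rotated.y, rotated.z ); // ~(-4.007, -0.500, -0.438)
}
Both results are consistent with the far end of the mesh swinging ~90 degrees around Y, so I'd keep looking at the bone hierarchy and the inverse-bind handling rather than at this blend.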

Related

Vertex shader doesn't work well with cloned objects

I'm using OpenGL to create a sphere (approximation):
I'm "inflating" the triangle to create an eight of a sphere:
I'm then drawing that octant four times, and each time rotating the model transoformation by 90°, to achieve a hemisphere:
Code related to drawing calls:
for (int i = 0; i < 4; i++) {
    model_trans = glm::rotate(model_trans, glm::radians(i * 90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    glUniformMatrix4fv(uniform_model, 1, GL_FALSE, glm::value_ptr(model_trans));
    glDrawElementsBaseVertex(GL_TRIANGLES, sizeof(sphere_indices) / sizeof(sphere_indices[0]),
                             GL_UNSIGNED_INT, 0, (sizeof(grid_vertices)) / (ATTR_COUNT * sizeof(GLfloat)));
}
My goal is to color each vertex based on the angle of its projection in the XY-plane. Since I'm normalizing values, the resulting projection should behave like a trigonometric circle, the x value being the cosine of the angle with the positive end of the x-axis. And because cosine is a continuous function, my sphere should have continuous color, too. However, it is not so:
Is this issue caused by cloning the object? That's the only thing I can think of, but it shouldn't matter, since the vertex shader only receives individual vertices. Speaking of which, here is my vertex shader:
#version 150 core
in vec3 position;
/* flat : the color will be sourced from the provoking vertex. */
flat out vec3 Color;
/* transformation matrices */
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
gl_Position = projection * view * model * vec4(position, 1.0);
vec3 vector_proj = vec3(position.x, position.y, 0.0);
normalize(vector_proj);
/* Addition and division only for mapping range [-1, +1] to [0, 1] */
float cosine = (vector_proj.x + 1) / 2;
Color = vec3(cosine);
}
You want to calculate the color associated with the vertex from the world coordinates of the vertex position.
position is a model-space coordinate, not a world-space coordinate. You have to apply the model matrix to position to transform it from model space to world space before calculating vector_proj:
vec4 world_pos = model * vec4(position, 1.0);
vec3 vector_proj = vec3( world_pos.xy, 0.0 );
The parameter to normalize is not an in-out parameter. It is an input parameter; the normalized result is returned from the function:
vector_proj = normalize(vector_proj);
You can simplify the code as follows:
void main()
{
vec4 world_pos = model * vec4(position, 1.0);
gl_Position = projection * view * world_pos;
vec2 vector_proj = normalize(world_pos.xy);
/* Addition and division only for mapping range [-1, +1] to [0, 1] */
float cosine = (vector_proj.x + 1) / 2;
Color = vec3(cosine);
}

How to draw TRIANGLE_FAN with geometry shader created coordinates? (GLSL 3.3)

I want to draw multiple fans with a GS. Each fan should billboard toward the camera at all times, which makes it necessary to multiply each vertex by the MVP matrix.
Since each fan is movable by the user, I came up with the idea to feed the GS with the position.
The following geometry shader works as expected with points as input and output:
uniform mat4 VP;
uniform mat4 sharedModelMatrix;
const int STATE_VERTEX_NUMBER = 38;
layout (shared) uniform stateShapeData {
vec2 data[STATE_VERTEX_NUMBER];
};
layout (triangles) in;
layout (triangle_strip, max_vertices = 80) out;
void main(void)
{
    int i;
    mat4 modelMatrix = sharedModelMatrix;
    modelMatrix[3] = gl_in[0].gl_Position;
    mat4 MVP = VP * modelMatrix;
    gl_Position = MVP * vec4( 0, 0, 0, 1 );
    EmitVertex(); // epicenter
    for (i = 37; i >= 0; i--) {
        gl_Position = MVP * vec4( data[i], 0, 1 );
        EmitVertex();
    }
    gl_Position = MVP * vec4( data[0], 0, 1 );
    EmitVertex();
}
I tried to run this with glDrawElements, glDrawArrays and glMultiDrawArrays. None of these commands draws the full fan. Each draws the first triangle filled and the remaining vertices as points.
So, the bottom question is: Is it possible to draw a fan with GS created vertices and how?
Outputting fans in a Geometry Shader is very unnatural as you have discovered.
You are currently outputting the vertices in fan-order, which is a construct that is completely foreign to GPUs after primitive assembly. Fans are useful as assembler input, but as far as output is concerned the rasterizer only understands the concept of strips.
To write this shader properly, you need to decompose this fan into a series of individual triangles. That means the loop you wrote is actually going to output the epicenter on each iteration.
void main(void)
{
    int i;
    mat4 modelMatrix = sharedModelMatrix;
    modelMatrix[3] = gl_in[0].gl_Position;
    mat4 MVP = VP * modelMatrix;
    for (i = 37; i >= 1; i--) { // stop at 1 so data[i-1] stays in bounds
        gl_Position = MVP * vec4( 0, 0, 0, 1 );
        EmitVertex(); // epicenter
        gl_Position = MVP * vec4( data[i], 0, 1 );
        EmitVertex();
        gl_Position = MVP * vec4( data[i-1], 0, 1 );
        EmitVertex();
        // Fan and strip DNA just won't splice
        EndPrimitive ();
    }
}
You cannot exploit strip-ordering when drawing this way; you wind up having to end the output primitive (strip) multiple times. About the only possible benefit you get to drawing in fan-order is cache locality within the loop. If you understand that geometry shaders are expected to output triangle strips, why not order your input vertices that way to begin with?
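If you do go that route, here's a rough CPU-side sketch (a hypothetical helper, not from your code) of what strip order means for the outline stored in stateShapeData. It assumes the outline is convex, in which case the epicenter vertex can be dropped entirely and the shape can be emitted as one strip:
#include <cstddef>
#include <vector>
#include <glm/glm.hpp>
// Reorder a convex outline given in fan order, rim[0..n-1], into triangle-strip
// order by alternately taking vertices from the front and the back of the list.
std::vector<glm::vec2> fanToStripOrder( const std::vector<glm::vec2>& rim )
{
    std::vector<glm::vec2> strip;
    strip.reserve( rim.size() );
    std::size_t lo = 0, hi = rim.size(); // [lo, hi) = vertices not yet emitted
    bool fromFront = true;
    while ( lo < hi )
    {
        strip.push_back( fromFront ? rim[lo++] : rim[--hi] );
        fromFront = !fromFront;
    }
    return strip;
}
With data[] uploaded in that order, the geometry shader loop becomes a single strip: one EmitVertex() per element and one EndPrimitive() at the end. For a concave outline you still need the per-triangle decomposition shown above.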

OpenGL Projective Texture Mapping via Shaders

I am trying to implement a simple projective texture mapping approach by using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.
I am actually planning on using two shaders: one which does a normal scene draw, and another for projective texture mapping. I have a function for drawing a scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
I also don't think that my projective texture mapping shaders are correct, since the second time I render a cube it shows up black. Further, I tried to debug by using colors, and only the t component in the fragment shader seems to be non-zero (so the cube appears green). I am overriding the texColor in the fragment shader below just for debugging purposes.
VertexShader
#version 330
uniform mat4 TexGenMat;
uniform mat4 InvViewMat;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;
layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;
void main()
{
vNormal = (N * vec4(inNormal, 0.0)).xyz;
vec4 posEye = MV * vec4(inPosition, 1.0);
vec4 posWorld = InvViewMat * posEye;
projCoords = TexGenMat * posWorld;
// only needed for specular component
// currently not used
eyeVec = -posEye.xyz;
gl_Position = P * MV * vec4(inPosition, 1.0);
}
FragmentShader
#version 330
uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;
in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;
out vec4 outputColor;
struct DirectionalLight
{
vec3 vColor;
vec3 vDirection;
float fAmbientIntensity;
};
uniform DirectionalLight sunLight;
void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
Creation of TexGen Matrix
biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
0, 0.5f, 0, 0.5f,
0, 0, 0.5f, 0.5f,
0, 0, 0, 1);
// 4:3 perspective with 45 fov
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin, // projector origin
projectorTarget, // project on object at origin
glm::vec3(0.0f, 1.0f, 0.0f) // Y axis is up
);
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
Render Cube Again
It is also unclear to me what the modelview of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal camera view? Currently the cube is rendered black (or green if debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?
mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);
Switch between main scene camera and slide projector camera
if (useMainCam)
{
    mCurrent = glm::mat4(1.0f);
    mModelView = mModelView*mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView = projectorV;
    mProjection = projectorP;
}
I have solved the problem. One issue I had is that I confused the matrices in the two camera systems (world and projective texture camera). Now when I set the uniforms for the projective texture mapping part I use the correct matrices for the MVP values - the same ones I use for the world scene.
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));
Further, the invViewMatrix is just the inverse of the view matrix, not of the modelview (this didn't change the behaviour in my case, since the model was identity, but it is wrong). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each object, I must make sure that the current shader program is the one for projective textures, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.
For the fragment shader I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client side GL_CLAMP_TO_EDGE) and I am also using the default object texture and UV coordinates in case the projector does not cover the whole object (I also removed lighting from the projective texture since it is not needed in my case):
void main (void)
{
    vec2 finalCoords = projCoords.st / projCoords.q;
    vec4 vTexColor = texture(gSampler, texCoord);
    vec4 vProjTexColor = texture(projMap, finalCoords);
    //vec4 vProjTexColor = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // CLAMP PROJECTIVE TEXTURE (for some reason gl_clamp did not work...)
        if (projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
If you are stuck and for some reason cannot get the shaders to work, you can check out an example in the "OpenGL 4.0 Shading Language Cookbook" (textures chapter). I actually missed this until I got it working by myself.
In addition to all of the above, a great help in debugging whether the algorithm was working correctly was to draw the frustum (as a wireframe) for the projector camera. I used a shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:
#version 330
// input vertex data
layout(location = 0) in vec3 vp;
uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
/*The transformed clip space position c of a
world space vertex v is obtained by transforming
v with the product of the projection matrix P
and the modelview matrix MV
c = P MV v
So, if we could solve for v, then we could
generate vertex positions by plugging in clip
space positions. For your frustum, one line
would be between the clip space positions
(-1,-1,near) and (-1,-1,far),
the lower left edge of the frustum, for example.
NB: If you would like to mix normalized device
coords (x,y) and eye space coords (near,far),
you need an additional step here. Modify your
clip position as follows
c' = (c.x * c.z, c.y * c.z, c.z, c.z)
otherwise you would need to supply both the z
and w for c, which might be inconvenient. Simply
use c' instead of c below.
To solve for v, multiply both sides of the equation above with (P MV)^-1.
This gives
(P MV)^-1 c = v
This is equivalent to
MV^-1 P^-1 c = v
P^-1 is given by
|(r-l)/(2n)      0           0          (r+l)/(2n) |
|     0      (t-b)/(2n)      0          (t+b)/(2n) |
|     0          0           0              -1     |
|     0          0      -(f-n)/(2fn)    (f+n)/(2fn)|
where l, r, t, b, n, and f are the parameters in the glFrustum() call.
If you don't want to fool with inverting the
model matrix, the info you already have can be
used instead: the forward, right, and up
vectors, in addition to the eye position.
First, go from clip space to eye space
e = P^-1 c
Next go from eye space to world space
v = eyePos - forward*e.z + right*e.x + up*e.y
assuming x = right, y = up, and -z = forward.
*/
vec4 fVp = invMV * invP * vec4(vp, 1.0);
gl_Position = P * MV * fVp;
}
The uniforms are used like this (make sure you use the right matrices):
// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
To get the input vertices needed for the frustum's vertex shader, you can compute the coordinates as follows (then just add them to your vertex array):
glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right
glm::vec3 frustum_coords[36] = {
// near
ntl, nbl, ntr, // 1 triangle
ntr, nbl, nbr,
// right
nbr, ftr, ntr,
ftr, nbr, fbr,
// left
nbl, ftl, ntl,
ftl, nbl, fbl,
// far
ftl, fbl, fbr,
fbr, ftr, ftl,
//bottom
nbl, fbr, fbl,
fbr, nbl, nbr,
//top
ntl, ftr, ftl,
ftr, ntl, ntr
};
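To actually get those corners on screen, one possible setup looks like the following (the buffer/VAO names are mine, and attribute location 0 is assumed to match the vp input of the frustum shader above):
GLuint frustumVao = 0, frustumVbo = 0;
glGenVertexArrays( 1, &frustumVao );
glGenBuffers( 1, &frustumVbo );
glBindVertexArray( frustumVao );
glBindBuffer( GL_ARRAY_BUFFER, frustumVbo );
glBufferData( GL_ARRAY_BUFFER, sizeof( frustum_coords ), frustum_coords, GL_STATIC_DRAW );
glEnableVertexAttribArray( 0 ); // layout(location = 0) in vec3 vp
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof( glm::vec3 ), (void*)0 );

// At draw time, with the frustum program bound and its uniforms set as shown above:
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE ); // wireframe
glBindVertexArray( frustumVao );
glDrawArrays( GL_TRIANGLES, 0, 36 );
glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );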
After all is said and done, it's nice to see how it looks:
As you can see I applied two projective textures, one of a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.

Creating a rectangular light source in OpenGL?

I am trying to create a rectangular, sharp-edged light source in OpenGL for one application. My idea is to create a spot light and somehow mask the shape of the shade into a rectangle; the mask, of course, has to be invisible to the camera. When I tried to implement this idea, it turned out that OpenGL simply skips rendering objects outside the camera's view, although a light source outside the view is still valid. This has prevented me from creating the effect I wanted, and I am wondering if any of you have come across similar problems before.
To make my question more specific, consider the following case of my question:
spot light at 0,0,5
target object at 0,0,0
mask object (a simple quad parallel to x-axis) at 0,0,3.
When the camera is at 0,0,4, light passes through the mask object and leaves a rectangular shape on the target object (which is what I wanted), but I can also see the mask object! (I need the mask object to be invisible.)
When I move the camera closer to the target object, say to 0,0,2, the mask object is behind the camera and therefore invisible. However, since it's invisible, OpenGL stops rendering it, so the mask object no longer has any effect on the target object, and the light's shape is still round!
My guess would be to start from a spot light, but separate the angle calculation:
* Project the L vector on the YZ plane to calculate the angle on the X axis
* Project the L vector on the XZ plane to calculate the angle on the Y axis
A very naive implementation of this could be (GLSL):
varying vec3 v_V; // World-space position
varying vec3 v_N; // World-space normal
uniform float time; // global time in seconds since shaderprogram link
uniform vec2 uSpotSize; // Spot size, on X and Y axes
vec3 lp = vec3(0.0, 0.0, 7.0 + cos(time) * 5.0); // Light world-space position
vec3 lz = vec3(0.0, 0.0, -1.0); // Light direction (Z vector)
// Light radius (for attenuation calculation)
float lr = 3.0;
void main()
{
    // Calculate L, the vector from model surface to light
    vec3 L = lp - v_V;
    // Project L on the YZ / XZ plane
    vec3 LX = normalize(vec3(L.x, 0.0, L.z));
    vec3 LY = normalize(vec3(0.0, L.y, L.z));
    // Calculate the angle on X and Y axis using projected vectors just above
    float ax = dot(LX, -lz);
    float ay = dot(LY, -lz);
    // Light attenuation
    float d = distance(lp, v_V);
    float attenuation = 1.0 / (1.0 + (2.0/lr)*d + (1.0/(lr*lr))*d*d);
    float shaded = max(0.0, dot(v_N, L)) * attenuation;
    if (ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
        gl_FragColor = vec4(shaded); // Inside the light influence zone, light it up !
    else
        gl_FragColor = vec4(0.1); // Outside the light influence zone.
}
Again, this is very naive. For instance, the X/Y projection is done in world-space. If you want to be able to rotate the light rectangle, you might have to introduce a vector pointing to the right of the light.
Thus, you'll be able to get the fragment coordinate in the light's coordinate frame, and with this, you can decide whether to shade the fragment or not.
One solution might be adapting the calculations used for projective texture lookups to simulate a rectangular light source. You did not specify which OpenGL version you're using, but projective texture lookups can even be achieved with the fixed-function pipeline, although they're arguably easier to do in a shader.
Of course, this would not simulate a rectangular area light source, just a point light source that is constrained to a rectangular region.
Using this approach, you'd have to specify view & projection matrices for the light source, where the view matrix is essentially generated by a 'look at' with the light position & its direction; the projection matrix encodes a perspective projection with your desired horizontal & vertical 'field of view'.
If you just want a rectangular area, you don't even need a texture; A simple vertex/ fragment shader pair could look like this:
(The vertex shader basically transforms the position to the light's clip space; the fragment shader performs the clipping and computes Lambert shading if the fragment is inside the light frustum.)
#version 330 core
layout ( location = 0 ) in vec3 vertexPosition;
layout ( location = 1 ) in vec3 vertexNormal;
layout ( location = 3 ) in vec3 vertexDiffuse;
uniform mat4 modelTf;
uniform mat3 normalTf;
uniform mat4 viewTf; // view matrix for render camera
uniform mat4 projectiveTf; // projection matrix for render camera
uniform mat4 viewTf_lightCam; // view matrix of light source
uniform mat4 projectiveTf_lightCam; // projective matrix of light source
uniform vec4 lightPosition_worldSpace;
out vec3 diffuseColor;
out vec3 normal_worldSpace;
out vec3 toLight_worldSpace;
out vec4 position_lightClipSpace;
void main()
{
diffuseColor = vertexDiffuse;
vec4 vertexPosition_worldSpace = modelTf * vec4( vertexPosition, 1.0 );
normal_worldSpace = normalTf * vertexNormal;
toLight_worldSpace = normalize( lightPosition_worldSpace - vertexPosition_worldSpace ).xyz;
position_lightClipSpace = projectiveTf_lightCam * viewTf_lightCam * vertexPosition_worldSpace;
gl_Position = projectiveTf * viewTf * vertexPosition_worldSpace;
}
#version 330 core
layout ( location=0 ) out vec4 fragColor;
in vec3 diffuseColor;
in vec3 normal_worldSpace;
in vec3 toLight_worldSpace;
in vec4 position_lightClipSpace;
uniform vec3 ambientLight;
void main()
{
// clipping against the light frustum
bool isInsideX = ( position_lightClipSpace.x <= position_lightClipSpace.w && position_lightClipSpace.x >= -position_lightClipSpace.w );
bool isInsideY = ( position_lightClipSpace.y <= position_lightClipSpace.w && position_lightClipSpace.y >= -position_lightClipSpace.w );
bool isInsideZ = ( position_lightClipSpace.z <= position_lightClipSpace.w && position_lightClipSpace.z >= -position_lightClipSpace.w );
bool isInside = isInsideX && isInsideY && isInsideZ;
vec3 N = normalize( normal_worldSpace );
vec3 L = normalize( toLight_worldSpace );
vec3 lightColor = isInside ? max( dot( N, L ), 0.0 ) * vec3( 0.99, 0.66, 0.33 ) : vec3( 0.0 );
fragColor = vec4( clamp( ( ambientLight + lightColor ) * diffuseColor, vec3( 0.0 ), vec3( 1.0 ) ), 1.0 );
}
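For completeness, here's roughly how the light-camera uniforms used above could be built and uploaded with GLM; the function, the program handle, and the light parameters are placeholders, not part of the original code:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt, glm::perspective
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr
// Assumes an OpenGL loader header (GLEW, glad, ...) is already included.
void uploadLightCamera( GLuint program, const glm::vec3& lightPos, const glm::vec3& lightTarget,
                        float verticalFovDeg, float aspect )
{
    // "Look at" from the light towards its target gives the light's view matrix.
    glm::mat4 viewTf_lightCam = glm::lookAt( lightPos, lightTarget, glm::vec3( 0.0f, 1.0f, 0.0f ) );
    // The field of view and aspect ratio control the size of the lit rectangle.
    glm::mat4 projectiveTf_lightCam = glm::perspective( glm::radians( verticalFovDeg ), aspect, 0.1f, 100.0f );

    glUniformMatrix4fv( glGetUniformLocation( program, "viewTf_lightCam" ), 1, GL_FALSE,
                        glm::value_ptr( viewTf_lightCam ) );
    glUniformMatrix4fv( glGetUniformLocation( program, "projectiveTf_lightCam" ), 1, GL_FALSE,
                        glm::value_ptr( projectiveTf_lightCam ) );
    glUniform4f( glGetUniformLocation( program, "lightPosition_worldSpace" ),
                 lightPos.x, lightPos.y, lightPos.z, 1.0f );
}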
There are a lot of good papers on this; Brian Karis wrote about it in 2013 (with regard to UE4) here:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
And more recently Michal Drobot wrote an article about area lights in GPU Pro 5.
If you are using a metalness workflow you can also crank up the roughness as an approximation to area lighting, a technique introduced by Tri-Ace:
http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

optimizing cubes rendering with geometry shader

In my first OpenGL 'voxel' project I'm using a geometry shader to create cubes from gl_points, and it works pretty well, but I'm sure it can be done better. In the alpha component of the color I'm passing info about which faces should be rendered (to skip faces adjacent to other cubes); vertices for visible faces are then created from a 'reference' cube definition. Every point is multiplied by 3 matrices. Instinct tells me that maybe the whole face could be multiplied by them instead of every point, but my math skills are poor, so please advise.
#version 330
layout (points) in;
layout (triangle_strip,max_vertices=24) out;
smooth out vec4 oColor;
in VertexData
{
vec4 colour;
//vec3 normal;
} vertexData[];
uniform mat4 cameraToClipMatrix;
uniform mat4 worldToCameraMatrix;
uniform mat4 modelToWorldMatrix;
const vec4 cubeVerts[8] = vec4[8](
vec4(-0.5 , -0.5, -0.5,1), //LB 0
vec4(-0.5, 0.5, -0.5,1), //L T 1
vec4(0.5, -0.5, -0.5,1), //R B 2
vec4( 0.5, 0.5, -0.5,1), //R T 3
//back face
vec4(-0.5, -0.5, 0.5,1), // LB 4
vec4(-0.5, 0.5, 0.5,1), // LT 5
vec4(0.5, -0.5, 0.5,1), // RB 6
vec4(0.5, 0.5, 0.5,1) // RT 7
);
const int cubeIndices[24] = int [24]
(
0,1,2,3, //front
7,6,3,2, //right
7,5,6,4, //back or whatever
4,0,6,2, //btm
1,0,5,4, //left
3,1,7,5
);
void main()
{
    vec4 temp;
    int a = int(vertexData[0].colour[3]);
    //btm face
    if (a>31)
    {
        for (int i=12;i<16; i++)
        {
            int v = cubeIndices[i];
            temp = modelToWorldMatrix * (gl_in[0].gl_Position + cubeVerts[v]);
            temp = worldToCameraMatrix * temp;
            gl_Position = cameraToClipMatrix * temp;
            //oColor = vertexData[0].colour;
            //oColor[3]=1;
            oColor=vec4(1,1,1,1);
            EmitVertex();
        }
        a = a - 32;
        EndPrimitive();
    }
    //top face
    if (a >15 )
    ...
}
------- updated code:------
//one matrix to transform them all
mat4 mvp = cameraToClipMatrix * worldToCameraMatrix * modelToWorldMatrix;
//transform and store cube verts for future use
for (int i=0;i<8; i++)
{
    transVerts[i]=mvp * (gl_in[0].gl_Position + cubeVerts[i]);
}
//btm face
if (a>31)
{
    for (int i=12;i<16; i++)
    {
        int v = cubeIndices[i];
        gl_Position = transVerts[v];
        oColor = vertexData[0].colour*0.55;
        //oColor = vertexData[0].colour;
        EmitVertex();
    }
    a = a - 32;
    EndPrimitive();
}
In OpenGL, you don't work with faces (or lines, for that matter), so you can't apply transformations to a face. You need to do it to the vertices that compose that face, as you're doing.
One possible optimization is that you don't need to separate out the matrix transformations, as you do. If you multiply them once in your application code and pass the result as a single uniform into your shader, that will save some time.
Another optimization would be to transform the eight cube vertices in a loop at the beginning, store them to a local array, and then reference their transformed positions in your if logic. Right now, if you render every face of the cube, you're transforming 24 vertices, each one three times.
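To illustrate the first point: assuming the application keeps the three matrices under the same names as the shader uniforms, they can be collapsed into a single matrix per draw on the CPU (the uniform name and program handle here are illustrative):
// Build the combined matrix once per object, instead of once per emitted vertex.
glm::mat4 mvp = cameraToClipMatrix * worldToCameraMatrix * modelToWorldMatrix;
glUniformMatrix4fv( glGetUniformLocation( program, "mvp" ), 1, GL_FALSE, glm::value_ptr( mvp ) );
The geometry shader then declares a single uniform mat4 mvp, and each emitted vertex costs one matrix multiply, which is what the updated code above already does inside the shader, just moved out to the application.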