I am currently working on animating a model using joints and vertex weights. My question isn't program specific but GLSL related. In my shader I have the following information loaded:
uniform mat4 projection,camera,model;
in vec3 vertex;
struct Joint
{
mat4 invBind;
mat4 global;
};
uniform Joint transforms[16];
in ivec4 jointIndices;
in vec4 weights;
Here, Joint is a structure used to transform a vertex. It holds two pieces of information:
invBind: the inverse of a joint's bind transform in global/model space, used to transform a vertex from model space into that joint's local space
global: used to transform a vertex from a joint's local space to its final global/model (or world) space, where
global = parent_global_joint_transform * joint_local_bind_transform
I have loaded my model from a Blender .dae file, so joint_local_bind_transform is transposed before use.
in ivec4 jointIndices: the indices of the joints that can affect this vertex
in vec4 weights: the weights associated with the corresponding joints affecting this vertex
I have an array of 16 of these joints because that is how many joints my model has.
So, with all this information, how do we calculate gl_Position? Is there more information that needs to be loaded as uniforms or as vertex attributes? Sorry for the noobish question, but many tutorials don't give a straightforward answer.
A simple vertex shader example would be nice. Thank you.
Without any guarantee of correctness, something like the following should work for simple linear blend skinning. If it does not work directly, you should be able to understand what is going on and adapt it accordingly:
//make a 4D vector from the vertex' position
vec4 position = vec4(vertex, 1.0);
//accumulate the final position
vec4 p = vec4(0, 0, 0, 0);
for(int i = 0; i < 4; ++i)
{
Joint joint = transforms[jointIndices[i]];
p += weights[i] * joint.global * joint.invBind * position;
}
gl_Position = p;
In the loop, you iterate over all influencing bones. For every bone, you calculate the vertex's position in the bone's coordinate system and transform it back to the global coordinate system according to the bone's orientation in the current frame. Finally, you blend the results of all influencing bones.
To reduce the amount of data you are transferring to the GPU, you can also precalculate joint.global * joint.invBind.
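If you do that precalculation on the CPU, the shader only needs one matrix per joint. A rough, untested sketch (jointTransforms is a made-up name for an array holding transforms[i].global * transforms[i].invBind; the other declarations are taken from the question):
#version 330

//hypothetical uniform: jointTransforms[i] = transforms[i].global * transforms[i].invBind,
//multiplied on the CPU and uploaded once per frame
uniform mat4 jointTransforms[16];

//these declarations are taken from the question
uniform mat4 projection, camera, model;
in vec3 vertex;
in ivec4 jointIndices;
in vec4 weights;

void main()
{
    vec4 position = vec4(vertex, 1.0);
    vec4 skinned = vec4(0.0);
    for(int i = 0; i < 4; ++i)
    {
        skinned += weights[i] * (jointTransforms[jointIndices[i]] * position);
    }
    //apply the usual model/view/projection transforms to the skinned position
    gl_Position = projection * camera * model * skinned;
}
This removes one matrix-matrix multiply per influence in the shader and halves the amount of matrix data uploaded per joint.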
For my little program I need to scale a gizmo by camera distance so that objects can be moved conveniently at any time. I think I have two options:
Calculate the distance from the gizmo to the camera, build a scaling matrix, and multiply all the points in a loop:
glm::mat4 scaleMat(1.0f);
scaleMat= glm::scale(scaleMat, glm::vec3(glm::distance(cameraPos,gizmoPos)));
for (int i = 0; i < vertices.size(); i++)
{
vertices[i] = glm::vec3(scaleMat * glm::vec4(vertices[i], 1.0));
}
Clear the scale component of the view (lookAt) matrix, only for the gizmo.
If I use the first way, the scaling accumulates and the gizmo grows every time I change the camera position. I think the second way is more accurate, but how do I do that? Thank you!
If you want to apply a different scaling to the same model each time, you should not manipulate the vertices (in fact, you should never do that), but the model matrix. Through it you can manipulate the object without processing the vertices in code.
I would go with something like this:
glm::mat4 modelMatrix(1.0f);
modelMatrix = glm::scale(modelMatrix,glm::vec3(glm::distance(cameraPos,gizmoPos)));
This will give you the scaled model matrix. Now you only need to pass it to your vertex shader.
You should have, roughly, something like this:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 M;
void main(){
gl_Position = M * vec4(vertexPosition_modelspace,1);
}
I have not tested the code, but it is really similar to the code of one of my projects. There I keep my model matrix around, so the scaling accumulates; but if you pass a brand new matrix to the vertex shader each time, nothing is remembered from the previous frame.
If you need more about passing the uniform to the shader, have a look here at my current project's code.
You can find the scaling inside TARDIS::applyScaling and the shader loading in main.cpp
I'm developing a little 3D engine using OpenGL and GLSL. I currently use texture buffer objects (TBOs) to store all my matrices (projection, view, model and shadow matrices). I did some research on the most efficient way to handle matrices within a graphics engine, without any success. The goal is to store a maximum of matrices in a minimum number of TBOs, and to incur a minimum of state changes and a minimum of exchanges between client code and the GPU (glBufferSubData).
I propose 3 different methods (with their advantages and disadvantages):
Here's a scene example:
1 Camera (1 ProjMatrix, 1 ViewMatrix)
5 boxes (5 ModelMatrix)
Here's an example of a simple vertex shader I use:
#version 400
/*
** Vertex attributes.
*/
layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec2 VertexTexture;
/*
** Uniform matrix buffer.
*/
uniform samplerBuffer matrixBuffer;
/*
** Matrix buffer offset.
*/
uniform int MatrixBufferOffset;
/*
** Output variables.
*/
out vec2 TexCoords;
/*
** Returns matrix4x4 from texture cache.
*/
mat4 Get_Matrix(int offset)
{
return mat4(texelFetch(matrixBuffer, offset),
texelFetch(matrixBuffer, offset + 1),
texelFetch(matrixBuffer, offset + 2),
texelFetch(matrixBuffer, offset + 3));
}
/*
** Vertex shader entry point.
*/
void main(void)
{
TexCoords = VertexTexture;
{
mat4 ModelViewProjMatrix = Get_Matrix(MatrixBufferOffset);
gl_Position = ModelViewProjMatrix * VertexPosition;
}
}
1) The method I currently use: in my vertex shader I use the ModelViewProjMatrix (needed for rasterization (gl_Position)), the ModelViewMatrix (for lighting calculations) and the ModelMatrix. So, to avoid useless calculations within the vertex shader, I've decided to store the ModelViewProjMatrix, the ModelViewMatrix and the ModelMatrix for each mesh node, inlined in the TBO as follows:
TBO = {[ModelViewProj_Box1][ModelView_Box1][Model_Box1]|[ModelViewProj_Box2]...}
Advantages: I don't need to compute the product Proj * View * Model (ModelViewProj, for example) for each vertex in the vertex shader (the matrices are pre-calculated).
Disadvantages: if I move the camera I need to update all the ModelViewProj and ModelView matrices. So, a lot of information to update.
2) I thought about another way, which I think is more efficient: store the projection matrix once, the view matrix once, and then one model matrix per box scene node, like this:
TBO = {[ProjMatrix][ViewMatrix][ModelMatrix_Box1][ModelMatrix_Box2]...}
So my vertex shader will look like this:
#version 400
/*
** Vertex attributes.
*/
layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec2 VertexTexture;
/*
** Uniform matrix buffer.
*/
uniform samplerBuffer matrixBuffer;
/*
** Matrix buffer offset.
*/
uniform int MatrixBufferOffset;
/*
** Output variables.
*/
out vec2 TexCoords;
/*
** Returns matrix4x4 from texture cache.
*/
mat4 Get_Matrix(int offset)
{
return mat4(texelFetch(matrixBuffer, offset),
texelFetch(matrixBuffer, offset + 1),
texelFetch(matrixBuffer, offset + 2),
texelFetch(matrixBuffer, offset + 3));
}
/*
** Vertex shader entry point.
*/
void main(void)
{
TexCoords = VertexTexture;
{
mat4 ProjMatrix = Get_Matrix(MatrixBufferOffset);
mat4 ViewMatrix = Get_Matrix(MatrixBufferOffset + 4);
mat4 ModelMatrix = Get_Matrix(MatrixBufferOffset + 8);
gl_Position = ProjMatrix * ViewMatrix * ModelMatrix * VertexPosition;
}
}
Advantages: the TBO contains exactly the matrices used. The updates are highly targeted (if I move the camera I only update the view matrix, if I resize the window I only update the projection matrix, and if an object moves only its model matrix is updated).
Disadvantages: I need to compute the ModelViewProjMatrix for each vertex within the vertex shader. Plus, if the scene is composed of a huge number of objects, each owning a different model matrix, I probably need to create a new TBO. Consequently, I would lose the proj/view matrix information because I wouldn't be connected to the right TBO, which brings us to my third method.
3) Store the projection and view matrices in one TBO and all the model matrices in one or more other TBOs, as follows:
TBO_0 = {[ProjMatrix][ViewMatrix]}
TBO_1 = {[ModelMatrix_Box1][ModelMatrix_Box2]...}
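For illustration, the vertex shader for this third layout could look roughly like this (untested sketch; cameraBuffer, modelMatrixBuffer and ModelMatrixOffset are just names I made up for this example):
#version 400

layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec2 VertexTexture;

uniform samplerBuffer cameraBuffer;      //TBO_0: [ProjMatrix][ViewMatrix]
uniform samplerBuffer modelMatrixBuffer; //TBO_1: one model matrix per object
uniform int ModelMatrixOffset;           //4 * object index

out vec2 TexCoords;

mat4 Get_Matrix(samplerBuffer buf, int offset)
{
    return mat4(texelFetch(buf, offset),
                texelFetch(buf, offset + 1),
                texelFetch(buf, offset + 2),
                texelFetch(buf, offset + 3));
}

void main(void)
{
    TexCoords = VertexTexture;
    mat4 ProjMatrix  = Get_Matrix(cameraBuffer, 0);
    mat4 ViewMatrix  = Get_Matrix(cameraBuffer, 4);
    mat4 ModelMatrix = Get_Matrix(modelMatrixBuffer, ModelMatrixOffset);
    gl_Position = ProjMatrix * ViewMatrix * ModelMatrix * VertexPosition;
}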
What do you think of my 3 methods? Which one is the best in your opinion?
Thanks a lot in advance for your help!
Solution 3 is what most engines do, except that they use uniform buffers (constant buffers) instead of texture buffers. Also, they don't generally group all the model matrices together in the same buffer; they are usually grouped by object type (because identical objects are drawn at once with instancing) and sometimes by frequency of update (objects that never move share a buffer that never needs to be updated).
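To give you an idea (a rough sketch only; the block names CameraBlock and ModelBlock are made up, and each block would be bound to its own binding point with glUniformBlockBinding/glBindBufferRange), the shader side of solution 3 with uniform buffers could look like this:
#version 400

layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec2 VertexTexture;

//updated only when the camera moves or the window is resized
layout (std140) uniform CameraBlock
{
    mat4 ProjMatrix;
    mat4 ViewMatrix;
};

//rebound per object (or per batch); updated only when the object moves
layout (std140) uniform ModelBlock
{
    mat4 ModelMatrix;
};

out vec2 TexCoords;

void main(void)
{
    TexCoords = VertexTexture;
    gl_Position = ProjMatrix * ViewMatrix * ModelMatrix * VertexPosition;
}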
Also glBufferSubData can be pretty slow; updating a buffer is often slower than just binding a different one, because of all the synchronization happening inside the driver. There is a very good book chapter about that, freely available on the Internet, called "OpenGL Insights: Asynchronous Buffer Transfers" (Google it to find it).
EDIT: The NVIDIA article you linked in the comments is very interesting. They recommend using glMultiDrawElements to issue several draw calls at once (that's the main trick; everything else is there because of that decision). That can reduce the CPU work in the driver a lot, but it also means that it's a lot more complicated to provide all the data required to draw the objects: you have to build/update bigger buffers for the matrices and material values, and you also need something like bindless textures to be able to use different textures for each object. So, interesting, but more complicated.
And glMultiDrawElements is only important if you want to draw a lot of different objects. Their examples have 68,000-98,000 different meshes, which is really a lot. In a game, for example, you usually have lots of instances of the same objects, but only a few hundred different objects at most. In the end, it depends on what your 3D engine needs to render.
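To illustrate the instancing point (again just a sketch; ViewProjMatrix is a made-up uniform holding Proj * View computed on the CPU): when many instances of the same mesh are drawn with glDrawElementsInstanced, each instance can fetch its own model matrix from the buffer using gl_InstanceID:
#version 400

layout (location = 0) in vec4 VertexPosition;

//same idea as TBO_1 in solution 3: one mat4 (4 RGBA32F texels) per instance
uniform samplerBuffer modelMatrixBuffer;
//made-up uniform: Proj * View, computed once per frame on the CPU
uniform mat4 ViewProjMatrix;

void main(void)
{
    int offset = gl_InstanceID * 4;
    mat4 ModelMatrix = mat4(texelFetch(modelMatrixBuffer, offset),
                            texelFetch(modelMatrixBuffer, offset + 1),
                            texelFetch(modelMatrixBuffer, offset + 2),
                            texelFetch(modelMatrixBuffer, offset + 3));
    gl_Position = ViewProjMatrix * ModelMatrix * VertexPosition;
}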
I'm trying to implement shadow mapping in my game. The shaders you see below result in correctly drawn shadows for the game's map, but all the models walking around on the map are completely black.
I suspect the problem lies with the calculation of the world position. The gl_Vertex is not transformed in any way. Because the map is generated with absolute world coordinates, the transformation with the light matrix results in a correct relative position that can be used to perform the shadow mapping.
However, my 3D models are all very close to the origin. So if their coordinates are plainly transformed using the light matrix, they will most likely never be lit.
I'm wondering if this could be the case, and if so, how I could fix it.
Here's my vertex shader:
#version 120
uniform mat4x4 LightMatrixValue;
varying vec4 shadowMapPosition;
varying vec3 worldPos;
void main(void)
{
vec4 modelPos = gl_Vertex;
worldPos=modelPos.xyz/modelPos.w;
vec4 lightPos = LightMatrixValue*modelPos;
shadowMapPosition = 0.5 * (lightPos.xyzw +lightPos.wwww);
gl_Position = ftransform();
}
And the fragment shader:
varying vec4 shadowMapPosition;
varying vec3 worldPos;
uniform sampler2D depthMap;
uniform vec4 LightPosition;
void main(void)
{
vec4 textureColour = gl_Color;
vec3 realShadowMapPosition = shadowMapPosition.xyz/shadowMapPosition.w;
float depthSm = texture2D(depthMap, realShadowMapPosition.xy).r;
if (depthSm < realShadowMapPosition.z-0.0001)
{
textureColour = vec4(0, 0, 0, 1);
}
gl_FragColor= textureColour;
}
I am writing this here because it will not fit above; I hope it solves your problem.
For rendering you have, on the one hand, your models with their model matrices to position them, and on the other hand your view and projection matrices that transform your models into screen space.
To create your shadow map (the simplest approach, and I guess the one you have chosen) you render the scene from the view of the light source, so you apply the view and projection matrices of your light source, which map the x, y and z values to screen space. The x and y values give the position in the image, and z is used for the depth buffer test and as the color you write into your color buffer (which you later use as the shadow map).
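A sketch of that first pass's vertex shader (the uniform names here are placeholders, not something from your code):
#version 120

uniform mat4 matLightViewProjection; //the light's view * projection matrix
uniform mat4 modelMatrix;            //identity if the vertices are already in world space

void main(void)
{
    //the resulting z value is what ends up in the shadow map
    gl_Position = matLightViewProjection * modelMatrix * gl_Vertex;
}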
For the final rendering you load this shadow map and the light's view and projection matrices into your shader. To display the scene you apply the camera's view and projection matrices to the vertices; for the shadow map lookup you apply the light's view and projection matrices to the vertex (just as you did when rendering the shadow map). Applying the light's view and projection matrices to the vertex gives you the same mapping as in the shadow map pass; now you only need to transform the x and y coordinates into texture coordinates, look up the stored z value, and compare it with the calculated one.
Transforming the model's world position into screen space (from the light's point of view)
This part is often done in the vertex or geometry shader:
shadowMapPosition = matLightViewProjection * modelWorldPos;
This is often done in the fragment shader:
shadowMapPosition = shadowMapPosition / shadowMapPosition.w;
// Map from [-1, 1] into [0, 1] so it can be used as a texture coordinate and compared with the stored depth
shadowMapPosition.xyz = shadowMapPosition.xyz * 0.5 + 0.5;
// Subtract a small offset to prevent self-shadowing and moiré patterns
shadowMapPosition.z -= 0.0005;
// Look up the stored z value
float distanceFromLight = texture2D(depthMap, shadowMapPosition.xy).z;
Now just compare the distanceFromLight with the shadowMapPosition.z to see if the object is in shadow or not.
So in your second pass you will do the steps of the shadow mapping again, except that instead of writing out the depth you calculated, you compare it with the one you calculated in the previous pass.
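Applied to your shaders, a rough, untested sketch of the second-pass vertex shader for your models could look like the following; ModelMatrix is a placeholder for whatever transform places a model in the world, and it has to match the transform you also feed into the fixed-function modelview for ftransform():
#version 120

uniform mat4 ModelMatrix;        //identity for the map, the real placement for your models
uniform mat4x4 LightMatrixValue; //the light's view * projection matrix

varying vec4 shadowMapPosition;
varying vec3 worldPos;

void main(void)
{
    //bring the vertex into world space first; this is the step missing for your models
    vec4 modelWorldPos = ModelMatrix * gl_Vertex;
    worldPos = modelWorldPos.xyz / modelWorldPos.w;

    //same transform the shadow map pass used, then map from [-1, 1] to [0, 1]
    vec4 lightPos = LightMatrixValue * modelWorldPos;
    shadowMapPosition = 0.5 * (lightPos.xyzw + lightPos.wwww);

    gl_Position = ftransform();
}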
I am switching my shaders away from relying on the OpenGL fixed-function pipeline. Previously, OpenGL automatically transformed the lights for me, but now I need to do it myself.
I have the following GLSL code in my vertex shader:
//compute all of the light vectors. we want normalized vectors
//from the vertex towards the light
for(int i=0;i < NUM_LIGHTS; i++)
{
//if w is 0, it's a directional light, it is a vector
//pointing from the world towards the light source
if(u_lightPos[i].w == 0)
{
vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));
v_lightDir[i] = transformedLight;
}
else
{
//this is a positional light, transform it by the view*projection matrix
vec4 transformedLight = u_vp_matrix * u_lightPos[i];
//subtract the vertex from the light to get a vector from the vertex to the light
v_lightDir[i] = normalize(vec3(transformedLight - gl_Position));
}
}
What do I replace the ??? with in the line vec3 transformedLight = normalize(vec3(??? * u_lightPos[i]));?
I already have the following matrices as inputs:
uniform mat4 u_mvp_matrix;//model matrix*view matrix*projection matrix
uniform mat4 u_vp_matrix;//view matrix*projection matrix
uniform mat3 u_normal_matrix;//normal matrix for this model
I don't think it's any of these. It's obviously not the mvp matrix or the normal matrix, because those are specific to the current model, and I'm trying to convert to view coordinates. I don't think it's the vp matrix either though, because that includes translations and I don't want to translate a vector.
Can I compute the matrix that I need from what is currently given to the shader, and if so, what do I need to compute? If not, what matrix do I need to add and how should it be computed?
You can use the same view_projection matrix. Yes, that matrix does include a translation, but because the light vector has w = 0 the matrix multiplication will not apply it.
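In other words, the line in question would become (just a sketch, reusing your own uniform names):
//w == 0: a pure direction, so the translation in u_vp_matrix has no effect on it
vec3 transformedLight = normalize(vec3(u_vp_matrix * u_lightPos[i]));
v_lightDir[i] = transformedLight;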
You are supposed to transform the light position/direction into view space on the CPU, not in a shader. It's a uniform; it's done once per frame per light. It doesn't change, so there's no need to have every vertex compute it. It's a waste of time.
VC++ 2010, OpenGL, GLSL, SDL
I am moving over to shaders, and have run into a problem that originally occurred while working with the fixed-function pipeline: the position of the light seems to point in whatever direction my camera faces. With the fixed-function pipeline it was just the specular highlight, which was fixable with:
glLightModelf(GL_LIGHT_MODEL_LOCAL_VIEWER, 1.0f);
Here are the two shaders:
Vertex
varying vec3 lightDir,normal;
void main()
{
normal = normalize(gl_NormalMatrix * gl_Normal);
lightDir = normalize(vec3(gl_LightSource[0].position));
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = ftransform();
}
Fragment
varying vec3 lightDir,normal;
uniform sampler2D tex;
void main()
{
vec3 ct,cf;
vec4 texel;
float intensity,at,af;
intensity = max(dot(lightDir,normalize(normal)),0.0);
cf = intensity * (gl_FrontMaterial.diffuse).rgb +
gl_FrontMaterial.ambient.rgb;
af = gl_FrontMaterial.diffuse.a;
texel = texture2D(tex,gl_TexCoord[0].st);
ct = texel.rgb;
at = texel.a;
gl_FragColor = vec4(ct * cf, at * af);
}
Any help would be much appreciated!
The question is: What coordinate system (reference frame) do you want the lights to be in? Probably "the world".
OpenGL's fixed-function pipeline, however, has no notion of world coordinates, because it uses a modelview matrix, which transforms directly from model coordinates to eye (camera) coordinates. In order to have “fixed” lights, you could do one of these:
The classic OpenGL approach is, every frame, to set up the modelview matrix to be the view transform only (that is, the coordinate system you want to specify your light positions in) and then use glLight to set the position (which is specified to apply the modelview matrix to the input).
Since you are using shaders, you could also have separate model and view matrices and have your shader apply both (rather than using ftransform) to vertices, but only the view matrix to lights. However, this means more per-vertex matrix operations and is probably not an especially good idea unless you are looking for clarity rather than performance.
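A minimal, untested sketch of that second option (every uniform name here is a placeholder you would have to supply from your application):
#version 120

varying vec3 lightDir, normal;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;          //camera transform only, no model transform
uniform mat4 modelMatrix;         //per-object transform
uniform vec4 worldLightPosition;  //light position (w = 1) or direction (w = 0) in world space

void main()
{
    mat4 modelView = viewMatrix * modelMatrix;

    //fine for rigid transforms; use the inverse transpose for non-uniform scaling
    normal = normalize(mat3(modelView) * gl_Normal);

    //only the view matrix is applied to the light, so it stays fixed in the world
    vec4 eyeLight = viewMatrix * worldLightPosition;
    vec4 eyeVertex = modelView * gl_Vertex;
    lightDir = normalize(eyeLight.xyz - eyeVertex.xyz * eyeLight.w);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = projectionMatrix * eyeVertex;
}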