Transforming Vertices by Joints with Smooth Skinning - C++

I'm having an issue when trying to animate a mesh within my game using smooth skinning. I'm exporting a mesh, its joints, and all animation data from Maya, and it's importing properly within the game. I've also got my joints displayed in-game as spheres, just to make sure they are animating properly, and they are. I'm having trouble when transforming the vertices to bone space, and I'm clueless as to why. Here's a basic run-down of what I'm currently doing.
1) Precalculating the inverse of each bone in its bind pose. The bone hierarchy is also set up here; I'm only setting their local matrices, as their world matrices are calculated upon request.
2) Interpolating matrices between keyframes. This appears to be working, as the spheres are animating correctly.
3) Copying the mesh's vertices from the bind pose into a separate buffer to be modified.
4) Transforming each vertex:
For each vertex
    For each influencing bone in vertex
        Vertex tempvert;
        tempvert = vertex.position * local bone inverse matrix;
        tempvert = tempvert * world interpolated bone transform;
        tempvert *= influence weight;
        vertex += tempvert;
5) Updating the GPU with the transformed vertices.
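Expressed as C++ with glm, a simplified sketch of the steps above looks like this (the Influence struct and container names are stand-ins for my actual classes, and glm uses matrix * vector order rather than the vector * matrix order of the pseudocode):

#include <glm/glm.hpp>
#include <vector>

struct Influence { int bone; float weight; };

glm::vec3 skinVertex(const glm::vec3& bindPos,
                     const std::vector<Influence>& influences,
                     const std::vector<glm::mat4>& inverseBind,
                     const std::vector<glm::mat4>& worldInterpolated)
{
    glm::vec3 skinned = bindPos; // the copy taken from the bind pose, modified in place
    for (const Influence& inf : influences)
    {
        glm::vec4 tmp = inverseBind[inf.bone] * glm::vec4(bindPos, 1.0f); // vertex into bone space
        tmp = worldInterpolated[inf.bone] * tmp;  // bone space into the animated world transform
        skinned += inf.weight * glm::vec3(tmp);   // weighted accumulate
    }
    return skinned;
}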
I've also tried switching around local and world transforms for the bones, but to no avail. I'm pretty sure I'm doing something stupid; I just want to make sure that's the case before I go tearing apart my math library.
Thanks for the help.

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2D projection of any part of a sphere can truly be equiangular; this is a mathematical fact (the sphere's non-zero Gaussian curvature rules out any flat mapping that preserves distances, and hence angular sampling rate).
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d) / atan(1.0);
gives the following result:
Compare it with the uncorrected d:
As you can see, the EAC projection shrinks the pixels in the middle by a little bit and expands them near the corners, so that they cover more nearly equal areas.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead refocus your effort on alternative ways of authoring your environment map. I recommend trying an equidistant cylindrical projection for the horizon, capping it with solid colors above/below a fixed latitude, and using billboards for objects that cannot be represented in that projection.
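For concreteness, the direction-to-texture-coordinate math for an equidistant cylindrical (equirectangular) map can be sketched in plain C++ like this (the y-up convention and the function name are my own, for illustration):

#include <cmath>

// Map a view direction to (u, v) in [0,1]^2 on an equidistant cylindrical map:
// longitude maps linearly to u, latitude maps linearly to v (y is up here).
void directionToEquirect(float x, float y, float z, float& u, float& v)
{
    const float pi = 3.14159265358979f;
    const float len = std::sqrt(x * x + y * y + z * z);
    u = (std::atan2(z, x) + pi) / (2.0f * pi); // longitude
    v = (std::asin(y / len) + pi / 2.0f) / pi; // latitude, equidistant in angle
}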
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you were always at the center of the map and the environment were infinitely far away.

I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)

It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertices of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinate of the cube to look up the texture.
Vertex shader:
void main(void) {
    cube_edge = inVertex.xyz;
    vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
    gl_Position = clipPos.xyww;
}
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
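One practical detail, also covered in that LearnOpenGL chapter: with z forced to w, the cube rasterizes at depth exactly 1.0 and would fail the default GL_LESS depth test. A typical host-side sketch, drawing the skybox after the opaque geometry:

glDepthFunc(GL_LEQUAL);   // let fragments at depth exactly 1.0 pass
// ... draw the skybox cube here, last ...
glDepthFunc(GL_LESS);     // restore the default for the rest of the scene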

OpenGL Lighting Artifact (calculating light before translation)

I am trying to create a scene in OpenGL and I am having trouble with my lighting. I believe it has something to do with translating my models from the origin of the world into their respective places.
I only have one light in my scene, placed on the right at the centre of the world; however, you can see the light on the wall at the front of the scene.
I have written my own shaders. I suspect that I'm calculating the lighting too early, as it seems to be calculated before the models are translated around the world; that, or I am using local coordinates rather than world coordinates (I think that's right, anyway...).
(Please ignore the glass; it is using a global light and a different shader.)
Does anyone know if this is indeed the case, or where the best place to find a solution would be?
Below is how I render my models.
glUseProgram(modelShader);

//center floor mat
if (floorMat)
{
    glUniformMatrix4fv(modelShader_modelMatrixLocation, 1, GL_FALSE, (GLfloat*)&(modelTransform.M));
    floorMat->setTextureForModel(carpetTexture);
    floorMat->renderTexturedModel();
}
https://www.youtube.com/watch?annotation_id=annotation_1430411783&feature=iv&index=86&list=PLRwVmtr-pp06qT6ckboaOhnm9FxmzHpbY&src_vid=NH68sIdF-48&v=P3DQXzyjswQ
Turns out I was not calculating the lighting in world space.
Rather than using the vertex position transformed into world space by the model matrix, I was just using the plain vertex position.

HLSL fixed lighting location

Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I'm not sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that my light stays in one place relative to the camera rather than the object itself. I have tried multiplying the view matrix by the input normals, but the same result occurs.
Also, should I be setting the light direction according to the camera instead?
EDIT: removed pastebin link since that is against the rules...
Use camera-dependent values only for transforming the vertex position to view space and projected (clip) space, which the shaders need for clipping and the rasterizer stage. The video card needs to know where to draw your pixel.
For lighting, you normally pass, in addition to the camera-transformed values, the world-space position of the vertex and the world-space normal to the shader stages that need them (i.e. the pixel shader stage for Phong lighting).
So you can set your light position, or better the light direction, in the world-space coordinate system as a global variable for your shaders. With that, the lighting is independent of the camera view position.
If you want an effect like a flashlight, you can set the light position to the camera position and the light direction to your look direction, so the bright parts are always in the center of your viewing frustum.
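As a small host-side illustration of that flashlight setup (using glm; the Camera struct and field names are invented for the example):

#include <glm/glm.hpp>

struct Camera { glm::vec3 position; glm::vec3 forward; };

// Feed the camera's world-space position and look direction to the shader
// each frame as the light parameters for a flashlight effect.
void updateFlashlight(const Camera& cam, glm::vec3& lightPosWS, glm::vec3& lightDirWS)
{
    lightPosWS = cam.position;                 // the light sits at the eye
    lightDirWS = glm::normalize(cam.forward);  // and points where the camera looks
}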
Good luck!

Matrix calculations for GPU skinning

I'm trying to do skeletal animation in OpenGL using Assimp as my model import library.
What exactly do I need to do with the bones' offsetMatrix variable? What do I need to multiply it by?
Let's take for instance this code, which I used to animate characters in a game I worked on. I used Assimp too to load the bone information, and I read the OGL tutorial already pointed out by Nico.
glm::mat4 getParentTransform()
{
    if (this->parent)
        return parent->nodeTransform;
    else
        return glm::mat4(1.0f);
}

void updateSkeleton(Bone* bone = NULL)
{
    bone->nodeTransform = bone->getParentTransform() // this retrieves the transformation one level above in the tree
        * bone->transform       // bone->transform is the assimp matrix assimp_node->mTransformation
        * bone->localTransform; // this is your T * R matrix

    bone->finalTransform = inverseGlobal // the inverse of scene->mRootNode->mTransformation from assimp
        * bone->nodeTransform            // defined above
        * bone->boneOffset;              // which is ai_mesh->mBones[i]->mOffsetMatrix

    for (int i = 0; i < bone->children.size(); i++) {
        updateSkeleton(&bone->children[i]);
    }
}
Essentially, the GlobalTransform as it is referred to in the tutorial Skeletal Animation with Assimp, or more properly the transform of the root node scene->mRootNode->mTransformation, is the transformation from local space to global space. To give you an example: when you create your mesh or load your character in a 3D modeler (let's pick Blender, for instance), it is usually positioned (by default) at the origin and its rotation is set to the identity quaternion.
However, you can translate/rotate your mesh/character from the origin (0,0,0) to somewhere else, and a single scene can even contain multiple meshes with different positions. When you load them, especially if you do skeletal animation, it is mandatory to transform them back into local space (i.e. back at the origin (0,0,0)), and this is the reason why you have to multiply everything by the InverseGlobal (which brings your mesh back to local space).
After that you need to multiply it by the node transform, which is the multiplication of the parentTransform (the transformation one level up in the tree, i.e. the accumulated overall transform), the transform (formerly assimp_node->mTransformation, which is just the transformation of the bone relative to its parent node), and your local transformation (any T * R you want to apply to do forward kinematics, inverse kinematics, or key-frame interpolation).
Finally, there is the boneOffset (ai_mesh->mBones[i]->mOffsetMatrix), which transforms from mesh space to bone space in bind pose, as stated in the documentation.
Here is a link to GitHub if you want to look at the whole code for my Skeleton class.
Hope it helps.
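To actually use the result, you would typically walk the skeleton once per frame and upload every bone's finalTransform as one array uniform. A sketch (the "bones" uniform name and the GLEW header are my assumptions, not part of the code above):

#include <GL/glew.h> // or any loader that provides the GL entry points
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

void uploadBonePalette(GLuint program, Bone* root, const std::vector<Bone*>& bones)
{
    updateSkeleton(root); // recompute every finalTransform for this frame

    std::vector<glm::mat4> palette;
    palette.reserve(bones.size());
    for (Bone* b : bones)
        palette.push_back(b->finalTransform);

    GLint loc = glGetUniformLocation(program, "bones"); // hypothetical uniform name
    glUniformMatrix4fv(loc, (GLsizei)palette.size(), GL_FALSE, glm::value_ptr(palette[0]));
}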
The offset matrix defines the transform (translation, scale, rotation) that takes a vertex in mesh space and converts it to "bone" space. As an example, consider the following vertex and a bone with the following properties:
Vertex Position <0, 1, 2>
Bone Position <10, 2, 4>
Bone Rotation <0, 0, 0, 1> // note: no rotation
Bone Scale <1, 1, 1>
If we multiply a vertex by the offset matrix in this case, we would get a vertex position of <-10, -1, -2>.
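You can verify that number with glm as a standalone check (not part of the answer above): with no rotation or scale, the offset matrix is just a translation by the negated bone position.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cassert>

int main()
{
    glm::mat4 bind = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 2.0f, 4.0f));
    glm::mat4 offset = glm::inverse(bind); // inverse bind pose = offset matrix
    glm::vec4 v = offset * glm::vec4(0.0f, 1.0f, 2.0f, 1.0f);
    assert(v == glm::vec4(-10.0f, -1.0f, -2.0f, 1.0f)); // the vertex in bone space
}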
How do we use this? You have two options for how to use this matrix, which come down to how we store the vertex data in the vertex buffers. The options are:
1) Store the mesh vertices in mesh space
2) Store the mesh vertices in bone space
In the case of #1, we would take the offsetMatrix and apply it to the vertices that are influenced by the bone as we build the vertex buffer. Then, when we animate the mesh, we apply the animated matrix for that bone.
In the case of #2, we would use the offsetMatrix in combination with the animation matrix for that bone when transforming the vertices stored in the vertex buffer. So it would be something like (note: you may have to switch the matrix concatenations around here):
anim_vertex = (offset_matrix * anim_matrix) * mesh_vertex
Does this help?
As I already assumed, the mOffsetMatrix is the inverse bind pose matrix. This tutorial states the correct transformations that you need for linear blend skinning:
You first need to evaluate your animation state. This will give you a system transform from animated bone space to world space for every bone (GlobalTransformation in the tutorial). The mOffsetMatrix is the system transform from world space to bind pose bone space. Therefore, what you do for skinning is the following (assuming that a specific vertex is influenced by a single bone): Transform the vertex to bone space with mOffsetMatrix. Now assume an animated bone and transform the intermediate result back from animated bone space to world space. So:
boneMatrix[i] = animationMatrix[i] * mOffsetMatrix[i]
If the vertex is influenced by multiple bones, LBS simply averages the results. That's where the weights come into play. Skinning is usually implemented in a vertex shader:
vec4 result = vec4(0.0);
for each influencing bone i
    result += weight[i] * (boneMatrix[i] * vertexPos);
Usually, the maximum number of influencing bones is fixed and you can unroll the for loop.
The tutorial uses an additional m_GlobalInverseTransform for the boneMatrix. However, I have no clue why they do that. Basically, this undoes the overall transformation of the entire scene. Probably it is used to center the model in the view.

Using GLSL shaders + lighting / normals

I've got this not-so-small-anymore tile-based game, which is my first real OpenGL project. I want to render every tile as a 3D object. So at first I created some objects, like a cube and a sphere, provided them with vertex normals, and rendered them in immediate mode with flat shading. But since I've got around 10,000 objects per level, it was a bit slow. So I put my vertices and normals into VBOs.
That's where I encountered the first problem: before using VBOs I just push()ed and pop()ed matrices for every object and used glTranslate / glRotate to place them in my scene. But when I did the same with VBOs, the lighting started to behave strangely. Instead of a fixed light position behind the camera, the light seemed to rotate with my objects. When moving 180 degrees around them, I could see only a shadow.
So I did some research. I could not find any answer to my specific problem, but I read that instead of using glTranslate / glRotate, one should implement shaders and provide them with uniform matrices.
I thought "perhaps that could fix my problem too" and implemented a first small vertex shader program which only stretched my objects a bit, just to see if I could get a shader to work before focusing on the details.
void main(void)
{
    vec4 v = gl_Vertex;
    v.x = v.x * 0.5;
    v.y = v.y * 0.5;
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Well, my objects get stretched - but now OpenGL's flat shading is broken. I just get white shades. And I can't find any helpful information. So I've got a few questions:
Can I only use one shader at a time, so that when using my own shader, OpenGL's built-in shading is turned off? Do I have to implement the shading myself in that case?
What about my vertex normals? I read somewhere that there is something like a normal matrix. Perhaps I have to apply operations to my normals as well when modifying vertices?
That your lighting gets messed up by matrix operations means that your calls to glLightfv(..., GL_POSITION, ...) happen in the wrong context (not the OpenGL context, but the state of the matrices, etc.): the light position you pass is transformed by whatever modelview matrix is current at the time of the call.
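For illustration, a fixed-function sketch of the usual fix, assuming you want the light fixed in world space (applyCameraTransform, drawObject, and the object coordinates are hypothetical placeholders):

// glLightfv(GL_POSITION) transforms the given position by the modelview
// matrix that is current at the moment of the call.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyCameraTransform();                        // load only the view transform here

const GLfloat lightPos[] = { 10.0f, 5.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);   // now fixed relative to the world

// Per-object model transforms come afterwards and no longer affect the light.
glPushMatrix();
glTranslatef(objX, objY, objZ);
drawObject();
glPopMatrix();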
Well, my objects get stretched - but now OpenGL's flat shading is broken. I just get white shades.
I think you mean Gouraud shading (flat shading means something different). The thing is: if you're using a vertex shader, you must do everything the fixed-function pipeline did, and that includes the lighting calculation. Lighthouse3D has a nice tutorial at http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/ as does Nicol Bolas at http://arcsynthesis.org/gltut/Illumination/Illumination.html
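On the normal-matrix part of your question: yes, normals need their own transform, namely the inverse transpose of the upper-left 3x3 of the modelview matrix (this keeps them perpendicular to surfaces under non-uniform scaling). A minimal sketch of computing it on the host with glm (the function name is mine):

#include <glm/glm.hpp>

// Normal matrix = inverse transpose of the modelview's 3x3 part.
glm::mat3 normalMatrix(const glm::mat4& modelView)
{
    return glm::transpose(glm::inverse(glm::mat3(modelView)));
}

In your replacement vertex shader you would then transform each normal with this matrix before doing the lighting calculation.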