Unexpected transformationMatrix and offset Matrix with Skeletal Animation - c++

What I use:
I am trying to make skeletal animation work. I dont read the .fbx file directly into the Program but I convert it to a binary at first. I tried to understand how transformationMatrix and offsetMatrix is supposed to work. This is what I understood: In order to later be able to run an animation, we need to find a way to make a moving Bone affect its vertices but also the bones connected to it. So the idea is to use transformation matrices, which describe the coordinate system of a bone or a node and we multiply along these paths to a bone and then multiply by is offsetMatrix to be back in objectSpace. I think by now I tried every possible combination of multiplying these but I always get something wrong. Then I looked at the values a couple of times and to me it is not obvious at all how this should work. Correct me if I am wrong but my expectation is that when in BindPose and using the offsetMatrices and transformationMatrices from the Assimp import I must result with an identity Matrix, because I want my model in bind pose just as without those Matrices. These are the transformation Matrices from all nodes up to the first bone:
The Root Node is identity
Armature mTransformationMatrix:
100 0 0 0
0 -1.6e-05 100 0
0 -100 -1.629e-05 0
0 0 0 1
Bone mTransformationMatrix:
1 0 0 0
0 1.6e-07 -1 0
0 1 -1.6e-07 0
0 0 0 1
I expect multiplying those two to result in something like identity*100.
mOffsetMatrix of the corresponding Bone:
0.38 0 0 0
0 0 -1 0
0 0.28 0 0
0 0 1.67 1
In my opinion this doesn't help me at all. So either my expectation of ending up with an identity matrix is wrong, or the offsetMatrix is.
In case you consider it important what my model looks like:
Edit: I forgot to mention: to read in the aiMatrix4x4 I use
// Assimp stores aiMatrix4x4 row-major, glm expects column-major, hence the transpose.
static inline glm::mat4 mat4_cast(const aiMatrix4x4& m) { return glm::transpose(glm::make_mat4(&m.a1)); }
But the transformation matrices I listed in this report are printed directly from the scene, so only the offsetMatrix is transposed. Either way, it doesn't change a thing.

I found the problem. I also have to transform from mesh space back to object space. The offsetMatrix is the inverse of the multiplied node transformation matrices, additionally multiplied by the inverse of the transformation matrix leading to the mesh, which sits on a different node.
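In code, the combination looks roughly like this (a sketch only; names such as globalInverse, boneIndices and finalBoneMatrices are placeholders, not my actual variables):

// Sketch: walk the node hierarchy, accumulate the node transformations,
// and apply the bone's offset matrix last.
#include <assimp/scene.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <map>
#include <string>
#include <vector>

static inline glm::mat4 mat4_cast(const aiMatrix4x4& m) { return glm::transpose(glm::make_mat4(&m.a1)); }

// Placeholders for data filled while loading the mesh.
std::map<std::string, glm::mat4> boneOffsets;       // offsetMatrix per bone name
std::map<std::string, int>       boneIndices;       // bone name -> index into finalBoneMatrices
std::vector<glm::mat4>           finalBoneMatrices; // what gets uploaded to the vertex shader

void readNodeHierarchy(const aiNode* node, const glm::mat4& parentTransform, const glm::mat4& globalInverse)
{
    // In bind pose this is the node's own transformation; during an animation it
    // would be replaced by the interpolated keyframe transform for this node.
    glm::mat4 globalTransform = parentTransform * mat4_cast(node->mTransformation);

    auto it = boneIndices.find(node->mName.C_Str());
    if (it != boneIndices.end()) {
        // globalInverse undoes the transforms leading to the mesh's node, so a model in
        // bind pose comes out unchanged; the offset matrix maps mesh space to bone space.
        finalBoneMatrices[it->second] = globalInverse * globalTransform * boneOffsets[node->mName.C_Str()];
    }

    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        readNodeHierarchy(node->mChildren[i], globalTransform, globalInverse);
}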

Related

It is some kind of OpenGL in TCL, but what language is this?

This is TCL and OpenGL, but I don't know exactly which language it is, so I cannot find the documentation for it. In particular, I need to understand all the attributes on the OGL line.
global Qu
gl matrixmode projection
gl pushmatrix
gl loadidentity
gl ortho 0 50. 0 50. -1. 1.
gl matrixmode modelview
gl pushmatrix
gl loadidentity
gl color 1 1 1 1
if {$Qu(Speed) >= 30 } {
    OGL drawtex sans-bold "Speed 3" -center -pos 25 47 0 -dir 2 0 0 -up 0 2 0
    OGL drawtex sans "[format %#.3g $Qu(Speed)]" -center -pos 25 44 0 -dir 3 0 0 -up 0 3 0
}
The function of this code is to display the speed as two lines of text on screen, when speed>=30.
Well, the first lines from gl matrixmode projection to gl color 1 1 1 1 are pretty simple OpenGL functions (deprecated, fixed-function OpenGL actually). The other lines seem self-explanatory; still, you can try changing them slightly to see what effect each parameter has.
I think it's as follows (only a guess):
OGL drawtex: Command for drawing text
sans-bold: Font family or file name.
"Speed 3": Simple text
"[format %#.3g $Qu(Speed)]": Formatted text which inserts the speed into the string.
-center: Text is centered around its position.
-pos, -dir and -up: Position, direction and up vector
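For reference, the gl ... lines map one-to-one onto the old fixed-function OpenGL calls. A rough C++ equivalent (the OGL drawtex lines are a custom text command of that Tcl binding, so there is no standard GL call for them):

// Fixed-function OpenGL equivalent of the "gl ..." commands above.
#include <GL/gl.h>

void setupSpeedOverlay()
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, 50.0, 0.0, 50.0, -1.0, 1.0);   // 2D coordinate system, 0..50 in x and y

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);          // white, fully opaque

    // The two "OGL drawtex ..." lines would draw the text here using whatever
    // text helper the Tcl binding provides; plain OpenGL has no text drawing.
}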

where in a COLLADA .dae file is the information I would require to scale a 3d model for opengl?

The vertices and normals for the object are all stored as floats inside the COLLADA file I can export from Google SketchUp. I want to find the information that tells me how many meters one float unit corresponds to, but I can't seem to find it. I see that there is a matrix inside the COLLADA file (included below), and the 1.968504 value looks suspicious judging by the range of vertex values inside the file and the measurements used in the SketchUp drawing of the model, but it appears to be a translation, because COLLADA uses row-major format, so the 1.968504 is the x translation. I also noticed there is an element called unit in the XML which may be related, but I can't figure out what to use.
Ideally I need to know the constant that relates a meter to one unit of the floats in each dimension, and then I need to scale all the values, which I could probably work out how to do if I understood which information from the file I need.
This is the matrix xml element:
<node id="ID2" name="instance_0">
<matrix>1 0 0 1.968504 0 1 0 0 0 0 1 0 0 0 0 1</matrix>
<instance_node url="#ID3" />
</node>
This is the unit xml element:
<unit meter="0.0254" name="inch" />
Those 16 numbers are a 4x4 matrix that combines information about translation, rotation and scale. If m is the matrix, then the scale vector is, using GLM as notation:
glm::vec3 scale = glm::vec3(glm::length(glm::vec3(m[0][0], m[0][1], m[0][2])),
glm::length(glm::vec3(m[1][0], m[1][1], m[1][2])),
glm::length(glm::vec3(m[2][0], m[2][1], m[2][2])));
This is a 3-component vector because scaling can be different in every axis.
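As for the unit element: <unit meter="0.0254" name="inch" /> means one unit in the file equals 0.0254 meters, i.e. the model is authored in inches. A small sketch of putting the two together (unitMeter and nodeMatrix are placeholders for the values parsed from the file):

#include <glm/glm.hpp>

// Placeholders for values parsed from the COLLADA file.
const float     unitMeter  = 0.0254f;           // from <unit meter="0.0254" .../>
const glm::mat4 nodeMatrix = glm::mat4(1.0f);   // the node's <matrix>, transposed into column-major

// Convert a vertex from file units to meters, applying the node transform first.
glm::vec3 toMeters(const glm::vec3& vertexInFileUnits)
{
    glm::vec4 transformed = nodeMatrix * glm::vec4(vertexInFileUnits, 1.0f);
    return glm::vec3(transformed) * unitMeter;
}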

How does a projection Matrix work?

I have to write a paper for my A-Levels about 3D programming, but I have a serious problem understanding the perspective projection matrix, and I need to explain the matrix fully and in detail. I've searched a lot of websites and YouTube videos on this topic, but very few even try to answer the question of why the matrix has these values in those places. Based on http://www.songho.ca/opengl/gl_projectionmatrix.html I was able to find out how the w-row works, but I don't understand the other three.
I decided to use the "simpler" version for symmetric viewports only (right-handed coordinates):
I am very thankful for every attempt to explain the first three rows to me!
The core reason for the matrix is to map the 3D coordinates to a 2D plane and have more distant objects be smaller.
For just this, a much simpler matrix suffices (assuming your camera is at the origin and looking along the Z axis):
1 0 0 0
0 1 0 0
0 0 0 0
0 0 1 0
After multiplying with this matrix and then renormalizing the w coordinate you have exactly that. Each x,y,z,1 point becomes x/z,y/z,0,1.
However, there is no depth information (Z is 0 for all points), so a depth buffer/test won't work. For that we can add a parameter to the matrix so the depth information remains available:
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0
Now the resulting point contains the inverse depth in the Z coordinate. Each x,y,z,1 point becomes x/z,y/z,1/z,1.
The extra parameters in the real projection matrix are the result of mapping the coordinates into the (-1,-1,-1) to (1,1,1) device box (the bounding box outside of which points are not drawn) using a scale and a translate.
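To check this numerically, here is a small GLM sketch that multiplies an arbitrary point (2, 4, 8) by the second matrix above and renormalizes by w:

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    // The depth-preserving matrix from above. GLM is column-major, but this
    // particular matrix is symmetric, so writing its rows as columns gives the
    // same matrix.
    glm::mat4 P(1, 0, 0, 0,
                0, 1, 0, 0,
                0, 0, 0, 1,
                0, 0, 1, 0);

    glm::vec4 p(2.0f, 4.0f, 8.0f, 1.0f);   // arbitrary example point
    glm::vec4 clip = P * p;                 // (2, 4, 1, 8)
    glm::vec4 ndc  = clip / clip.w;         // (0.25, 0.5, 0.125, 1) = (x/z, y/z, 1/z, 1)

    std::printf("%f %f %f %f\n", ndc.x, ndc.y, ndc.z, ndc.w);
    return 0;
}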

Error control coding for a practical application

I’m doing a project where a device is built to measure the girth of a rubber tree in a rubber plantation.
I need to give an identity to each tree to store the measurements of each tree.
The ID of each tree contains 33 bits (in binary). For error detection and correction I'm hoping to encode this 33-bit word into a code word (using an error control coding technique) and generate a 2D matrix (a color matrix, with red and cyan squares representing 1s and 0s). The 2D matrix will represent the coded word. This matrix will be pasted on the trunk of the tree, and a camera (the device) will be used to take an image of the 2D matrix; the code will then be decoded and the ID of the tree recovered.
I'm looking for the best scheme to implement this. I thought of cyclic codes, but since the data word is 33 bits, cyclic codes seem a bit complicated.
Can someone please suggest the best way (or at least a good way) to implement this?
Additional info: the image is taken in a forest environment (low-light conditions). A color matrix is to be used because of the environment (the bark of the tree is dark, so a black-and-white matrix would not be appropriate).
One way to do it is to use a 2D parity check code. The resulting codeword is a matrix, and it has single error correction (SEC) capability.
Since your information part (the tree ID) has 33 bits, you may need to add a few dummy bits to make the information part a 2D rectangle (say, 6x6). If a tree's ID is 1010_1010_1010_1010_1010_1010_1010_1010_1, then by adding three more 0s we have it as:
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 1 0 | 1
1 0 1 0 0 0 | 0
—————————————
0 0 0 0 1 0 1
Then you get an (n, k, d) = (49, 36, 4) code, which can correct single-bit errors.
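Just to make the layout above concrete, here is a small C++ sketch of the encoder (even parity, row-by-row fill, three zero padding bits at the end); the 7x7 bit block would then be rendered as the red/cyan color matrix:

#include <array>
#include <bitset>
#include <cstdio>

// Encode a 33-bit tree ID into a 7x7 two-dimensional parity-check block:
// 6x6 data bits (33 ID bits + 3 zero padding bits), one parity bit per row,
// one parity bit per column, and a corner bit that is the overall parity.
std::array<std::array<int, 7>, 7> encodeTreeId(const std::bitset<33>& id)
{
    std::array<std::array<int, 7>, 7> block{};  // zero-initialized

    // Fill the 6x6 data area row by row, MSB first; bits beyond 32 are padding zeros.
    for (int i = 0; i < 36; ++i)
        block[i / 6][i % 6] = (i < 33 && id.test(32 - i)) ? 1 : 0;

    // Even parity for each row and column; block[6][6] is the parity of all data bits.
    for (int r = 0; r < 6; ++r)
        for (int c = 0; c < 6; ++c) {
            block[r][6] ^= block[r][c];   // row parity
            block[6][c] ^= block[r][c];   // column parity
            block[6][6] ^= block[r][c];   // overall parity
        }
    return block;
}

int main()
{
    // The example ID from above: alternating 10 pattern ending with a final 1 (33 bits).
    std::bitset<33> id("101010101010101010101010101010101");
    auto block = encodeTreeId(id);
    for (const auto& row : block) {
        for (int b : row) std::printf("%d ", b);
        std::printf("\n");
    }
    return 0;
}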

OpenGL glut glTranslate glRotate glScale matrices

I'm looking for an explanation (or an image) of the matrix and how it changes when applying translate, rotate and scale to it... (one cell with sin(angle), and another cell with the x coordinate of the translation)
For now, ignore translation, it's a slightly trickier concept than rotation and scale.
The way to think about this is that each matrix defines a change in the basis vectors. Given a standard co-ordinate system, your basis vectors are (1,0,0), (0,1,0) and (0,0,1). For now, I'm just going to assume a 2D system, as the concepts carry through, but it's less work.
I'm also assuming column-major, which is in fact what fixed-function OpenGL uses; if your maths library stores matrices row-major, transpose them.
The basis vectors, as defined before, can be put in matrix form. This simply puts each vector as a column in the matrix. Therefore, to transform from the basis vectors to the basis vectors (i.e. no change), we would use the following matrix. This is also called the "identity matrix", since it doesn't do anything to its input (similar to how *1 is the identity of multiplication).
2D 3D
(1 0) (1 0 0)
(0 1) (0 1 0)
(0 0 1)
I've included the 3D version for completeness sake, but that's as far as I'll be taking 3D.
A scale matrix can be seen as "stretching" the axes. If the axes are twice as large, the intervals on them will be twice as far apart, thus, the contents will be larger. Take this as an example
(2 0)
(0 2)
This will change the basis vectors from (1, 0) and (0, 1) to (2, 0) and (0, 2), thus making the whole shape represented twice as large. Diagrammatically, see below.
Before After
6| 3|
5| |
4| 2|-------|
3| | |
2|--| 1| |
1|__|___________ |_______|______
0 1 2 3 4 5 6 7 0 1 2 3
The same then happens for rotation, although we use different values. The values for a rotation matrix are as follows:
(cos(x) -sin(x))
(sin(x) cos(x))
This will effectively rotate each axis around the angle x. To really make sense of this, brush up on your trig and assume each column is a new basis vector ;).
Now, translation is a little trickier. For this, we add an extra column at the end of the matrix, which for all other operations just has a 1 in its last row (i.e. it is an identity of sorts). For translation, we fill this column in as follows:
(1 0 x)
(0 1 y)
(0 0 1)
This is 3D in a sense, but not in the form you will be used to. You can model this as moving along the Z basis co-ordinate (and remember, we're working in 2D here!), assuming your model exists at Z=1. This effectively skews the shape in that extra dimension, but again, as we're working in 2D, it is flattened, so we don't perceive the third dimension. If we were working in 3D here, this would actually be the fourth dimension, as can be seen here:
(1 0 0 x)
(0 1 0 y)
(0 0 1 z)
(0 0 0 1)
Again, the "fourth dimension" isn't seen, but we instead move along it and flatten. It's easier to get your head around it in 2D space first, then try and extrapolate. In 3D space, this fourth dimension vector is called w, so your models implicitly lie at w=1.
Hope this helps!
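As a footnote, if you want to see exactly which cells end up holding sin(angle) and the translation's x (which is what the question asks), a small GLM sketch can print them; glm::translate, glm::rotate and glm::scale build the same kind of matrices the old glTranslatef/glRotatef/glScalef calls would, and the chosen angle and offsets here are arbitrary:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(3.0f, 5.0f, 0.0f));
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f),
                              glm::vec3(0.0f, 0.0f, 1.0f));   // rotate about Z
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f, 2.0f, 1.0f));

    // GLM is column-major: T[3] is the last column, so T[3][0] and T[3][1]
    // are the x and y translation; R[0][1] is sin(30 deg); S[0][0] is the x scale.
    std::printf("translation x, y : %f %f\n", T[3][0], T[3][1]);   // 3, 5
    std::printf("rotation sin(30) : %f\n",    R[0][1]);            // ~0.5
    std::printf("scale x          : %f\n",    S[0][0]);            // 2
    return 0;
}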
EDIT: As an aside, this page is what helped me to understand translation matrices. It has some decent diagrams, so hopefully it will be more helpful:
http://www.blancmange.info/notes/maths/vectors/homo/