Assimp + COLLADA models with bones = incorrect vertex positions (C++)

I'm using Assimp to load COLLADA models created and exported with Blender v2.7, but I've noticed a strange issue. Whenever I apply transformations to a mesh in Blender's "Object mode" instead of "Edit mode", the resulting transformations are applied not to the vertices I read from the Assimp importer data, but to the transformation matrix of the aiNode that contains the mesh.
That's not really a problem, since I can read the vertices of the mesh and then multiply them by the node's transformation matrix (accumulated through its parents) to obtain the vertices of the mesh in the correct position.
The problem arises when I try to do the same with meshes that have bones. I don't know why, but in this case the transformations I applied in "Object mode" are applied neither to the vertices I read directly from the mesh nor to the aiNode's transformation matrix.
Can someone explain to me how to get the correct positions of the vertices of a mesh with bones using Assimp and COLLADA models?

Maybe updating the COLLADA importer/exporter can solve this.
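For reference, here is a minimal CPU-skinning sketch, assuming a scene imported by Assimp: each bone's mOffsetMatrix takes a vertex from mesh space into bone space, and the bone node's accumulated mTransformation takes it back into scene space, picking up any "Object mode" transform stored on the node hierarchy along the way. The helper names (GlobalTransform, SkinBindPose) are illustrative, not Assimp API:

    #include <assimp/scene.h>
    #include <vector>

    // Accumulate a node's global transform by walking up the parent chain.
    static aiMatrix4x4 GlobalTransform(const aiNode* node)
    {
        aiMatrix4x4 t = node->mTransformation;
        for (const aiNode* p = node->mParent; p != nullptr; p = p->mParent)
            t = p->mTransformation * t;
        return t;
    }

    // CPU skinning of the bind pose: sum each bone's weighted contribution.
    static std::vector<aiVector3D> SkinBindPose(aiScene* scene, const aiMesh* mesh)
    {
        std::vector<aiVector3D> out(mesh->mNumVertices, aiVector3D(0.0f, 0.0f, 0.0f));
        for (unsigned b = 0; b < mesh->mNumBones; ++b) {
            const aiBone* bone = mesh->mBones[b];
            aiNode* node = scene->mRootNode->FindNode(bone->mName);
            // mOffsetMatrix: mesh space -> bone space; GlobalTransform: back to scene space.
            const aiMatrix4x4 skin = GlobalTransform(node) * bone->mOffsetMatrix;
            for (unsigned w = 0; w < bone->mNumWeights; ++w) {
                const aiVertexWeight& vw = bone->mWeights[w];
                aiVector3D v = skin * mesh->mVertices[vw.mVertexId];
                v *= vw.mWeight;
                out[vw.mVertexId] += v;
            }
        }
        return out;
    }

For an animated pose you would do the same, but replace each node's mTransformation with the keyframe-interpolated transform before accumulating.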

Related

How to use Assimp with the fixed-function pipeline?

I have learned how to import a .obj model with Assimp and render it with shaders.
However, is it possible to import a .obj with Assimp and render it through the fixed-function pipeline, so that I can program against the OpenGL API more easily?
It shouldn't change significantly between the two; through Assimp you should get your vertex positions, normals, and UV coordinates, which are independent of OpenGL.
What will change is that you won't use a VAO/VBO; instead you will have to send each vertex attribute "by hand", with

    glTexCoord2dv(your uv); glNormal3dv(your normal); glVertex3dv(your vertex);

for each of your faces and vertices.
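Putting that together, here is a hedged immediate-mode sketch over an Assimp mesh, assuming it was imported with aiProcess_Triangulate and has normals and one UV channel:

    #include <assimp/scene.h>
    #include <GL/gl.h>

    // Send every vertex attribute "by hand" through the fixed-function pipeline.
    void DrawMeshFixedFunction(const aiMesh* mesh)
    {
        glBegin(GL_TRIANGLES);
        for (unsigned f = 0; f < mesh->mNumFaces; ++f) {
            const aiFace& face = mesh->mFaces[f];
            for (unsigned i = 0; i < face.mNumIndices; ++i) {
                const unsigned idx = face.mIndices[i];
                if (mesh->HasTextureCoords(0))
                    glTexCoord2f(mesh->mTextureCoords[0][idx].x,
                                 mesh->mTextureCoords[0][idx].y);
                if (mesh->HasNormals())
                    glNormal3fv(&mesh->mNormals[idx].x);
                glVertex3fv(&mesh->mVertices[idx].x);
            }
        }
        glEnd();
    }

This relies on aiVector3D storing x, y, z as contiguous floats, which is why &mesh->mNormals[idx].x can be passed to the *3fv functions.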
Edit:
The Wavefront OBJ format uses only one set of UV coordinates per vertex, so all your textures will use the same UV map. If you have textures that use multiple UV maps, you should look into another format like .fbx. But this has nothing to do with the fixed vs. programmable pipeline: once the files are imported by Assimp, you are done; all that changes are the functions used to send the data. Also, the material data of an .obj file is very limited, so all you'll have is the name of the texture used and its channel. Since materials are heavily linked to your rendering pipeline, some information will always be lost.

OpenGL - texture mapping a 3D object

I have a model of a skull loaded from an .obj file, based on this tutorial. While I understand texture mapping for a cube (draw a triangle on the texture in the [0,1] range, pick one of the six sides, pick one of the two triangles on that side, and map it to your texture triangle), I can't come up with any solution for texture mapping my skull. There are a few thousand triangles on it, and I think mapping them manually is clearly the wrong approach.
Is there any solution to this problem? I'd appreciate any piece of code, since it may tell me more than just a description of the solution.
You can generate your UV coordinates automatically, but this will probably produce bad-looking output except for very simple textures.
For detailed textures that have eyes, ears, etc., you need to create your UV coordinates by hand in a 3D modeling tool such as Blender, 3DS Max, etc. There are a lot of tutorials all over the internet on how to do that. (https://www.youtube.com/watch?v=eCGGe4jLo3M)
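To illustrate the automatic option, here is a hedged sketch of a simple spherical projection; the types and names are illustrative, and the result will show distortion and a seam:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    // Project a vertex onto a sphere around `center` and use the two
    // angles as texture coordinates.
    Vec2 SphericalUV(const Vec3& p, const Vec3& center)
    {
        const float pi = 3.14159265f;
        const float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        const float r = std::sqrt(dx * dx + dy * dy + dz * dz);
        const float theta = std::atan2(dz, dx);   // longitude, in [-pi, pi]
        const float phi   = std::acos(dy / r);    // latitude, in [0, pi]
        return { (theta + pi) / (2.0f * pi), phi / pi };
    }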

Placing Camera at different positions in world space

Is there a way to manipulate the field of view of the camera when the camera is at two different positions in world space?
For example, in the first position, multiple mesh parts are transformed in different directions around the origin (where the camera looks) until they form one mesh. This is done by calling glm::translate(), glm::rotate(), etc. before loading the vertices into the VBO.
In the second position, I want to transform the whole mesh (from above). Since I already loaded everything needed into the VBO and my models are drawn, I can't draw my new transformed mesh. Is there a way to draw my new transformed mesh without loading the vertices into the VBO again?
And if I do have to load my VBO again, how do I go about it, since loading the VBO depends on how many parts the mesh is divided into?
Loading the vertex data into VBOs and transforming it are two unrelated things: calls like glm::translate() or glm::rotate() will not have any influence on the buffer contents; they only change some matrices. As long as you do not apply a transformation to your vertex data when you upload it to the buffer, no one else does. Typically, those transformation matrices are applied when drawing the objects (by the vertex shader, on the GPU), so one can have moving objects and a moving camera without having to respecify the geometry data.
So in your case, it is enough to just change the projection and view matrices and draw the objects again with those new matrices applied.
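A minimal sketch of that idea, assuming a shader program with "model" and "view" mat4 uniforms and a GL loader such as GLEW (the uniform names and the helper are illustrative):

    #include <GL/glew.h>
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // The VAO/VBO were filled once at load time; per-draw transforms
    // only touch uniforms, never the buffer contents.
    void DrawWithTransforms(GLuint program, GLuint vao, GLsizei vertexCount,
                            const glm::mat4& model, const glm::mat4& view)
    {
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "model"),
                           1, GL_FALSE, glm::value_ptr(model));
        glUniformMatrix4fv(glGetUniformLocation(program, "view"),
                           1, GL_FALSE, glm::value_ptr(view));
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount); // same geometry, new transform
    }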

Using a Shader to animate a mesh

I have been working on my low-level OpenGL understanding, and I've finally come to how to animate 3D models. Nowhere I look tells me how to do skeletal animation. Most resources use some kind of 3D engine and just say "load the skeleton" or "apply the animation", but not how to load a skeleton or how to actually move the vertices.
I'm assuming each bone has a 4x4 matrix of the translation/rotation/scale for the vertices it's attached to, so that when the bone is moved, the attached vertices move by the same amount.
For skeletal animation, I was guessing that you would pass the bone(s) to the shader, so that in the vertex shader I can move the current vertex before it goes on to the fragment shader. If I have a keyframed animation, I send the current bone and the next keyframe's bone to the shader, along with the current time between frames, and interpolate the points between bones based on how much time there is between keyframes.
Is this the correct way to animate a mesh, or is there a better way?
Well, the method of animation depends on the format and the data that's written in it. Some formats give you vectors, some use matrices. I gotta admit I came to this site to ask a similar question, but I specified the format (I was using *.x files, you can check the topic), and I got an answer.
Your idea of the subject is correct. If you want a sample implementation, you can find one on the OpenGL wiki.
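For a concrete picture, here is a hedged sketch of the common approach: each frame, the CPU interpolates every bone's keyframes (typically a lerp of translations and a slerp of rotation quaternions) into a palette of matrices and uploads it as a uniform array; the vertex shader then blends the matrices using the vertex's bone indices and weights. All names here (uBones, aBoneIds, MAX_BONES, ...) are illustrative:

    // Vertex shader source embedded as a C++ string; the attribute layout
    // and the bone count limit are assumptions, not fixed by any standard.
    const char* kSkinningVertexShader = R"(
    #version 330 core
    const int MAX_BONES = 64;
    uniform mat4 uBones[MAX_BONES];  // keyframe-interpolated on the CPU each frame
    uniform mat4 uViewProj;
    layout(location = 0) in vec3  aPosition;
    layout(location = 1) in ivec4 aBoneIds;  // up to 4 bone influences per vertex
    layout(location = 2) in vec4  aWeights;  // influence weights, summing to 1
    void main()
    {
        mat4 skin = aWeights.x * uBones[aBoneIds.x]
                  + aWeights.y * uBones[aBoneIds.y]
                  + aWeights.z * uBones[aBoneIds.z]
                  + aWeights.w * uBones[aBoneIds.w];
        gl_Position = uViewProj * skin * vec4(aPosition, 1.0);
    }
    )";

Note that the integer aBoneIds attribute must be uploaded with glVertexAttribIPointer, not glVertexAttribPointer.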

Texture mapping with cylinder intermediate surface manually

I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple 3D object (in my case, a cylinder). I also know that you then map the intermediate surface onto the object's surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details, such as which values to use. I also don't know how to take a vertex coordinate of my object and get the cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL with shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help with pseudocode for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
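A minimal sketch of that conversion, assuming the cylinder's axis is the model's y axis (the types and names are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct UV   { float u, v; };

    // yMin/yMax: the model's vertical extent, used to normalize v into [0, 1].
    UV CylindricalUV(const Vec3& p, float yMin, float yMax)
    {
        const float pi = 3.14159265f;
        const float theta = std::atan2(p.z, p.x);   // angle around the y axis
        const float u = (theta + pi) / (2.0f * pi); // wrap [-pi, pi] to [0, 1]
        const float v = (p.y - yMin) / (yMax - yMin);
        return { u, v };
    }

The radius r drops out entirely, which is exactly why this behaves like projecting the surface onto a surrounding cylinder.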