How to use Assimp with the fixed-function pipeline? (OpenGL)

I have learned how to import a .obj with Assimp and render it with shaders.
However, is it possible to import a .obj with Assimp and render it with the fixed-function pipeline,
so that I can program against the legacy OpenGL API more easily?

It shouldn't change significantly between the two: through Assimp you still get your vertex positions, normals and UV coordinates, which are independent of OpenGL.
What changes is that you won't use a VAO/VBO; instead you send each vertex attribute "by hand" with
glTexCoord2dv(your uv); glNormal3dv(your normal); glVertex3dv(your vertex);
for each of your faces and vertices.
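For example, a minimal immediate-mode sketch (the function name is illustrative; it assumes the scene was imported with aiProcess_Triangulate):

    #include <assimp/scene.h>
    #include <GL/gl.h>

    // Draw one aiMesh with the fixed-function pipeline.
    void DrawMeshImmediate(const aiMesh* mesh)
    {
        glBegin(GL_TRIANGLES);
        for (unsigned f = 0; f < mesh->mNumFaces; ++f) {
            const aiFace& face = mesh->mFaces[f];
            for (unsigned i = 0; i < face.mNumIndices; ++i) {
                unsigned idx = face.mIndices[i];
                if (mesh->HasTextureCoords(0)) {        // first UV channel
                    const aiVector3D& uv = mesh->mTextureCoords[0][idx];
                    glTexCoord2f(uv.x, uv.y);
                }
                if (mesh->HasNormals()) {
                    const aiVector3D& n = mesh->mNormals[idx];
                    glNormal3f(n.x, n.y, n.z);
                }
                const aiVector3D& v = mesh->mVertices[idx];
                glVertex3f(v.x, v.y, v.z);
            }
        }
        glEnd();
    }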
Edit:
The Wavefront OBJ format supports only one set of UV coordinates per vertex, so all your textures will share the same UV map. If you have textures that use multiple UV maps, you should look into another format such as .fbx. But this has nothing to do with the fixed versus programmable pipeline: once the file is imported by Assimp you are done, and all that changes are the functions used to send the data. Also, the material data of an OBJ file is very limited, so all you will get is the name of each texture used and its channel. Since materials are heavily tied to the rendering pipeline, some information will always be lost.
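For reference, a minimal sketch of reading that limited OBJ material data back out of Assimp (the function name is illustrative):

    #include <assimp/scene.h>
    #include <assimp/material.h>
    #include <cstdio>

    void PrintObjMaterial(const aiMaterial* mat)
    {
        // The texture name (path) stored in the .mtl file, if any.
        aiString path;
        if (mat->GetTexture(aiTextureType_DIFFUSE, 0, &path) == AI_SUCCESS)
            std::printf("diffuse texture: %s\n", path.C_Str());

        // The basic color terms (Kd here); everything else is pipeline-specific.
        aiColor3D kd(1.f, 1.f, 1.f);
        if (mat->Get(AI_MATKEY_COLOR_DIFFUSE, kd) == AI_SUCCESS)
            std::printf("Kd = %.3f %.3f %.3f\n", kd.r, kd.g, kd.b);
    }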

Related

OpenGL render .obj files with multiple materials and textures

I'm writing a parser for .obj files with multiple materials and groups (so I'm also parsing usemtl statements and material files). I can load and render the vertices, but how do I deal with the different textures and materials?
Do I render each material one by one, or use one giant shader that selects the material by ID? And how do I store the different textures on the GPU? (Currently I am using GL_TEXTURE_2D_ARRAY, but then all the textures must have the same size.)
To handle different materials: each object carries material specifications such as ambient_color, diffuse_color and specular_color. You simply pass these values as uniforms to the fragment shader and render each object with its own material specs.
You can also sample several textures simultaneously in one fragment shader (the limit is implementation-dependent; query GL_MAX_TEXTURE_IMAGE_UNITS), so you can render an object with more than one texture. Most of the time, though, an object is made of groups and each group has just one texture, so a single sampler2D in the fragment shader is enough; only the uniform values you pass for the texture change.
The efficient way to handle this is to render the groups that share a texture together, which avoids lots of texture changes.
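A hedged sketch of both points, assuming the groups and their GL objects already exist (all names here are illustrative):

    #include <GL/glew.h>   // or any other GL loader
    #include <algorithm>
    #include <vector>

    struct Group {
        GLuint   texture;       // this group's diffuse texture
        GLsizei  indexCount;    // number of indices in this group
        GLintptr indexOffset;   // byte offset into the shared index buffer
        float    ambient[3], diffuse[3], specular[3];
    };

    void DrawGroups(GLuint program, std::vector<Group>& groups)
    {
        // Draw groups sharing a texture back to back to minimize rebinds.
        std::sort(groups.begin(), groups.end(),
                  [](const Group& a, const Group& b) { return a.texture < b.texture; });

        glUseProgram(program);
        GLuint bound = 0;
        for (const Group& g : groups) {
            if (g.texture != bound) {
                glBindTexture(GL_TEXTURE_2D, g.texture);
                bound = g.texture;
            }
            glUniform3fv(glGetUniformLocation(program, "ambient_color"),  1, g.ambient);
            glUniform3fv(glGetUniformLocation(program, "diffuse_color"),  1, g.diffuse);
            glUniform3fv(glGetUniformLocation(program, "specular_color"), 1, g.specular);
            glDrawElements(GL_TRIANGLES, g.indexCount, GL_UNSIGNED_INT,
                           (const void*)g.indexOffset);
        }
    }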

Converting .obj per-face variables for OpenGL

As far as I know, OpenGL doesn't support per-face attributes [citation needed]. I have decided to use the material files of .obj files and have already successfully loaded them into my project. However, I thought materials applied per object group, and then I realized that the .obj format can actually assign materials per face. Therefore, a vertex group (or, let's say, a mesh) can have more than one material for specific faces of it.
I would be able to convert small values like the specular color into per-vertex attributes, but the whole material can vary from face to face: illumination model, ambient, specular, texture maps (diffuse, normal, etc.). It would be easy if the materials were per-mesh, because then I could load them as sub-meshes and attach the corresponding material to each.
How am I going to handle multiple materials for ONE mesh in which the materials are not uniformly distributed among its faces?
Firstly, what values do these per-face materials hold? Unless you are able to render them in a single pass, you may as well split them into separate meshes anyway. If you are using index buffers, just use several of them, one for each material. Then you can set uniforms / change shaders for each material type.
The way my renderer works (sketched in code below):
- iterate through meshes
  - bind the mesh's vertex array object
  - bind the mesh's uniform buffer object
  - iterate through the mesh's materials
    - use shaders, bind textures, set uniforms...
    - draw the material's index buffer with glDrawElements
Of course, you wouldn't want to change shaders for every material, so if you do need to use multiple shaders rather than just changing uniforms, then you will need to batch them together.
This isn't specific to OBJ/MTL; it applies to any mesh/material format.
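A minimal sketch of that loop, with illustrative type and field names (the VAO, element buffers and uniform buffers are assumed to have been created already):

    #include <GL/glew.h>   // or any other GL loader
    #include <vector>

    struct MaterialBatch {
        GLuint  program, texture, indexBuffer;
        GLsizei indexCount;
    };
    struct Mesh {
        GLuint vao, ubo;
        std::vector<MaterialBatch> materials;
    };

    void RenderMeshes(const std::vector<Mesh>& meshes)
    {
        for (const Mesh& mesh : meshes) {
            glBindVertexArray(mesh.vao);                      // mesh's VAO
            glBindBufferBase(GL_UNIFORM_BUFFER, 0, mesh.ubo); // mesh's UBO
            for (const MaterialBatch& mat : mesh.materials) {
                glUseProgram(mat.program);      // ideally batched by shader
                glBindTexture(GL_TEXTURE_2D, mat.texture);
                // ...set per-material uniforms here...
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mat.indexBuffer);
                glDrawElements(GL_TRIANGLES, mat.indexCount, GL_UNSIGNED_INT, nullptr);
            }
        }
    }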

Assimp + COLLADA models with bones = incorrect vertex positions

I'm using Assimp to load COLLADA models created and exported with Blender v2.7, but I noticed a funny issue. Whenever I apply (in Blender) transformations to a mesh in "Object mode" instead of "Edit mode", the resulting transformations apply not to the vertices I read from the Assimp importer data, but to the mTransformation matrix of the aiNode that contains the mesh.
That's not really a problem, since I can read the vertices of the mesh and then multiply them by the node's transformation (chained up through its parents) to obtain the vertices of the mesh in the correct position.
The problem arises when I try to do the same with meshes that have bones. I don't know why, but in this case the transformations I applied in "Object mode" are applied neither to the vertices I read directly from the mesh nor to the aiNode's transformation matrix.
Can someone explain to me how to get the correct positions of the vertices of a mesh with bones using Assimp and COLLADA models?
Maybe updating the collada importer/exporter can solve this.
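For the non-skinned case, a minimal sketch of the workaround described in the question: accumulate the node's transform up the parent chain and apply it to the vertices. (For skinned meshes the bone offset matrices, aiBone::mOffsetMatrix, enter the computation as well, which is likely where the discrepancy comes from.)

    #include <assimp/scene.h>

    // Global (model-space) transform of a node: walk up the parent chain.
    aiMatrix4x4 GlobalTransform(const aiNode* node)
    {
        aiMatrix4x4 m = node->mTransformation;
        for (const aiNode* p = node->mParent; p != nullptr; p = p->mParent)
            m = p->mTransformation * m;   // parents apply first
        return m;
    }

    // Usage: position a mesh's vertex in model space.
    // aiVector3D v = GlobalTransform(node) * mesh->mVertices[i];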

OBJ not exporting texture coordinate indices

Every time I try to export an OBJ from a 3D modeling program, it exports without indices for the texture coordinates. I don't want faces written as v//vn (position and normal only); I want v/vt/vn, with the texture coordinate indices included. I've tried Blender and Maya. Maya doesn't have an option for exporting these, but Blender lets you choose whether to write normals and texture coordinates. How can I get the texture coordinate indices in there?
Blender's site mentions that it can export UVs; did you check that option? See Blender's Wavefront OBJ export options.

GLSL GPU skinning with 3rd party shader

I have implemented GPU skinning for COLLADA files using Assimp and my own OpenGL renderer.
This is working fine.
Now, my application should allow 3rd-party vertex and fragment shaders to be specified, and they should work along with the skinning.
An example use case: the foreign shader bends space about the Y-axis and adds fog to the scene, etc.
Is this possible while using GPU skinning?
Possible? Yes. But not simple.
The least painful way to do this is to take advantage of the fact that glShaderSource accepts multiple strings. Make your "skinning shader" a function that returns the camera-space position of the vertex. You might even have multiple variations of this function: one version that returns just a position, one that returns a position and normal, and one that returns a position and a TBN (tangent-space basis) matrix.
The user-provided shader then simply calls this function to get the camera-space positions/normals. When compiling the shader, put your skinning shader string before their shader in the call to glShaderSource.
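A hedged sketch of that idea, with illustrative names (the skinning library string goes first so the user's main() can call it; bone count and attribute locations are assumptions):

    #include <GL/glew.h>   // or any other GL loader

    // The "skinning library" prepended to every user vertex shader.
    static const char* kSkinningLib = R"glsl(
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in ivec4 aBoneIds;
    layout(location = 2) in vec4 aBoneWeights;
    uniform mat4 uView, uModel;
    uniform mat4 uBones[64];

    vec4 skinnedPositionCameraSpace() {
        mat4 skin = aBoneWeights.x * uBones[aBoneIds.x]
                  + aBoneWeights.y * uBones[aBoneIds.y]
                  + aBoneWeights.z * uBones[aBoneIds.z]
                  + aBoneWeights.w * uBones[aBoneIds.w];
        return uView * uModel * skin * vec4(aPosition, 1.0);
    }
    )glsl";

    GLuint CompileUserVertexShader(const char* userSource)
    {
        // userSource contains main(), which calls skinnedPositionCameraSpace().
        const char* sources[2] = { kSkinningLib, userSource };
        GLuint shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(shader, 2, sources, nullptr);
        glCompileShader(shader);
        return shader;
    }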