OBJ not exporting with texture coordinate indices - C++

Every time I export an OBJ from a 3D modeling program, it exports without indices for the texture coordinates. I don't want faces written as v1//vn1 v2//vn2 v3//vn3; I want v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3. I've tried Blender and Maya. Maya doesn't have an option for exporting these, but Blender lets you choose whether to write normals and texture coordinates. How can I get the texture coordinate indices in there?

Blender's site mentions that it can export UVs; did you check that? See Blender's Wavefront OBJ export options.
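For reference, the difference the question describes is in the OBJ face records: without texture indices the middle slot is left empty, with them it is filled in. A minimal sketch (the mesh needs vt lines, and the exporter's UV option enabled, for the middle index to appear):

```
# face without UV indices (vertex//normal)
f 1//1 2//2 3//3

# face with UV indices (vertex/uv/normal)
f 1/1/1 2/2/2 3/3/3
```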

Related

How to use Assimp with the fixed-function pipeline?

I have learned how to import a .obj with Assimp using shaders.
However, is it possible to import a .obj with Assimp and render it with the fixed-function pipeline,
so that I can program against the OpenGL API more easily?
It shouldn't change significantly between the two; through Assimp you get your vertex positions, normals, and UV coordinates, and those are independent of OpenGL.
What will change is that you won't use a VAO/VBO; instead you have to send each vertex attribute "by hand"
with
glTexCoord2dv(your uv), glNormal3dv(your normal), glVertex3dv(your vertex)
for each of your faces and vertices.
Edit:
The Wavefront OBJ format supports only one set of UV coordinates per vertex, so all your textures will use the same UV map. If you have textures that use multiple UV maps, you should look into another format such as .fbx. But this has nothing to do with the fixed vs. programmable pipeline: once the file is imported by Assimp you are done, and all that changes are the functions used to send the data. Also, the material data in an OBJ file is very limited, so all you'll have is the name of the texture used and its channel. Since materials are heavily tied to your rendering pipeline, some information will always be lost.

3D editor with quad polygons. Assimp

I'm now working on an .obj loader for my 3D editor and plan to build it on Assimp. In my editor, meshes will have a quad wireframe drawn over the triangulated polygons, with the possibility of picking both triangles forming a quad. But I know Assimp rebuilds the data to be OpenGL-ready and doesn't let you use quads. My plan is to keep the data as it is in the .obj (quads) and not triangulate it. However, if I remove aiProcess_Triangulate, my render gets corrupted and doesn't draw correctly. What is the best way to keep the data as quads, without duplicating it, while still being able to interact with it and prepare it for rendering? Can Assimp provide this option, or is the only way to write the loader myself?
It depends on what you mean by load. GL_QUADS was removed in OpenGL 3.1+ and the renderer won't recognise it, but for scene building it's still useful. I can say that the only trouble I'd have with my own OBJ loader right now is sscanf and floats, because in some locales sscanf wants a comma rather than a dot as the decimal separator. https://rocketgit.com/user/bowler17/gl/source/tree/branch/wrench

Helix toolkit: Importing OBJ file gives odd UV results in diffuse texture map

The link below is a 7zip package containing the Blender scene, the exported OBJ file, the material, and the textures. When I load the OBJ in MeshLab it looks great: although the normal/bump map does not appear to work in MeshLab, the diffuse texture is perfect. When I load the OBJ in the Helix3D toolkit, the results are less than perfect; the UV mapping appears almost correct in places but is completely wrong in others.
I checked that all the UV coords are in the 0-1 range.
obj export 7zip package
Also, does normal/bump mapping work correctly in the Helix3D viewport?
I solved this; it's actually an issue with Blender. For some odd reason, creating a new scene in Blender, re-importing just the geometry, redoing the UVs, setting up a new material with new textures, and then exporting again makes the mesh appear fine.
Not sure what was so wrong with it if MeshLab could load it, but it's resolved.

Assimp + COLLADA models with bones = incorrect vertex positions

I'm using Assimp to load COLLADA models created and exported with Blender v2.7, but I noticed a strange issue. Whenever I apply transformations to a mesh in Blender's "Object mode" instead of "Edit mode", the resulting transformations apply not to the vertices I read from the Assimp importer data, but to the transformation matrix (mTransformation) of the aiNode that contains the mesh.
That's not really a problem, since I can read the vertices of the mesh and then multiply them by the aiNode's transformation matrix to obtain the vertices of the mesh in the correct position.
The problem arises whenever I try to do the same with meshes that have bones. I don't know why, but in this case the transformations applied in "Object mode" are applied neither to the vertices I read directly from the mesh nor to the aiNode's transformation matrix.
Can someone explain to me how to get the correct positions of the vertices of a mesh with bones using Assimp and COLLADA models?
Maybe updating the COLLADA importer/exporter will solve this.

How to animate a 3d model (mesh) in OpenGL?

I want to animate a model (for example, a human walking) in OpenGL. I know there is skeletal animation (with tricky math), but what about this:
Create a model in Blender
Create a skeleton for that model in Blender
Now do a walking animation in Blender with that model and skeleton
Take some "keyframes" of that animation and export every keyframe as a single model (for example, as an OBJ file)
Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
Use a VBO to draw that animated model in OpenGL (and come up with some clever way to change the current "keyframe"/model in the VBO, perhaps with glMapBufferRange)
OK, I know this idea is a bit of a hack, but is it worth looking into further?
What is a good concept to change the "keyFrame"/models in the VBO?
I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
The method you are referring to, animating between static keyframes, was very popular in early 3D games (Quake, etc.) and is now often called "blend shape" or "morph target" animation.
I would suggest implementing it slightly differently than you described. Instead of exporting a model for every possible frame of animation, export models only at keyframes and interpolate the vertex positions between them. This allows much smoother playback with significantly less memory usage.
There are various implementation options:
Create a dynamic/streaming VBO. Each frame, find the previous and next keyframe models, calculate the interpolated model between them, and upload it to the VBO.
Create a static VBO containing the mesh data from all frames, plus an additional "next position" or "displacement" attribute at each vertex. Use the range options on glDrawArrays to select the current frame, and interpolate in the vertex shader between the position and the next position.
You can actually set up Blender to export every frame of a scene as an OBJ. A custom tool could then compile those files into a nicer animation format.
Read More:
http://en.wikipedia.org/wiki/Morph_target_animation
http://en.wikipedia.org/wiki/MD2_(file_format)
http://tfc.duke.free.fr/coding/md2-specs-en.html