I am using a new 3D scanner which uses stereophotogrammetry: it captures images from different orientations and then stitches them together to create a 3D model. The model looks great. However, the textures in the models (as referenced in the .mtl file) point to several images (.jpg), one per capture orientation (every capture has its own texture, and each is referenced in the .mtl file). My application, though, can only read 3D scans (.obj files) that reference a single texture. Can you advise a way to combine the textures into a single triangle map or UV map?
The textures and the final models are attached for your reference.
Please advise if you need any more information.
I need help with rendering a .vox model in OpenGL.
The .VOX file format is described here.
Here is an example VOX file reader.
And here is where I run into the problem: how would I go about rendering a .vox model in OpenGL? I know how to render standard .obj models with textures using the Phong reflection model, but how do I handle voxel data? What kind of data should I pass to the shaders? Should I parse the data somehow, to get the index of each individual voxel to render? How should I create vertices based on voxel data (should I even do that)? Should I pass all the chunks, or is there a simple way to filter out those that won't be visible?
I tried searching for information on this topic, but came up empty. What I am trying to accomplish is something like MagicaVoxel Viewer, but much simpler, without all those customizable options and with only a single light source.
I'm not looking for a ready-made solution, but if anyone could even point me in the right direction, I would be very grateful.
After some more searching, I decided to try rendering the cubes in one of two ways:
1) Based on the voxel data, I will generate vertices and feed them to the pipeline (a rough sketch of this approach follows below).
2) Using a geometry shader, I'll emit vertices based on the indices of the voxels I feed to the pipeline, storing the entire model as a 3D texture.
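A minimal, untested sketch of the first approach, assuming a dense voxel grid held in a flat array; the VoxelGrid and Vertex structs and the grid layout below are illustrative assumptions, not anything prescribed by the .vox format:

```cpp
// Option 1: walk a dense voxel grid and emit cube faces only where a solid
// voxel borders an empty one.
#include <cstdint>
#include <vector>

struct VoxelGrid {
    int sx, sy, sz;                // grid dimensions
    std::vector<uint8_t> cells;    // 0 = empty, otherwise a palette index

    uint8_t at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= sx || y >= sy || z >= sz) return 0;
        return cells[(z * sy + y) * sx + x];
    }
};

struct Vertex { float pos[3]; float normal[3]; uint8_t palette; };

std::vector<Vertex> buildMesh(const VoxelGrid& g) {
    std::vector<Vertex> verts;

    // Emit one quad (two triangles) given its four corners, a normal and a colour.
    auto quad = [&](const float* a, const float* b, const float* c, const float* d,
                    float nx, float ny, float nz, uint8_t pal) {
        const float* corners[6] = { a, b, c, a, c, d };
        for (const float* p : corners)
            verts.push_back({{p[0], p[1], p[2]}, {nx, ny, nz}, pal});
    };

    for (int z = 0; z < g.sz; ++z)
        for (int y = 0; y < g.sy; ++y)
            for (int x = 0; x < g.sx; ++x) {
                uint8_t v = g.at(x, y, z);
                if (!v) continue;
                // +X face: only emitted when the +X neighbour is empty.
                if (!g.at(x + 1, y, z)) {
                    float a[3] = {x + 1.f, (float)y, (float)z};
                    float b[3] = {x + 1.f, y + 1.f,  (float)z};
                    float c[3] = {x + 1.f, y + 1.f,  z + 1.f};
                    float d[3] = {x + 1.f, (float)y, z + 1.f};
                    quad(a, b, c, d, 1.f, 0.f, 0.f, v);
                }
                // The -X, +Y, -Y, +Z and -Z faces follow the exact same pattern.
            }
    return verts;  // upload with glBufferData(GL_ARRAY_BUFFER, ...) and draw as GL_TRIANGLES
}
```

Once this works, the resulting vertex array is drawn exactly like an obj mesh, and greedy meshing or chunk-level culling can be layered on top to cut down the vertex count.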
So I've created a 3D model in Blender, exported it as an .obj, and imported it into C++/OpenGL. The model loads perfectly, although it has lost all of its colouring and texture; it's just a basic white model. Is there any way of fixing this, or can the model not be imported with its textures, so you have to redo them in OpenGL?
An .obj file does not contain the texture itself, only the texture coordinates per vertex.
You will need to load and bind the texture yourself, separately from loading the obj file.
Other file formats can have a texture embedded, but loading models from files is not within the scope of the OpenGL API.
It's true that obj files don't contain textures or material data, but they're commonly paired with mtl files. The obj references the mtl file to use with the mtllib directive and chooses materials for subsequent faces with usemtl.
See: http://en.wikipedia.org/wiki/Wavefront_.obj_file#Material_template_library
However, the mtl file only contains colours and texture file names, not the actual texture data. You'll have to look into loading images separately and use that to load the texture file referenced in the mtl. Then create the OpenGL texture and draw your object with it bound, together with the texture coordinates from the obj file.
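As a rough sketch of that step, assuming stb_image for reading the image file (one common choice, not something the obj/mtl format or OpenGL prescribes), creating and binding the texture could look roughly like this:

```cpp
// Read the image file named by the mtl's map_Kd entry and turn it into an
// OpenGL texture.  stb_image and the glad loader are assumptions, not requirements.
#include <glad/glad.h>          // or whichever GL loader you already use
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

GLuint loadTexture(const char* path) {
    int w, h, channels;
    stbi_set_flip_vertically_on_load(1);                            // obj UVs usually expect this
    unsigned char* pixels = stbi_load(path, &w, &h, &channels, 4);  // force RGBA
    if (!pixels) return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);

    stbi_image_free(pixels);
    return tex;
}

// Before drawing the object that uses this material:
//   glBindTexture(GL_TEXTURE_2D, tex);
//   glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```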
In Blender, make sure Write Materials is checked when exporting the obj. Also check that the relative paths to the textures are appropriate (just open the mtl file in a text editor). As a side note, Include Normals is, annoyingly, unchecked by default.
So, your obj file contains:
Vertex positions and possibly normals and texture coordinates too.
Vertex connectivity, or faces, which may actually be n-gons that you need to triangulate.
References to the material file, if there is one.
The mtl file contains:
Many material definitions
Each identified by a name
Containing colour for ambient, diffuse, specular etc.
Also containing texture map references (file names), which could be png, jpg, whatever (a small parsing sketch follows after this list).
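Purely for illustration, a minimal reader for the handful of mtl directives mentioned above could look like the sketch below; real files contain many more keys (Ka, Ks, Ns, d, ...) and no edge cases are handled here:

```cpp
#include <fstream>
#include <map>
#include <sstream>
#include <string>

struct Material {
    float kd[3] = {1.f, 1.f, 1.f};   // diffuse colour
    std::string diffuseMap;          // texture file name, if any
};

std::map<std::string, Material> loadMtl(const std::string& path) {
    std::map<std::string, Material> materials;
    std::ifstream in(path);
    std::string line, current;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string key;
        ss >> key;
        if (key == "newmtl") {
            ss >> current;                       // start a new named material
            materials[current] = Material{};
        } else if (key == "Kd" && !current.empty()) {
            Material& m = materials[current];
            ss >> m.kd[0] >> m.kd[1] >> m.kd[2];
        } else if (key == "map_Kd" && !current.empty()) {
            ss >> materials[current].diffuseMap; // often a path relative to the mtl
        }
    }
    return materials;
}
```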
I want to animate a model (for example, a human walking) in OpenGL. I know there is stuff like skeletal animation (with tricky math), but what about this:
Create a model in Blender
Create a skeleton for that model in Blender
Now do a walking animation in Blender with that model and skeleton
Take some "keyFrames" of that animation and export every "keyFrame" as a single model
(for example as obj file)
Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
Use a VBO to draw that animated model in OpenGL (and come up with some tricky ideas for how to change the current "keyFrame"/model in the VBO ... perhaps something with glMapBufferRange)
OK, I know this idea is only a rough sketch, but is it worth looking into further?
What is a good approach for changing the "keyFrame"/models in the VBO?
I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
The method you are referring to, animating between static keyframes, was very popular in early 3D games (Quake, etc.) and is now often referred to as "blend shape" or "morph target" animation.
I would suggest implementing it slightly differently than you described. Instead of exporting a model for every possible frame of the animation, export models only at "keyframes" and interpolate the vertex positions between them. This will allow much smoother playback with significantly less memory usage.
There are various implementation options:
Create a dynamic/streaming VBO. Each frame, find the previous and next keyframe models, calculate the interpolated model between them, and upload it to the VBO (a sketch of this option follows after the list).
Create a static VBO containing the mesh data from all frames and an additional "next position" or "displacement" attribute at each vertex. Use the range options on glDrawArrays to select the current frame, and interpolate in the vertex shader between the position and the next position.
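A minimal sketch of the first option, assuming every keyframe is stored as a flat array of vertex positions with identical counts and ordering; the Keyframe alias and the helper name are invented for this example:

```cpp
// Each frame, linearly interpolate the vertex positions of the two surrounding
// keyframes on the CPU and upload the result into a dynamic VBO.
#include <glad/glad.h>   // or whichever GL loader you already use
#include <cstddef>
#include <vector>

// Every keyframe must have the same vertex count and ordering.
using Keyframe = std::vector<float>;   // packed x,y,z triplets

void uploadBlendedFrame(GLuint vbo, const Keyframe& a, const Keyframe& b, float t) {
    std::vector<float> blended(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        blended[i] = (1.0f - t) * a[i] + t * b[i];         // per-component lerp

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Orphan and refill the buffer; glMapBufferRange with
    // GL_MAP_INVALIDATE_BUFFER_BIT is a common alternative, as the question hints.
    glBufferData(GL_ARRAY_BUFFER, blended.size() * sizeof(float),
                 blended.data(), GL_DYNAMIC_DRAW);
}

// Per frame: pick the keyframes around the current animation time, compute
// t in [0,1], call uploadBlendedFrame, then glDrawArrays(GL_TRIANGLES, ...).
```

The second option trades this per-frame CPU work and upload for a larger static buffer and a small amount of extra work in the vertex shader.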
You can actually set up Blender to export every frame of a scene as an OBJ. A custom tool could then compile these files into a nice animation format.
Read More:
http://en.wikipedia.org/wiki/Morph_target_animation
http://en.wikipedia.org/wiki/MD2_(file_format)
http://tfc.duke.free.fr/coding/md2-specs-en.html
I have some model in Blender. I'd like to:
Connect a few different textures into one and save it as a bitmap
Make a UV mapping for these connected textures
I need to solve this problem for textured models in OpenGL. I have a data structure that lets me bind one texture to one model, so I'd like to have one texture per model. I'm aware of the fact that I could use a GL_TEXTURE_xD_ARRAY texture, but I don't want to complicate my project. I know how to do simple UV mapping in Blender.
My questions:
Can I do phases 1 and 2 exclusively in Blender?
Is the Blender Bake technique what I'm searching for?
Are there tutorials showing how to do it (for this one specific problem)?
Maybe somebody can advise me of another Blender technique (or an OpenGL solution)
Connect a few different textures into one and save it as a bitmap
Make a UV mapping for these connected textures
You mean generating a texture atlas?
Can I do phases 1 and 2 exclusively in Blender?
No. But it would surely be a well-received add-on.
Is the Blender Bake technique what I'm searching for?
No. Blender Bake generates texture contents using the rendering process. For example, you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame of a flythrough, the texture is used as the source for the illumination terms (it acts like a cache). Another application is generating textures for a game engine from Blender's procedural materials.
Maybe somebody can advise me of another Blender technique (or an OpenGL solution)
I think a texture array would really be the best solution, as it also won't cause problems with wrapped/repeated textures (a small setup sketch follows below).
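For reference, a minimal GL_TEXTURE_2D_ARRAY setup might look like the sketch below; it assumes all images share the same size and have already been decoded to raw RGBA data (image loading is out of scope), and the helper name is invented for this example:

```cpp
#include <glad/glad.h>   // or whichever GL loader you already use
#include <cstddef>
#include <vector>

GLuint makeTextureArray(int w, int h, const std::vector<const unsigned char*>& layers) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    // Allocate storage for all layers at once.
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, w, h, (GLsizei)layers.size(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    // Upload each image into its own layer.
    for (std::size_t i = 0; i < layers.size(); ++i)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, (GLint)i,
                        w, h, 1, GL_RGBA, GL_UNSIGNED_BYTE, layers[i]);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

// In GLSL, sample with:  uniform sampler2DArray tex;  texture(tex, vec3(uv, layer));
```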
Another possibility is to use projection painting. An object in Blender can have multiple UV maps; if importing it doesn't create each UV map, you may need to align each one by hand. Then you create a new UV map that lays the entire model out onto one image.
In Texture Paint mode you can use projection painting to use the material from one UV map as the brush for painting onto the new image.
What is the most efficient way to identify the vertices that are visible from a particular viewpoint?
I have a scene composed of several 3D models. I want to attach an identifier (ModelID, VertexID) to each vertex, then generate 2D images from various viewpoints and, for each image, produce a list of the visible vertices' identifiers (essentially, this is for an image-processing application).
Initially I thought of taking the dot product between a vertex normal and the camera view vector to figure out whether the vertex is facing the camera or not; however, if the model is occluded by another object, this test would not work.
Thanks in advance
Disable all lighting/texturing
Render your geometry (GL_TRIANGLES) to populate Z-buffer
Render your geometry again (GL_POINTS), selecting a different RGB color for each vertex, which maps to your model/vertex IDs
Read back the framebuffer and scan for the colors you used earlier, mapping them back to your model/vertex IDs.
Not very fast, but it should work; a sketch of the ID encoding and readback follows below.
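A minimal sketch of that recipe, assuming each (model, vertex) pair is packed into a 24-bit ID; the packing scheme and helper names are just one possible choice, and IDs are assumed to start at 1 so a black clear colour reads back as 0 ("background"):

```cpp
// After the depth pass (GL_TRIANGLES) and the colour-ID pass (GL_POINTS),
// read back the framebuffer and collect the IDs that survived the depth test.
#include <glad/glad.h>   // or whichever GL loader you already use
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

inline void idToColor(uint32_t id, unsigned char rgb[3]) {
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >> 8)  & 0xFF;
    rgb[2] =  id        & 0xFF;
}

inline uint32_t colorToId(const unsigned char* rgb) {
    return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | uint32_t(rgb[2]);
}

std::unordered_set<uint32_t> readVisibleIds(int width, int height) {
    std::vector<unsigned char> pixels(std::size_t(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // avoid row padding for tightly packed RGB
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::unordered_set<uint32_t> visible;
    for (std::size_t i = 0; i < pixels.size(); i += 3)
        if (uint32_t id = colorToId(&pixels[i]))
            visible.insert(id);            // 0 is the clear colour / background
    return visible;
}
```

Remember to disable anti-aliasing, dithering and blending for the ID pass, otherwise the read-back colours will no longer map cleanly to IDs.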