How to render a .vox voxel model in OpenGL? (C++)

I need help with rendering a .vox model in OpenGL.
The .VOX file format is described here.
Here is an example VOX file reader.
And here is where I run into the problem: how would I go about rendering a .vox model in OpenGL? I know how to render standard .obj models with textures using the Phong reflection model, but how do I handle voxel data? What kind of data should I pass to the shaders? Should I parse the data to get the position of each individual voxel to render? How should I create vertices from the voxel data (should I even do that)? Should I pass all the chunks, or is there a simple way to cull those that won't be visible?
I tried searching for information on this topic, but came up empty. What I am trying to accomplish is something like MagicaVoxel Viewer, but much simpler, without all those customizable options and with only a single light source.
I'm not trying to look for a ready solution, but if anyone could even point me in the right direction, I would be very grateful.

After some more searching, I decided to try rendering the cubes in two ways:
1) Based on the voxel data, I will generate cube vertices on the CPU and feed them to the pipeline (a rough sketch of this is below).
2) Using a geometry shader, I will emit cube vertices from the voxel indices I feed to the pipeline, storing the entire model as a 3D texture.
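To make option 1 concrete, here is a minimal, untested sketch of the kind of mesh generation I have in mind. It assumes the .vox chunk has already been parsed into a dense occupancy grid; the grid layout and the solid() helper are placeholders of my own, not part of the .vox format:

```cpp
#include <cstdint>
#include <vector>

// One vertex of the generated cube mesh: position + normal,
// ready to be uploaded into a VBO as interleaved data.
struct Vertex { float px, py, pz, nx, ny, nz; };

// Hypothetical dense occupancy grid parsed from the .vox chunk:
// returns true if a voxel exists at cell (x, y, z).
bool solid(const std::vector<uint8_t>& grid, int sx, int sy, int sz,
           int x, int y, int z)
{
    if (x < 0 || y < 0 || z < 0 || x >= sx || y >= sy || z >= sz)
        return false;                       // outside the model is empty
    return grid[(z * sy + y) * sx + x] != 0;
}

// Emit the two triangles of one cube face. 'corners' holds the four
// face corners in counter-clockwise order as seen from outside.
void emitFace(std::vector<Vertex>& out, const float corners[4][3],
              float nx, float ny, float nz)
{
    static const int tri[6] = { 0, 1, 2, 0, 2, 3 };
    for (int i : tri)
        out.push_back({ corners[i][0], corners[i][1], corners[i][2],
                        nx, ny, nz });
}

// Build a triangle mesh containing only the voxel faces exposed to
// air; faces between two solid voxels are skipped entirely.
std::vector<Vertex> buildMesh(const std::vector<uint8_t>& grid,
                              int sx, int sy, int sz)
{
    std::vector<Vertex> mesh;
    for (int z = 0; z < sz; ++z)
    for (int y = 0; y < sy; ++y)
    for (int x = 0; x < sx; ++x) {
        if (!solid(grid, sx, sy, sz, x, y, z)) continue;
        float X = float(x), Y = float(y), Z = float(z);
        // +X face: emit only if the +X neighbour is empty.
        if (!solid(grid, sx, sy, sz, x + 1, y, z)) {
            const float c[4][3] = { {X+1,Y,Z}, {X+1,Y+1,Z},
                                    {X+1,Y+1,Z+1}, {X+1,Y,Z+1} };
            emitFace(mesh, c, 1, 0, 0);
        }
        // ... repeat the same test for the -X, +Y, -Y, +Z, -Z faces.
    }
    return mesh;
}
```

Skipping faces whose neighbour is solid already removes every interior face, which should cut the vertex count drastically for dense models.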

Related

How does a 3D modeler (such as Blender) pick the vertex that you click on?

If you click in the model viewer of a 3D modeler (such as Blender or Max), it will select the vertex that the mouse was over or near. How does it know which one to pick efficiently? How can it implement a lasso tool or circle tool efficiently? Does it use screen-space coordinates for the vertices, or does it use simple ray tracing?
I am trying to make a simple 3D modeling tool (for fun) and I can't imagine how a circle tool would work. How can it pick the nearest vertex to the mouse coordinates without a sort?
There are a lot of ways to approach this problem.
If you have only several thousand vertices, it can be very fast to simply iterate over all of them.
If you are just clicking on a vertex (or other object) in one of the views, then you can render the scene into another buffer using a different "color" for each object in the scene. To figure out which object you clicked on, you just have to read the color from that pixel.
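A minimal sketch of that picking pass, assuming the scene has already been re-rendered with each object in a unique flat color (the ID encoding in the comment is just one possible convention):

```cpp
#include <GL/gl.h>

// After rendering each object with a unique flat color (object i drawn
// with, say, r = i & 0xFF, g = (i >> 8) & 0xFF, b = (i >> 16) & 0xFF),
// read back the single pixel under the mouse and decode the ID.
// 'mouseX'/'mouseY' are window coordinates; OpenGL's origin is the
// bottom-left corner, so the y coordinate must be flipped.
int pickObject(int mouseX, int mouseY, int windowHeight)
{
    unsigned char pixel[3];
    glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}
```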
In other circumstances, you can store the vertex data in a spatial index such as an octree.
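As a simpler relative of the octree, here is a rough sketch of a uniform screen-space grid, which is enough to answer the "nearest vertex without a sort" part of the question. All names here are illustrative; the idea is just to bucket projected vertices by cell so a query only touches the cells under the cursor:

```cpp
#include <unordered_map>
#include <vector>

// A uniform grid over the screen-space positions of all vertices.
// To pick the nearest vertex we only visit the few cells that overlap
// the search radius, instead of sorting every vertex by distance.
struct ScreenGrid {
    struct Entry { int index; float x, y; };
    float cellSize = 32.0f;   // pixels per cell; tune to vertex density
    std::unordered_map<long long, std::vector<Entry>> cells;

    static long long key(int cx, int cy) {
        return (static_cast<long long>(cx) << 32) ^ static_cast<unsigned>(cy);
    }

    // Call once per vertex after projecting it to window coordinates.
    void insert(int index, float x, float y) {
        cells[key(int(x / cellSize), int(y / cellSize))]
            .push_back({ index, x, y });
    }

    // Nearest vertex to the mouse within 'radius' pixels, or -1.
    int nearest(float mx, float my, float radius) const {
        int best = -1;
        float bestD2 = radius * radius;
        for (int cy = int((my - radius) / cellSize);
             cy <= int((my + radius) / cellSize); ++cy)
        for (int cx = int((mx - radius) / cellSize);
             cx <= int((mx + radius) / cellSize); ++cx) {
            auto it = cells.find(key(cx, cy));
            if (it == cells.end()) continue;
            for (const Entry& e : it->second) {
                float dx = e.x - mx, dy = e.y - my;
                if (dx * dx + dy * dy < bestD2) {
                    bestD2 = dx * dx + dy * dy;
                    best = e.index;
                }
            }
        }
        return best;
    }
};
```

A lasso or circle tool works the same way: visit only the cells the selection shape overlaps and test the vertices stored in them.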
Remember: Blender is open-source, so you can just read the source code if you want to find out how Blender does it.

How to animate a 3d model (mesh) in OpenGL?

I want to animate a model (for example, a human walking) in OpenGL. I know there is stuff like skeletal animation (with tricky math), but what about this:
Create a model in Blender
Create a skeleton for that model in Blender
Now do a walking animation in Blender with that model and skeleton
Take some "keyFrames" of that animation and export every "keyFrame" as a single model
(for example as obj file)
Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
Use a VBO to draw that animated model in OpenGL (and get some tricky ideas how to change the current "keyFrame"/model in the VBO ... perhaps something with glMapBufferRange
OK, I know this idea is only a quick-and-dirty approach, but is it worth looking into further?
What is a good concept to change the "keyFrame"/models in the VBO?
I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
The method you are referring to, animating between static keyframes, was very popular in early 3D games (Quake, etc.) and is now often referred to as "blend shape" or "morph target" animation.
I would suggest implementing it slightly differently than you described: instead of exporting a model for every possible frame of animation, export models only at "keyframes" and interpolate the vertex positions between them. This will allow much smoother playback with significantly less memory usage.
There are various implementation options:
1) Create a dynamic/streaming VBO. Each frame, find the previous and next keyframe models, calculate the interpolated model between them, and upload it to the VBO.
2) Create a static VBO containing the mesh data from all frames plus an additional "next position" or "displacement" attribute at each vertex. Use the range options on glDrawArrays to select the current frame, and interpolate between position and next position in the vertex shader (sketched below).
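Here is a rough sketch of the shader half of option 2. The attribute and uniform names are my own placeholders; the only real requirement is that each vertex carries its position in the current keyframe and in the next one:

```cpp
// Vertex shader for morph-target interpolation, written as a C++ raw
// string. Everything here is illustrative, not an established API.
const char* kMorphVertexShader = R"(
    #version 120
    attribute vec3 aPosition;      // vertex position in keyframe k
    attribute vec3 aNextPosition;  // vertex position in keyframe k+1
    uniform float uBlend;          // 0.0 at keyframe k, 1.0 at k+1
    uniform mat4  uMvp;            // combined modelview-projection
    void main() {
        vec3 p = mix(aPosition, aNextPosition, uBlend);
        gl_Position = uMvp * vec4(p, 1.0);
    }
)";

// Per frame, on the CPU (keyframes assumed evenly spaced keyDt apart):
//   int   k     = int(time / keyDt) % numKeyframes;
//   float blend = fmod(time, keyDt) / keyDt;
//   glUniform1f(glGetUniformLocation(program, "uBlend"), blend);
// then point the two attribute arrays at keyframe k's and k+1's
// ranges inside the big static VBO before issuing glDrawArrays.
```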
You can actually set up Blender to export every frame of a scene as an OBJ. A custom tool could then compile these files into a nice animation format.
Read More:
http://en.wikipedia.org/wiki/Morph_target_animation
http://en.wikipedia.org/wiki/MD2_(file_format)
http://tfc.duke.free.fr/coding/md2-specs-en.html

Merging a Sphere and Cylinder

I want to render a spring using spheres and cylinders. Each cylinder is capped with a sphere at either end, and all the cylinders are placed along the spring's centre line. I have this working and the rendering looks good. I am presently doing it with gluSphere and gluCylinder.
The performance, however, is poor: rendering is very slow. So I want to know if the following are possible:
Is it possible to combine the surfaces of the spheres and cylinders and render only the outer hull, not the inner covered parts of the spheres?
I also read about VBOs. Is it possible to use gluSphere and gluCylinder with VBOs?
I cannot use a display list because the properties of the spring keep changing!
Can anyone suggest a better approach?
You might want to reconsider the way you are drawing springs. In my opinion there are two valid approaches.
Load a spring model using Assimp or some other model loading software that is easily integrated with OpenGL. Free 3D models can be found at Turbo Squid or through Google's 3D warehouse (while in Google Sketch-Up).
Draw the object purely in OpenGL. The idiomatic way to draw this kind of object with the post-fixed-function (shader-based) OpenGL pipeline is by drawing volumetric 3D lines. The more lines you draw, the more curvature you can give your spring, at the expense of rendering time.
For drawing springs I would recommend defining a set of points (with adjacency) along the shape of your spring and drawing them with the GL_LINE_STRIP_ADJACENCY primitive type. Then use a geometry shader to expand this pixel-thin line strip into a set of volumetric 3D lines composed of triangle strips.
This blog post gives an excellent description of the technique.
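For the CPU side, a minimal sketch: generate the spring's centre line as a helix and draw it with adjacency so that a geometry shader (not shown here) can extrude each segment into a volumetric line. The helix parameterization is standard; the function and parameter names are illustrative:

```cpp
#include <cmath>
#include <vector>
#include <GL/gl.h>   // assumes a GL 3.2+ context (geometry shaders)

// Generate the centre line of a spring as a helix:
//   x = r*cos(t), y = r*sin(t), z = pitch * t / (2*pi)
// 'coils' full turns, sampled with 'samplesPerCoil' points each.
std::vector<float> springCenterline(float r, float pitch,
                                    int coils, int samplesPerCoil)
{
    std::vector<float> pts;
    const float twoPi = 6.28318530718f;
    int n = coils * samplesPerCoil;
    for (int i = 0; i <= n; ++i) {
        float t = twoPi * float(i) / float(samplesPerCoil);
        pts.push_back(r * std::cos(t));
        pts.push_back(r * std::sin(t));
        pts.push_back(pitch * t / twoPi);   // z rises 'pitch' per coil
    }
    return pts;
}

// Upload once, then draw with adjacency so the geometry shader sees
// the previous and next point of every segment it extrudes:
//   glBufferData(GL_ARRAY_BUFFER, pts.size() * sizeof(float),
//                pts.data(), GL_STATIC_DRAW);
//   glDrawArrays(GL_LINE_STRIP_ADJACENCY, 0, (GLsizei)(pts.size() / 3));
```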
Your best bet would probably be to take a quick tutorial in any 3D modeling software (Blender comes to mind) and then model your spring in its rest pose using CSG operations.
This approach not only rids you of redundant primitives but also makes it very easy to use your model with VBOs. All you have to do is parse Blender's output file (easiest would be .obj) into arrays of vertex data (positions, normals, possibly texture coordinates).
Lastly, to "animate" your spring, you can use the vertex shader. You just have to pass it another uniform describing how much the spring is deformed and do the rest of the transformation there.

What are the required steps of modeling an irregular 3D polyhedron

What are the basic steps of modeling an irregular 3D polyhedron (for example, a pentagonal hexecontahedron) with GLUT?
What I understand so far is that I need to determine the vertices of the object. How?
And what comes next once I have the vertex list? How do I use the glVertex(..) function to draw the polyhedron?
Your best bet would be to make the model in a 3D modeling program, unless you want to figure out all the vertices by hand, which would be a pain. Take the vertex data from the saved file and put it into an array: either write code to read it from the file at runtime, or just bake it into a static array in a header.
Then you can use vertex arrays to render the model in one call: http://www.opengl.org/sdk/docs/tutorials/CodeColony/vertexarrays.php
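For illustration, a small fixed-function vertex-array sketch in the style of that tutorial, using a tetrahedron as a stand-in for the real polyhedron's exported data (the face winding is assumed, so adjust it if back-face culling eats your triangles):

```cpp
#include <GL/gl.h>

// Vertex and index data exported from a modeling tool (here just a
// tetrahedron as a placeholder for the real polyhedron's arrays).
static const GLfloat vertices[] = {
     1.f,  1.f,  1.f,
    -1.f, -1.f,  1.f,
    -1.f,  1.f, -1.f,
     1.f, -1.f, -1.f,
};
static const GLubyte indices[] = {
    0, 1, 2,   0, 3, 1,   0, 2, 3,   1, 3, 2,
};

void drawPolyhedron()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    // One call renders the whole mesh from the index list.
    glDrawElements(GL_TRIANGLES, sizeof(indices) / sizeof(indices[0]),
                   GL_UNSIGNED_BYTE, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```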

Rendered 3D scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, for example, a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
1) Render your scene using an orthographic view so that all of it fits on screen at once.
2) Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
3) Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point in a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast; a sketch of a simplified variant follows below.
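Here is a minimal sketch of an even simpler variant that skips the g-buffer: render normally, read back the default colour and depth buffers, and unproject every covered pixel with gluUnProject. It assumes the fixed-function matrix stacks are in use, so the matrices can be queried with glGetDoublev:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>
#include <vector>

struct Point { double x, y, z; float r, g, b, a; };

// Read the colour and depth buffers back and unproject every covered
// pixel into world space. Assumes the scene has just been rendered and
// the depth buffer was cleared to 1.0 (so 1.0 means "background").
std::vector<Point> capturePointCloud(int width, int height)
{
    std::vector<float> depth(width * height);
    std::vector<unsigned char> color(width * height * 4);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                 depth.data());
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                 color.data());

    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    std::vector<Point> cloud;
    for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x) {
        float d = depth[y * width + x];
        if (d >= 1.0f) continue;            // no geometry at this pixel
        Point p;
        gluUnProject(x + 0.5, y + 0.5, d, model, proj, viewport,
                     &p.x, &p.y, &p.z);
        const unsigned char* c = &color[(y * width + x) * 4];
        p.r = c[0] / 255.f; p.g = c[1] / 255.f;
        p.b = c[2] / 255.f; p.a = c[3] / 255.f;
        cloud.push_back(p);
    }
    return cloud;
}
```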
Going further with this:
1) Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
2) Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
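For example, with a math library such as GLM (the function name here is illustrative), that boils down to one matrix-vector multiply per vertex:

```cpp
#include <glm/glm.hpp>
#include <vector>

// Multiply every vertex by the same modelview matrix the renderer
// uses; the result is the vertex positions in eye (camera) space.
std::vector<glm::vec3> transformVertices(const std::vector<glm::vec3>& in,
                                         const glm::mat4& modelView)
{
    std::vector<glm::vec3> out;
    out.reserve(in.size());
    for (const glm::vec3& v : in)
        out.push_back(glm::vec3(modelView * glm::vec4(v, 1.0f)));
    return out;
}
```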
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
Edit:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces the vertices of the triangles forming the polygons and 3D objects, and sends them to OpenGL, which then rasterizes these objects (i.e., converts them to pixels) to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing those vertices in the first place!