What are the basic steps of modeling an irregular 3d polyhedron (example "pentagonal hexecontahedron") with GLUT?
What I understand so far is that I need to determine the vertices of the object. How?
What's next when I have the vertex list? How do I use the glVertex(..) function to draw the polyhedron?
Your best bet would be to make the model in a 3D modeling program, unless you want to figure out all the vertices by hand, which would be a pain. Take the vertex data from the saved file and put it into an array: either write code to read it from the file at runtime, or just bake it into a static array in a header.
Then you can use vertex arrays to render the model in one swoop: http://www.opengl.org/sdk/docs/tutorials/CodeColony/vertexarrays.php
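For illustration, here is a minimal sketch of that approach with legacy vertex arrays; the single triangle is just a stand-in for the data you would export from your modeling program:

```cpp
#include <GL/glut.h>

static const GLfloat vertices[] = {
    // x, y, z per vertex -- replace with the data exported from your modeler
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};
static const GLuint indices[] = { 0, 1, 2 };

void drawModel(void) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);  // 3 floats per vertex, tightly packed
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);  // whole model in one call
    glDisableClientState(GL_VERTEX_ARRAY);
}
```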
I need help with rendering a .vox model in OpenGL.
The .VOX file format is described here.
Here is an example VOX file reader.
And here is where I run into the problem: how would I go about rendering a .vox model in OpenGL? I know how to render standard .obj models with textures using the Phong reflection model, but how do I handle voxel data? What kind of data should I pass to the shaders? Should I parse the data somehow, to get the index of each individual voxel to render? How should I create vertices based on voxel data (should I even do that)? Should I pass all the chunks, or is there a simple way to filter out those that won't be visible?
I tried searching for information on this topic, but came up empty. What I am trying to accomplish is something like MagicaVoxel Viewer, but much simpler, without all those customizable options and with only a single light source.
I'm not trying to look for a ready solution, but if anyone could even point me in the right direction, I would be very grateful.
After some more searching I decided to render the cubes in two ways:
1) Based on voxel data, I will generate vertices and feed them to the pipeline.
2) Using a geometry shader, I'll emit vertices based on the indices of the voxels I feed to the pipeline, storing the entire model as a 3D texture.
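For the first approach, something like this CPU-side sketch could work. Note that voxel(x, y, z) is a hypothetical lookup you would back with your parsed VOX data (returning false outside the grid); faces shared by two solid voxels are skipped, so hidden geometry never reaches the pipeline:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

bool voxel(int x, int y, int z);  // assumption: implemented by your VOX loader

// Append one unit quad (two triangles) of the cube at (x,y,z) whose normal
// points along `axis` (0=X, 1=Y, 2=Z) in direction `sign`.
// Winding is not handled here; flip it per `sign` if you cull back faces.
void emitFace(std::vector<Vec3>& out, int x, int y, int z, int axis, int sign) {
    static const int quad[6][2] = { {0,0}, {1,0}, {1,1}, {0,0}, {1,1}, {0,1} };
    for (auto& uv : quad) {
        int c[3] = { x, y, z };
        c[axis] += (sign > 0) ? 1 : 0;        // face sits on the near or far side
        c[(axis + 1) % 3] += uv[0];
        c[(axis + 2) % 3] += uv[1];
        out.push_back({ float(c[0]), float(c[1]), float(c[2]) });
    }
}

std::vector<Vec3> buildMesh(int nx, int ny, int nz) {
    std::vector<Vec3> verts;
    for (int x = 0; x < nx; ++x)
        for (int y = 0; y < ny; ++y)
            for (int z = 0; z < nz; ++z) {
                if (!voxel(x, y, z)) continue;
                // Emit a face only where the neighbour is empty.
                if (!voxel(x - 1, y, z)) emitFace(verts, x, y, z, 0, -1);
                if (!voxel(x + 1, y, z)) emitFace(verts, x, y, z, 0, +1);
                if (!voxel(x, y - 1, z)) emitFace(verts, x, y, z, 1, -1);
                if (!voxel(x, y + 1, z)) emitFace(verts, x, y, z, 1, +1);
                if (!voxel(x, y, z - 1)) emitFace(verts, x, y, z, 2, -1);
                if (!voxel(x, y, z + 1)) emitFace(verts, x, y, z, 2, +1);
            }
    return verts;
}
```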
I am using PyOpenGL with PyGame (although I am also trying to port the game to C++), and I would like to draw some low-poly trees in my game, something like the one in the picture below.
But at the moment I only know how to draw simple flat surfaces and put textures on them (by creating an array of x, y, z coordinates and texture coordinates and using glDrawArrays). Is there a way to make something like the tree below using only OpenGL (would it involve 3D texture coordinates?), or do I need an external graphics engine?
If I do need an external tool or engine, does anyone have any recommendations? And am I right that I would then need to load the vertices into an array in Python and use that in glDrawElements?
Past a certain point, you cannot handle complex objects by just defining 3D vertices in OpenGL. Instead you need a model file that you can include in your project. Most object models come with their texture files and texture coordinates included, so you don't need to worry about texturing them.
For loading objects into your scene, I suggest using the Assimp library. Once your environment is set up, the only thing left to do is search for free low-poly tree models. Here is a webpage where you can find free low-poly trees: http://www.loopix-project.com/
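As a minimal sketch of Assimp's C API ("tree.obj" is a placeholder path for whatever model you download; on the PyOpenGL side the pyassimp binding follows the same flow):

```cpp
#include <assimp/cimport.h>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

int main(void) {
    const aiScene* scene = aiImportFile("tree.obj",
        aiProcess_Triangulate | aiProcess_GenSmoothNormals);
    if (!scene) {
        std::fprintf(stderr, "Import failed: %s\n", aiGetErrorString());
        return 1;
    }
    // Each mesh exposes plain arrays you can copy straight into your
    // vertex arrays or VBOs.
    for (unsigned i = 0; i < scene->mNumMeshes; ++i) {
        const aiMesh* mesh = scene->mMeshes[i];
        std::printf("mesh %u: %u vertices, %u triangles\n",
                    i, mesh->mNumVertices, mesh->mNumFaces);
        // mesh->mVertices, mesh->mNormals and mesh->mTextureCoords[0] hold
        // positions, normals and UVs; mesh->mFaces holds the indices.
    }
    aiReleaseImport(scene);
    return 0;
}
```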
If you click in the model viewer of a 3D modeler (such as Blender or Max), it will select the vertex that the mouse was over or near. How does it know which one to pick efficiently? How can it implement a lasso tool or circle tool efficiently? Does it use screen-space coordinates for the vertices, or simple ray tracing?
I am trying to make a simple 3D modeling tool (for fun) and I can't imagine how a circle tool would work. How can it pick the nearest vertex to the mouse coordinates without a sort?
There are a lot of ways to approach this problem.
If you only have a few thousand vertices, it can be very fast to just iterate over all of them.
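For example, a single linear pass keeps the nearest candidate without any sort. This sketch assumes the vertices have already been projected to screen space (e.g. with gluProject):

```cpp
#include <vector>
#include <cfloat>

struct Pt { float x, y; };  // vertex position projected to screen space

// Single linear scan: track the best candidate so far; no sort required.
int nearestVertex(const std::vector<Pt>& verts, float mouseX, float mouseY) {
    int best = -1;
    float bestD2 = FLT_MAX;
    for (int i = 0; i < (int)verts.size(); ++i) {
        float dx = verts[i].x - mouseX;
        float dy = verts[i].y - mouseY;
        float d2 = dx * dx + dy * dy;  // compare squared distances, no sqrt
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;  // -1 if the vertex list is empty
}
```

A circle tool is the same loop collecting every vertex with d2 < radius * radius instead of the single minimum, and a lasso is a point-in-polygon test on each projected vertex.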
If you are just clicking on a vertex (or other object) in one of the views, then you can render the scene into another buffer using a different "color" for each object in the scene. To figure out which object you clicked on, you just have to read the color from that pixel.
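A sketch of that trick in legacy OpenGL, where drawObjectGeometry is a hypothetical stand-in for your own draw call (run this before swapping buffers, so the picking colors are never shown):

```cpp
#include <GL/glut.h>

void drawObjectGeometry(int id);  // assumption: draws object `id` without changing color

int pickObject(int mouseX, int mouseY, int objectCount, int windowHeight) {
    glDisable(GL_LIGHTING);        // colors must reach the buffer unmodified
    glDisable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int id = 0; id < objectCount; ++id) {
        int code = id + 1;         // encode id+1 so 0 can mean "background"
        glColor3ub(code & 0xFF, (code >> 8) & 0xFF, (code >> 16) & 0xFF);
        drawObjectGeometry(id);
    }
    glFlush();
    unsigned char pix[3];
    // OpenGL's origin is bottom-left; mouse coordinates are usually top-left.
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pix);
    int code = pix[0] | (pix[1] << 8) | (pix[2] << 16);
    return code - 1;               // -1 means nothing was hit
}
```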
In other circumstances, you can store the vertex data in a spatial index such as an octree.
Remember: Blender is open-source, so you can just read the source code if you want to find out how Blender does it.
I have a textured polygon mesh that I plan to make movable based on various user inputs.
For example: the user can move the vertices in various directions. But the number of vertices and the texture coordinates will always be constant.
Is this a good situation to use GL_STATIC_DRAW, or should I use something else, like GL_STREAM_DRAW?
Instead of updating a VBO every time the vertices are moved, I would suggest using transformations. With transformations, you can create a matrix that can translate, rotate, or scale the vertices by simply multiplying the transformation matrix by the position vector. This multiplication can be done on the graphics card with a GLSL shader. Using this method, your vertex buffer would never have to change.
I would suggest reading this article for more information on how to use transformations in OpenGL: https://open.gl/transformations
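As a minimal sketch of the shader side (the uniform name `model` is my own choice, not a fixed convention), the vertex shader does the multiplication and the buffer contents never change:

```cpp
// Vertex shader: the VBO stays fixed; only the `model` matrix is updated.
const char* vertexShaderSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 model;  // translation, rotation and scale baked into one matrix
    void main() {
        gl_Position = model * vec4(position, 1.0);
    }
)";

// Each frame, upload the new matrix instead of rewriting the buffer:
//   glUniformMatrix4fv(glGetUniformLocation(program, "model"),
//                      1, GL_FALSE, modelMatrixPtr);
```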
No, your situation is not a good case for GL_STATIC_DRAW. As h4lcOn's link suggests, you should use dynamic or stream. Though if I understand correctly what you are trying to do, I wouldn't use a VBO at all. There will not be much overhead (if any) in pushing the coordinates on every draw call for a simple polygon. Use a VBO when you have a large number of polygons, or when you make a large number of draw calls with the same vertex data in a single frame.
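If you do keep a VBO, a hedged sketch of the dynamic path looks like this: allocate once with GL_DYNAMIC_DRAW, then overwrite the data with glBufferSubData whenever the user moves vertices (GLEW here is just one way to get the GL 1.5+ buffer entry points):

```cpp
#include <GL/glew.h>

// Allocate the buffer once with a usage hint that matches frequent updates...
GLuint createDynamicVbo(GLsizeiptr byteSize) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, byteSize, NULL, GL_DYNAMIC_DRAW);
    return vbo;
}

// ...then overwrite just the vertex data whenever the user drags vertices.
void updateVbo(GLuint vbo, const GLfloat* vertices, GLsizeiptr byteSize) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, byteSize, vertices);
}
```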
In my application, I have the shape and dimensions of a complex 3D solid (say a Cylinder Block) taken from user input. I need to construct vertex and index buffers for it.
Since the dimensions are taken from user input, I cannot use Blender or 3ds Max to manually create my model. What is the textbook method to dynamically generate such a mesh?
I am looking for something that will generate the triangles given the vertices, edges and holes. Something like TetGen, though TetGen has no way of excluding the triangles which fall on the interior of the solid/mesh.
Sounds like you need to create an array of vertices and a list of triangles, each of which contains a list of 3 indices into the vertex array. There is no easy way to do this. To draw a box, you need 8 vertices and 12 triangles (2 per side). Some representations use explicit edge representations too. I suspect this is way more work than you want to do, so.....
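Spelled out for the box case, that representation is just two arrays (winding here is counter-clockwise as seen from outside the box; flip it if your culling convention differs):

```cpp
// 8 vertices of a unit box...
static const float boxVerts[8][3] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},  // z = 0 corners
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},  // z = 1 corners
};
// ...and 12 triangles (2 per side) as indices into that array.
static const unsigned boxTris[12][3] = {
    {0,2,1}, {0,3,2},  // back   (z = 0)
    {4,5,6}, {4,6,7},  // front  (z = 1)
    {0,1,5}, {0,5,4},  // bottom (y = 0)
    {3,7,6}, {3,6,2},  // top    (y = 1)
    {1,2,6}, {1,6,5},  // right  (x = 1)
    {0,4,7}, {0,7,3},  // left   (x = 0)
};
```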
What you need is a mesh library that can do CSG (constructive solid geometry). This way you should be able to specify the dimensions of the block and the dimensions of the cylinders, and tell it to cut them out for you (a CSG difference). All the vertex and triangle management would be done for you. In the end, such a library should be able to export the mesh to some common formats. The only problem is that I don't know the name of such a library. Something tells me that Blender can actually do all of this if you know how to script it. I also suspect there are one or two fairly good libraries out there.
Google actually brought me back to StackOverflow with this:
A Good 3D mesh library
You may ultimately need to generate simple meshes programmatically and manipulate them with a library if it doesn't provide functions for creating meshes (they all talk about manipulating a mesh or doing CSG).
It depends a bit on your requirements.
If you don't need to access the mesh after generating it, but only need to render it, the fastest option is to create a vertex buffer with glGenBuffers, allocate its storage with glBufferData, map it into memory with glMapBuffer, write your data into the mapping, then unmap it with glUnmapBuffer. Drawing will be very fast because all the data can be uploaded to video card memory.
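A sketch of that flow, with generateMeshInto as a hypothetical stand-in for your own generator and GLEW as one way to get the buffer entry points:

```cpp
#include <GL/glew.h>
#include <cstddef>

void generateMeshInto(void* dst, size_t byteSize);  // assumption: your mesh generator

GLuint uploadMesh(size_t meshBytes) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Allocate the data store first; mapping requires it to exist.
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)meshBytes, NULL, GL_STATIC_DRAW);
    void* dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    generateMeshInto(dst, meshBytes);  // write vertices straight into the mapping
    glUnmapBuffer(GL_ARRAY_BUFFER);    // hand the data to the driver
    return vbo;
}
```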
If you do need to access the mesh data after generating it, or if you expect to modify it regularly, you might be better off building your vertex data in a regular array and using vertex arrays with glVertexPointer and friends.
You can also use a combination: generate the mesh data in main memory, then memcpy() it into a mapped vertex buffer.
Finally, if by "dimensions" you mean just scaling the entire thing, you can create it offline in any 3D modelling program and use the OpenGL transformations, for example glScale, to apply the dimensions to the mesh while rendering.
I'm not sure if the Marching Cubes algorithm would be of any help here.