I know that GPUs are efficient at rendering triangles and that a GPU can't render a NURBS surface directly; the link NURBS in the OpenGL Graphics Pipeline explains this. To render NURBS, triangulation is required, but I don't know where that triangulation happens. Is it performed by the CAD modeling software or by the GPU? I guess it is done by the CAD modeling software. If so, is triangulating NURBS surfaces a core task of any CAD modeling software? Is this task very complex, or has it been solved almost completely by academic researchers?
Why do I have this question? NURBS surfaces look smooth in CAD modeling software. However, if I first convert a NURBS surface to a triangle mesh and open it in MeshLab, some regions are not smooth. I would like to know which part of the pipeline causes the problem.
There are several possible reasons for seeing different rendered images in the CAD modeling software you used and in MeshLab:
When the CAD software renders the NURBS surface, the resolution of the triangle mesh it generates internally typically depends on the zoom level. When you use a different program (or even a different command in the same CAD software) to convert the NURBS surface to a triangle mesh, there is no guarantee that you get the same triangle mesh.
When CAD software renders a NURBS surface, it not only generates a triangle mesh; it also evaluates the surface normal at each triangle vertex from the NURBS surface and passes it to the rendering engine. When MeshLab renders the triangle mesh, which probably does not contain any vertex normal data, it has to compute the vertex normals with its own algorithm, typically by averaging the face normals of all triangles sharing the vertex (see the sketch at the end of this answer). Different vertex normal data will lead to different rendered images.
The CAD software you used and MeshLab could use totally different shading techniques.
All in all, these are just some possible reasons. If you can provide actual images, it will help narrow down the real cause.
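For illustration, here is a minimal sketch (not MeshLab's actual code) of the "average the face normals" fallback from the second point; the Vec3 type and the index layout are assumptions made for the example:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return len > 0 ? Vec3{v.x/len, v.y/len, v.z/len} : v;
    }

    // Per-vertex normals inferred purely from the triangle mesh: each vertex
    // normal is the (area-weighted) average of the normals of the faces that
    // share it.  A NURBS-aware exporter would instead evaluate the exact
    // surface normal at each vertex, which is why the shading can differ.
    std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                           const std::vector<unsigned>& triIndices)
    {
        std::vector<Vec3> normals(positions.size());
        for (size_t i = 0; i + 2 < triIndices.size(); i += 3) {
            unsigned a = triIndices[i], b = triIndices[i + 1], c = triIndices[i + 2];
            // Unnormalized cross product = face normal weighted by triangle area.
            Vec3 faceN = cross(sub(positions[b], positions[a]),
                               sub(positions[c], positions[a]));
            normals[a] = add(normals[a], faceN);
            normals[b] = add(normals[b], faceN);
            normals[c] = add(normals[c], faceN);
        }
        for (Vec3& n : normals) n = normalize(n);
        return normals;
    }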
Typically it's done on the CPU, by the CAD software. If you could post screenshots of the difference you're seeing, it would be easier to diagnose the cause of the problem. It might just be using face normals or something.
Related
I want to know the techniques used in OpenGL to render complex surfaces that can't be represented by a single mathematical equation (like a car). Do we create them by combining many basic elements (spheres, cones, ...)? Or are there other methods?
I am about to start creating an app that will render a 3D car and want to know where to start.
You can use a third-party tool such as Blender to create models that can be exported and then rendered in OpenGL. These models are usually composed of triangles, but they are drawn with 3D tools analogous to 2D's pen and paper.
Here is a tutorial on the subject: OpenGL Model Tutorial
Yes, OpenGL produces images by drawing many basic elements (known as "primitives").
The most commonly used primitives are triangles and quadrilaterals (or "quads", which are commonly implemented as two triangles). (There are also provisions for drawing line and point primitives, but these are not typically used for drawing photorealistic surfaces.)
Complex surfaces are approximated with a mesh of triangles or quads. Hidden surface removal is typically done by using a depth map: primitives drawn closer to the camera inhibit and override more distant primitives on a per-pixel basis.
In order to reduce the tessellation level necessary to produce a good image, OpenGL supports interpolation of a (fictitious) tangent plane between triangle (and quad) corners. This cheap approximation, called a "shading normal", practically eliminates the faceted appearance that would otherwise mar a continuous surface approximated with a modest number of primitives.
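As a rough illustration of what these shading normals buy you, here is a sketch of the interpolation the rasterizer effectively performs across a triangle (the Vec3 helpers are made up for the example): the per-pixel normal is a barycentric blend of the corner normals, renormalized before lighting.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return {v.x/len, v.y/len, v.z/len};
    }

    // Barycentric blend of the three corner ("shading") normals of a triangle.
    // The hardware does essentially this per fragment, so lighting varies
    // smoothly across the face even though the geometry itself is flat.
    Vec3 interpolateShadingNormal(Vec3 n0, Vec3 n1, Vec3 n2,
                                  float w0, float w1, float w2)  // w0 + w1 + w2 == 1
    {
        Vec3 n = { w0*n0.x + w1*n1.x + w2*n2.x,
                   w0*n0.y + w1*n1.y + w2*n2.y,
                   w0*n0.z + w1*n1.z + w2*n2.z };
        return normalize(n);
    }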
It is very hard to design your object from scratch. OpenGL is a polygon rendering machine. Objects are usually represented by vertices for their 3D positions and indices for how they connect, e.g. in an OBJ file. You can find OBJ files that represent a car, for example, or anything else, and render them. There is also an alternative method for designing complex models using patches. Patches are used in CAD programs because they give better control over the model; to make a more complex model you connect many patches together.
I'm curious about how NURBS are rendered on GPUs / in the OpenGL graphics pipeline. I understand there are various calls within OpenGL and GLUT for easily rendering NURBS objects from a coding perspective, using glMap and glMapGrid, but what I don't get is the process OpenGL goes through to do this. The idea behind NURBS is using curves to define surfaces, whereas the graphics pipeline appears to be built around triangle rasterization and triangle meshes; NURBS, by contrast, are based on Bézier curves, which are curved.
So how are NURBS actually rendered, from a (high-level) pipeline perspective?
The simple answer is that they are not dealt with in the OpenGL pipeline, but must be converted to something that the GL pipeline can process. The general approach would probably be to first convert to a primitive that is a little more real-time friendly, such as Bézier patches, and then tessellate these at runtime into triangles.
Tessellation could be regular, mapping a grid onto the patch, or could be based on curvature, subdividing the patch more where there is higher variance. Either way the surface is only truly evaluated at some vertices and rendered as flat polygons (though shaders can be used to create appropriately smoothly varying normals, etc.).
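A minimal sketch of the "regular grid" variant, assuming a single bicubic Bézier patch (the 4x4 control-point layout and the Vec3 type are assumptions): the patch is evaluated on an n x n grid of (u, v) parameters and the samples become triangle vertices.

    #include <vector>

    struct Vec3 { float x = 0, y = 0, z = 0; };

    // Cubic Bernstein basis functions.
    static void bernstein3(float t, float b[4]) {
        float s = 1.0f - t;
        b[0] = s * s * s;
        b[1] = 3.0f * s * s * t;
        b[2] = 3.0f * s * t * t;
        b[3] = t * t * t;
    }

    // Evaluate a bicubic Bézier patch (4x4 control points, row-major) at (u, v).
    static Vec3 evalPatch(const Vec3 cp[16], float u, float v) {
        float bu[4], bv[4];
        bernstein3(u, bu);
        bernstein3(v, bv);
        Vec3 p;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                float w = bu[i] * bv[j];
                p.x += w * cp[i * 4 + j].x;
                p.y += w * cp[i * 4 + j].y;
                p.z += w * cp[i * 4 + j].z;
            }
        return p;
    }

    // Regular tessellation: sample the patch on an n x n grid and emit two
    // triangles per grid cell.  A curvature-adaptive scheme would refine the
    // grid only where the surface bends strongly.
    void tessellatePatch(const Vec3 cp[16], int n,
                         std::vector<Vec3>& vertices, std::vector<unsigned>& indices) {
        for (int i = 0; i <= n; ++i)
            for (int j = 0; j <= n; ++j)
                vertices.push_back(evalPatch(cp, float(i) / n, float(j) / n));
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                unsigned v00 = i * (n + 1) + j, v01 = v00 + 1;
                unsigned v10 = v00 + (n + 1),   v11 = v10 + 1;
                indices.insert(indices.end(), {v00, v10, v01, v01, v10, v11});
            }
    }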
glMap() et al. (which were previously used to help render Bézier patches, etc.) are deprecated and no longer present in the modern OpenGL API. Nowadays you would use shaders to deal with tessellation.
I've been using OpenGL for some time now for making 3D applications, but I never really understood the use of the GL_POINTS and GL_LINES primitive drawing types for 3D games in the production phase.
(Where) are point and line primitives in OpenGL still used in modern games?
You know, OpenGL is not just for games; there are other kinds of programs besides games. Think of CAD programs or map editors, where wireframes are still very useful.
GL_POINTS are used in games for point sprites (either via the point-sprite functionality or by generating a quad from a point in the geometry shader), both for "sparkle" effects and for volumetric clouds.
They are also used in some special algorithms just when, well... points are needed, such as building histograms in the geometry shader, as described in a chapter of one of the later GPU Gems books, or GPU instance culling via transform feedback.
GL_LINES have little use in games (they are mostly useful for CAD or modelling apps). Besides not being needed often, when they are needed you will normally want lines with a thickness greater than 1, which is not well supported (read: fast) on all implementations.
In such a case, one usually draws thick lines with triangle strips.
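A minimal sketch of that idea in 2D screen space (the Vec2 type and the output layout are assumptions): each segment is expanded into a quad of the desired thickness and appended as two triangles.

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Expand a 2D line segment into a quad of the given thickness and append
    // it as two triangles (6 vertices).  In practice you do this on the CPU
    // or in a geometry shader, then draw the result as GL_TRIANGLES.
    void appendThickLine(Vec2 a, Vec2 b, float thickness, std::vector<Vec2>& out) {
        Vec2 dir = { b.x - a.x, b.y - a.y };
        float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
        if (len == 0.0f) return;
        // Unit normal of the segment, scaled to half the thickness.
        Vec2 n = { -dir.y / len * 0.5f * thickness, dir.x / len * 0.5f * thickness };
        Vec2 a0 = { a.x + n.x, a.y + n.y }, a1 = { a.x - n.x, a.y - n.y };
        Vec2 b0 = { b.x + n.x, b.y + n.y }, b1 = { b.x - n.x, b.y - n.y };
        out.insert(out.end(), { a0, a1, b0,   b0, a1, b1 });
    }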
Who ever said those primitives were used in modern games?
GL_LINES is critical for wireframe views in 3D modeling tools.
(Where) are point and line primitives in OpenGL still used in modern games?
Where do you want them to be used?
Under standard methods, points can be used to build point sprites, which are 2D flatcards that always face the camera and are of a particular size. They are always square in window-space. Sadly, the OpenGL specification makes using them somewhat dubious, as point sprites are clipped based on the center of the point, not the size of the two triangles that are used to render it.
Lines are perfectly reasonable for line drawing. Once upon a time, lines weren't available in consumer hardware, but they have been around for many years now. Of course, antialiased line rendering (GL_LINE_SMOOTH) is another matter.
More important is the interaction of these things with geometry shaders. You can convert points into a quad. Or a triangle. Or whatever you want, really. Each "point" is just an execution of the geometry shader. You can have points that contain the position and radius of a sphere, and the geometry shader can output a window-aligned quad of the appropriate size for the fragment shader to do some raytracing logic on it.
GL_POINTS just means "one vertex per geometry shader". GL_LINES means "two vertices per geometry shader." How you use it is up to you.
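For example, a geometry shader along these lines turns each GL_POINTS vertex into a camera-facing quad of a given half-size. This is only a hypothetical sketch, shown as a GLSL source string in C++ so it can be fed to glShaderSource()/glCompileShader() as usual; the uniform name is an assumption.

    // Hypothetical point-to-quad geometry shader source.
    const char* kPointToQuadGeomShader = R"GLSL(
    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform float uHalfSize;   // half the quad size in clip space

    void main() {
        vec4 c = gl_in[0].gl_Position;   // point position in clip space
        // Emit a camera-facing quad around the point as a triangle strip.
        gl_Position = c + vec4(-uHalfSize, -uHalfSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( uHalfSize, -uHalfSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4(-uHalfSize,  uHalfSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( uHalfSize,  uHalfSize, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }
    )GLSL";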
I'd say for debugging purposes, but that is just from my own perspective.
Some primitives can be used in areas where you don't think they can be applied, such as a particle system.
I agree with Pompe de velo about lines being useful for debugging. They can be useful when debugging AI and collision detection algorithms, so that you can visualize the data used by the AI or the collision detection. Some example uses for AI: lines can show AI paths or path meshes, the steering data the AI is using, or what an AI is aiming at. That data could be displayed in text form, but sometimes it is easier to see it in visual form.
In most cases particles are based on GL_POINTS; considering that there can be a huge number of particles on the screen, it would be very expensive to use 4 vertices per particle, so GL_POINTS solves this problem.
GL_LINES is good for debugging purposes; wireframe mode can be used in various cases. As mentioned above, in CAD apps, but if you're interested in gamedev use, it's good for a scene editor.
In terms of collision detection, they come in handy when you want to visualize bounding volumes (boxes, spheres, k-DOPs) and contact manifolds in wireframe mode. Setting the colour of these primitives based on the status of collisions is incredibly useful as well.
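For instance, the 12 edges of an axis-aligned bounding box map directly onto a GL_LINES index list. A minimal sketch, assuming the 8 corner positions are already in a bound VBO/VAO and the corner ordering below:

    #include <GL/gl.h>

    // Corner i is (i&1 ? max.x : min.x, i&2 ? max.y : min.y, i&4 ? max.z : min.z),
    // so the 12 box edges become 24 GL_LINES indices.
    static const GLuint kBoxEdgeIndices[24] = {
        0,1, 2,3, 4,5, 6,7,      // edges along X
        0,2, 1,3, 4,6, 5,7,      // edges along Y
        0,4, 1,5, 2,6, 3,7       // edges along Z
    };

    // With the indices above in a bound element buffer, the whole wireframe
    // box is a single draw call.
    void drawAabbWireframe() {
        glDrawElements(GL_LINES, 24, GL_UNSIGNED_INT, (void*)0);
    }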
How can I do an extrude cut (like in SolidWorks) on a 3D model?
Is there an easy way, or do I have to do some complex calculations?
What you want to do is part of a discipline called Constructive Solid Geometry (CSG), and it's one of the trickiest subjects of 3D graphics and geometry processing. There are several approaches to tackle the problem:
If you're just interested in rendering CSG in a raytracer, things are actually quite easy: at every ray/surface intersection you increment or decrement a counter. CSG combinations can also be transformed into a surface count. By comparing the ray intersection counter with the CSG surface count you can apply the CSG operations on the traced ray (a small sketch of this along-the-ray boolean follows the list).
If you're interested in doing CSG on triangulated models, the most common approach is to build BSP trees from the geometry and apply the CSG operations on the BSPs. Then from the resulting BSP you recreate the mesh. This is how it's implemented in mesh-based modellers (take a look at Blender's source code, which does exactly this).
CSG on analytical surfaces is extremely difficult. There are no closed solutions for the intersection of curves or curved surfaces. The best approach is to numerically find a number of sampling points in the intersection and fit a curve along the intersection. This can get numerically unstable.
Tessellation-phase processing (this is what I implemented, or maybe even invented, for my 3D engine): when rendering curves or curved patches on 3D hardware, one usually has to tessellate them into triangular meshes first. In this tessellation phase you can test whether the edges of a newly created triangle intersect with curves/curved surfaces; use a few iterations of a Newton zero-crossing solver to find the point of intersection of both curves/surfaces and store it as a sampling point on the edge for both patches involved (so that the tessellation of the other surface shares its vertex positions with the first surface). After the first tessellation stage, use a relaxation method (basically apply a Laplacian) on the vertices while constraining them to the surface (remember that your surfaces are mathematically exact, so it's very easy to fiddle with the parameters of the surface, but use the resulting positions as the metric). It works very well as long as intersections with ordinary triangulated meshes don't have to be considered (each triangle of the mesh would have to be turned into a surface patch, slowing down the method).
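To make the first (raytracer) approach concrete, here is a small sketch under simplified assumptions of my own: each solid is reduced to the sorted [tEnter, tExit] intervals where a ray is inside it, and the boolean is applied to those 1D intervals by sweeping over the boundaries with an inside/outside counter per solid.

    #include <algorithm>
    #include <vector>

    // Parameter intervals along one ray where the ray is inside a solid.
    struct Interval { float tEnter, tExit; };

    enum class CsgOp { Union, Intersection, Difference };   // Difference = A minus B

    // 1D CSG along a single ray: sweep over all interval boundaries and keep
    // the spans where the chosen boolean predicate on (insideA, insideB) holds.
    std::vector<Interval> csgAlongRay(const std::vector<Interval>& a,
                                      const std::vector<Interval>& b, CsgOp op) {
        struct Event { float t; int dA, dB; };
        std::vector<Event> events;
        for (const Interval& i : a) { events.push_back({i.tEnter, +1, 0}); events.push_back({i.tExit, -1, 0}); }
        for (const Interval& i : b) { events.push_back({i.tEnter, 0, +1}); events.push_back({i.tExit, 0, -1}); }
        std::sort(events.begin(), events.end(),
                  [](const Event& x, const Event& y) { return x.t < y.t; });

        auto inside = [op](int countA, int countB) {
            bool inA = countA > 0, inB = countB > 0;
            switch (op) {
                case CsgOp::Union:        return inA || inB;
                case CsgOp::Intersection: return inA && inB;
                case CsgOp::Difference:   return inA && !inB;
            }
            return false;
        };

        std::vector<Interval> result;
        int cA = 0, cB = 0;
        bool wasInside = false;
        float start = 0.0f;
        for (const Event& e : events) {
            cA += e.dA;
            cB += e.dB;
            bool nowInside = inside(cA, cB);
            if (nowInside && !wasInside) start = e.t;                     // entering the result solid
            if (!nowInside && wasInside) result.push_back({start, e.t});  // leaving it
            wasInside = nowInside;
        }
        return result;
    }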
You tagged this OpenGL, so to get this straight: OpenGL can't help you there, as OpenGL just draws triangles; it does not process complex geometry.
Citing the OpenGL FAQ:

What is OpenGL?

OpenGL stands for Open Graphics Library. It is an API for doing 3D graphics. In more specific terms, it is an API that is used to "draw triangles on your scene". In this age of GPUs, it is about talking to the GPU so that it does the job of drawing. It does not deal with file formats. It does not open bmp, png and any image format. It does not open 3d object formats like obj, max, maya. It does not do animation. It does not handle keyboard, mouse and any input devices. It does not create a window, and so on.

All that stuff should be handled by an external library (GLUT is one example that is used for creating and destroying a window and handling mouse and keyboard).

GL has gone through a number of versions.
So the answer is no. Things like extrude cuts are complex operations. You have to implement them on your own, or use third-party libraries.
I would like to draw voxels using OpenGL, but it doesn't seem to be supported directly. I made a cube drawing function that has 24 vertices (4 vertices per face), but the frame rate drops when you draw 2500 cubes. I was hoping there was a better way. Ideally I would just like to send a position, edge size, and color to the graphics card. I'm not sure if I can do this by using GLSL to compile instructions as part of the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as a quicker alternative to drawing a cube? If I used 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain me a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
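If you go the volume-rendering route, the voxel data typically ends up in a 3D texture, roughly like this (a minimal sketch; the 8-bit density array and its dimensions are assumed to exist already, and you'd normally pull in the GL functions via a loader such as glad or GLEW):

    #include <GL/gl.h>   // or your GL loader of choice (glad, GLEW, ...)

    // Upload a width x height x depth block of 8-bit density values into a
    // 3D texture that the slice-rendering (or raymarching) shader can sample.
    GLuint createDensityTexture(const unsigned char* density,
                                int width, int height, int depth) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, depth, 0,
                     GL_RED, GL_UNSIGNED_BYTE, density);
        return tex;
    }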
EDIT: When writing the above answer I didn't realize that the OP was most likely interested in how Minecraft does it. For techniques to speed up Minecraft-style rasterization, check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become a reality.
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these were only added as core operations relatively recently (OpenGL 3.1), but had been available as extensions for quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
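A minimal sketch of the instanced path, with the non-instancing-specific buffer setup omitted and attribute location 3 for the per-cube offset being an assumption: one cube's vertices and indices are uploaded once, a second buffer holds one position per cube, and the divisor makes that attribute advance per instance.

    #include <GL/gl.h>   // assumes a GL 3.1+ context and an extension loader

    // 'cubeVao' references the 24 cube vertices and 36 indices uploaded once;
    // 'offsetVbo' holds one vec3 per cube instance.
    void drawCubesInstanced(GLuint cubeVao, GLuint offsetVbo, GLsizei cubeCount) {
        glBindVertexArray(cubeVao);

        // Per-instance attribute: advance the offset once per instance, not per vertex.
        glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
        glEnableVertexAttribArray(3);                       // location 3 is an assumption
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glVertexAttribDivisor(3, 1);

        // Draw all cubes with a single call; the vertex shader adds the
        // per-instance offset to the base cube vertex.
        glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, (void*)0, cubeCount);
    }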
First, you really should be looking at OpenGL 3+ using GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations do mesh creation on the CPU side. This technique involves looking at all of the block positions and creating a vertex buffer object that renders the triangles of all of the exposed faces. The VBO is only regenerated when the voxels change and is persisted between frames. An ideal implementation would combine coplanar faces of the same texture into larger faces.
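A rough sketch of the "only exposed faces" part (without the greedy merging of coplanar faces), assuming a dense boolean occupancy grid indexed as x + y*w + z*w*h; a real mesher would append the four vertices and six indices of each emitted face to the VBO data instead of a Face record:

    #include <vector>

    // Occupancy grid: solid(x, y, z) == true means there is a block there.
    struct VoxelGrid {
        int w, h, d;
        std::vector<bool> cells;                      // size w * h * d
        bool solid(int x, int y, int z) const {
            if (x < 0 || y < 0 || z < 0 || x >= w || y >= h || z >= d) return false;
            return cells[x + y * w + z * w * h];
        }
    };

    struct Face { int x, y, z, dir; };                // dir: 0..5 = -X,+X,-Y,+Y,-Z,+Z

    std::vector<Face> collectExposedFaces(const VoxelGrid& g) {
        static const int n[6][3] = { {-1,0,0}, {1,0,0}, {0,-1,0}, {0,1,0}, {0,0,-1}, {0,0,1} };
        std::vector<Face> faces;
        for (int z = 0; z < g.d; ++z)
            for (int y = 0; y < g.h; ++y)
                for (int x = 0; x < g.w; ++x) {
                    if (!g.solid(x, y, z)) continue;
                    for (int dir = 0; dir < 6; ++dir)
                        // A face is visible only if the neighbouring cell is empty.
                        if (!g.solid(x + n[dir][0], y + n[dir][1], z + n[dir][2]))
                            faces.push_back({x, y, z, dir});
                }
        return faces;
    }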