I've been using OpenGL for some time now to make 3D applications, but I've never really understood the use of the GL_POINTS and GL_LINES primitive types in the production phase of 3D games.
(Where) are point and line primitives in OpenGL still used in modern games?
You know, OpenGL is not just for games; there are other kinds of programs besides games. Think CAD programs, or map editors, where wireframes are still very useful.
GL_POINTS are used in games for point sprites (either via the point sprite functionality or by generating a quad from a point in the geometry shader), both for "sparkle" effects and for volumetric clouds.
They are also used in some special algorithms when, well... points are needed: building histograms in the geometry shader, as described in a chapter of one of the later GPU Gems books, or GPU instance culling via transform feedback.
GL_LINES has little use in games (it's mostly useful for CAD or modelling apps). Besides not being needed often, when lines are needed you usually want a thickness greater than 1, which is not well supported (read: fast) on all implementations.
In such a case, one usually draws thick lines with triangle strips instead, as sketched below.
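A minimal sketch of that expansion for the 2D case (a hypothetical helper; a real renderer would batch segments and handle the joins between them):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Expand segment a->b into the four corners of a GL_TRIANGLE_STRIP quad
// of the given thickness, by offsetting along the segment's normal.
std::vector<Vec2> thickLine(Vec2 a, Vec2 b, float thickness)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit normal scaled to half the thickness.
    Vec2 n = { -dy / len * 0.5f * thickness, dx / len * 0.5f * thickness };
    // Vertex order matches triangle-strip winding.
    return { { a.x + n.x, a.y + n.y }, { a.x - n.x, a.y - n.y },
             { b.x + n.x, b.y + n.y }, { b.x - n.x, b.y - n.y } };
}
```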
Who ever said those primitives were used in modern games?
GL_LINES is critical for wireframe views in 3D modeling tools.
(Where) are point and line primitives in OpenGL still used in modern games?
Where do you want them to be used?
Under standard methods, points can be used to build point sprites, which are 2D flat cards that always face the camera and have a particular size. They are always square in window space. Sadly, the OpenGL specification makes using them somewhat dubious: point sprites are clipped based on the center of the point, not on the full extent of the quad used to render them.
Lines are perfectly reasonable for line drawing. Once upon a time, lines weren't available in consumer hardware, but they have been around for many years now. Of course, antialiased line rendering (GL_LINE_SMOOTH) is another matter.
More important is the interaction of these things with geometry shaders. You can convert points into a quad. Or a triangle. Or whatever you want, really. Each "point" is just an execution of the geometry shader. You can have points which contain the position and radius of a sphere, and the geometry shader can output a window-aligned quad of the appropriate size for the fragment shader to do some ray-tracing logic on.
GL_POINTS just means "one vertex per geometry shader invocation" and GL_LINES means "two vertices per geometry shader invocation". How you use that is up to you.
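To make the point-to-quad case concrete, here is a minimal geometry shader sketch (GLSL kept in a C++ string; the "halfSize" uniform and "uv" output are illustrative names, and the compile/link boilerplate is omitted):

```cpp
// Expands each GL_POINTS vertex into a screen-aligned quad. Offsets are
// added in clip space, so sprites shrink with distance after the
// perspective divide.
const char* kPointToQuadGS = R"(#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform float halfSize;   // half the sprite size, in clip-space units
out vec2 uv;              // texture coordinate for the fragment shader

void main() {
    vec4 c = gl_in[0].gl_Position;
    uv = vec2(0, 0); gl_Position = c + vec4(-halfSize, -halfSize, 0, 0); EmitVertex();
    uv = vec2(1, 0); gl_Position = c + vec4( halfSize, -halfSize, 0, 0); EmitVertex();
    uv = vec2(0, 1); gl_Position = c + vec4(-halfSize,  halfSize, 0, 0); EmitVertex();
    uv = vec2(1, 1); gl_Position = c + vec4( halfSize,  halfSize, 0, 0); EmitVertex();
    EndPrimitive();
})";
```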
I'd say for debugging purposes, but that is just from my own perspective.
Some primitives can be used in areas where you wouldn't think they apply, such as a particle system.
I agree with Pompe de velo about lines being useful for debugging. They help when debugging AI and collision detection algorithms, letting you visualize the data those systems are working with. For AI, lines can show paths or path meshes, steering data, or what an AI is aiming at. The same data could be displayed as text, but it is often easier to take in visually.
In most cases particles are based on GL_POINTS: given that there can be a huge number of particles on screen, it would be very expensive to use 4 vertices per particle, and GL_POINTS avoids that cost (see the sketch below).
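A minimal sketch of that draw path, assuming an existing GL context, a bound VAO/VBO of particle positions, and a compiled program whose vertex shader writes gl_PointSize (the names here are illustrative):

```cpp
// One vertex per particle; the vertex shader sets gl_PointSize, which
// GL_PROGRAM_POINT_SIZE enables.
glEnable(GL_PROGRAM_POINT_SIZE);
glUseProgram(particleProgram);
glDrawArrays(GL_POINTS, 0, particleCount);
```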
GL_LINES is good for debugging purposes, and wireframe mode can be used in various cases. As mentioned above, in CAD apps; but if you're interested in gamedev use, it's good for a scene editor.
In terms of collision detection, they come in handy when you want to visualize bounding volumes (boxes, spheres, k-DOPs) and contact manifolds in wireframe mode. Setting the colour of these primitives based on the status of collisions is incredibly useful as well; a sketch follows.
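For instance, the 12 edges of an axis-aligned bounding box can be drawn with a single GL_LINES call over 8 corner vertices (a sketch; corner i takes min or max on axis k according to bit k of i, and a core profile would put the indices in an element buffer first):

```cpp
// Index pairs for the 12 AABB edges, referencing 8 corner vertices in a
// bound vertex buffer. Colour could be set per box from collision state.
const unsigned aabbEdges[24] = {
    0, 1,  1, 3,  3, 2,  2, 0,   // bottom face loop
    4, 5,  5, 7,  7, 6,  6, 4,   // top face loop
    0, 4,  1, 5,  2, 6,  3, 7    // vertical edges
};
// With the corners uploaded:
// glDrawElements(GL_LINES, 24, GL_UNSIGNED_INT, aabbEdges);
```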
Related
After looking at some programs for 2D modeling, I noticed that all primitives are drawn as segments (see attached picture).
For example, why is the circle drawn as a polygon? It seems to me that it would be much easier to create a shader that draws the circle correctly regardless of the magnification (scaling).
It is also interesting: are these segments each drawn separately, or as one draw call with a special shader for each shape?
What is the main reason the developers chose this path? What are they trying to achieve?
3D graphics APIs support only triangles, points and line segments; there are no built-in primitives for rendering a circle or anything like it. Therefore, the first two reasons for drawing all types of curves as polylines are uniformity (you can render ANY type of curve as a set of line segments) and performance (line segments are a native type supported by the GPU). Drawing primitives of the same type using the same universal GLSL program allows rendering many curves at once and reduces the overall number of draw calls in an optimized engine.
Moreover, you don't actually need a special GLSL program to avoid rough tessellation; just split your curve into more segments to make it appear smooth on the screen. You will have to balance performance and quality, though. Ideally, the tessellation level should change dynamically based on zoom level and be applied only to figures visible on the screen. This is not trivial to implement, but it is much more straightforward for 2D drawings than for 3D; a sketch follows.
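A minimal sketch of zoom-dependent circle tessellation (the scale constant and the 16-segment floor are arbitrary choices):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Tessellate a circle into a polyline (drawable as GL_LINE_LOOP). The
// segment count grows with the on-screen radius so the curve stays
// smooth at any magnification.
std::vector<Vec2> tessellateCircle(Vec2 center, float radius, float zoom)
{
    int segments = std::max(16, int(radius * zoom * 0.5f));
    std::vector<Vec2> pts;
    pts.reserve(segments);
    for (int i = 0; i < segments; ++i) {
        float a = 2.0f * 3.14159265f * float(i) / float(segments);
        pts.push_back({ center.x + radius * std::cos(a),
                        center.y + radius * std::sin(a) });
    }
    return pts;
}
```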
GLSL programs allow implementing various tricks, but rendering a fixed-width curve would require a Tessellation Shader (or at least a Geometry Shader), which WebGL doesn't support, or some dirty tricks! So I wouldn't say that drawing a thin circle of reliable quality via a GLSL program is that simple.
It is possible, though, to render simple shapes like a filled circle using just a Fragment Shader: draw a rectangle and discard the fragments that fall outside the circle according to the circle equation. But that gives you just a single solid circle, while there are a lot of other figures and combinations of them! Hence, again: uniformity and simplicity.
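A sketch of that fragment shader (GLSL kept in a C++ string; the uv varying, assumed to span [-1, 1] across the bounding quad, and the uniform name are illustrative):

```cpp
// Discards fragments outside the unit circle, leaving a filled disc.
const char* kCircleFS = R"(#version 330 core
in vec2 uv;                  // [-1, 1] across the quad
uniform vec4 circleColor;
out vec4 fragColor;

void main() {
    if (dot(uv, uv) > 1.0)   // circle equation: x^2 + y^2 > r^2
        discard;
    fragColor = circleColor;
})";
```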
Indeed, there are applications implementing special GLSL programs for a limited set of commonly used figures, but these require a lot of development.
Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And 2. can this be done solely in shaders (e.g. the geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror plane isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction (see the sketch below). This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap while being easy to integrate into an existing engine.
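The lookup itself is only a few lines of fragment shader (GLSL in a C++ string; varying and uniform names are illustrative):

```cpp
// Reflect the view direction about the surface normal and use the
// result to sample the environment cubemap.
const char* kMirrorFS = R"(#version 330 core
in vec3 worldNormal;
in vec3 worldPos;
uniform vec3 cameraPos;
uniform samplerCube envMap;
out vec4 fragColor;

void main() {
    vec3 viewDir = normalize(worldPos - cameraPos);
    vec3 r = reflect(viewDir, normalize(worldNormal));
    fragColor = texture(envMap, r);
})";
```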
Screen space ray-marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for this, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
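Very roughly, the march looks like this (a heavily simplified sketch in GLSL kept as a C++ string; all names here are assumed, the depth comparison glosses over linearization details, and real implementations add binary-search refinement and screen-edge fade-out):

```cpp
// Step along the view-space reflection ray; project each sample and
// test it against the depth buffer. GL view space looks down -z.
const char* kSSRSnippet = R"(
vec3 pos = viewPos;
vec3 dir = reflect(normalize(viewPos), viewNormal) * stepSize;
vec3 reflection = vec3(0.0);
for (int i = 0; i < 64; ++i) {
    pos += dir;
    vec4 clip = projection * vec4(pos, 1.0);
    vec2 uv = clip.xy / clip.w * 0.5 + 0.5;
    float sceneDepth = linearizeDepth(texture(depthTex, uv).r);
    if (sceneDepth < -pos.z) {          // hit: reuse the scene colour
        reflection = texture(colorTex, uv).rgb;
        break;
    }
})";
```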
There are probably many other ways to render mirrors, but these are the three main ones (at least as far as I know) for doing reflections.
To preface this question, I have a competent understanding of OpenGL and the maths behind it, and while I have never touched anything related to DirectX I imagine the concepts are similar.
There is plenty of information around about why triangles are used for 3D graphics (they are necessarily planar, are indivisible except into smaller triangles, etc). However, I would like to know if triangles are merely a convenient way of storing and manipulating 3D data (simpler maths regarding interpolation, etc), or if there is a hardware limitation in the graphics card that only realistically allows the rendering of triangles (e.g. instructions that can essentially ONLY be applied to triangles).
Following on from this, is there any way to achieve pixel-by-pixel control of graphics rendering (as outlined briefly by the answer to this question)? While I appreciate that direct control over individual pixels goes through a driver, is there any way I can get this kind of control over a rendering environment? Is there a way to 'avoid triangles' completely?
Yes and no. Kind of.
Current GPUs are designed to render triangles because triangles are nice to work with. And because current GPUs are designed to work with triangles, people use triangles and so GPUs only need to process triangles, and so they're designed to process only triangles.
As you say, triangles just have advantages that make them convenient to use. GPUs can be made (and have been made) to render other primitives natively, but it's just not really worth it. If you tell a modern GPU to render a quad, it splits it up into two triangles and renders those.
Not because there's a technical reason why a GPU can't render quads natively, but because it's not worth spending transistors on. It's much more useful to focus the GPU on doing triangles as fast as possible, and then just emulate other primitives if they're needed.
So yes, modern GPUs have hardware limitations so they don't work with quads, for example, but not because it's impossible to design a GPU which works with quads. It'd just be less efficient to do so. :)
As for "avoiding triangles", sure, that's basically what the fragment shader does: it fills in one single pixel. The GPU just runs it a few million times in parallel to fill in the entire screen. You could draw two big triangles, which form a quad filling the entire screen, and then just specify a fragment shader which fills that with the content you like.
If you want more control over the process, do it in software instead: paint one pixel at a time to a memory surface, and then load that as a texture on the GPU. But it's slow. :)
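A sketch of that software path, assuming an existing GL context and a created texture object "tex" (the checker pattern stands in for whatever per-pixel logic you want):

```cpp
#include <vector>

// Paint into a CPU-side RGBA buffer, then upload it as a texture that a
// fullscreen quad can display.
std::vector<unsigned char> pixels(width * height * 4);
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x) {
        unsigned char c = ((x / 8 + y / 8) % 2) ? 255 : 0;
        unsigned char* p = &pixels[(y * width + x) * 4];
        p[0] = p[1] = p[2] = c;
        p[3] = 255;
    }
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
```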
As far as I know, every modern GPU CAN render quads, and some even n-gons, but comparing the render time of a quad to that of 2 triangles shows the triangle advantage.
This is mainly because GPUs have been optimized to render triangles, and the actual hardware has far more "stream processors" (for triangles) than other kinds, such as texture units. Some other processor types on the GPU can render quads directly, but normally you would find a thousand stream processors to a few texture processors.
Note that getting a texture unit to render a quad is EXTREMELY difficult. It is possible in theory, but no one has used the principle for a serious case.
Unless you are working very close to the hardware, the software will take care of the triangles (e.g. auto-converting them from quads).
I am unsure of how to describe what I'm after, so I drew a picture to help:
My question is: is it possible within OpenGL to create the illusion of those pixel-looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually Parallax mapping (or Parallax Occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using the height map to alter the texture UV coordinate being used.
The main disadvantage of parallax is that anything that appears to be 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented into a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and needs to be combined with other shader techniques for a good effect; the core UV shift is sketched below.
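The basic offset is only a few shader lines (GLSL in a C++ string; the names are illustrative, and the sign and 0.04 scale depend on your tangent-space conventions):

```cpp
// Shift the UV along the tangent-space view direction by an amount read
// from the height map, then sample the colour texture at the new spot.
const char* kParallaxSnippet = R"(
float height = texture(heightMap, uv).r;
vec2 shiftedUV = uv - viewDirTangent.xy / viewDirTangent.z * height * 0.04;
vec4 albedo = texture(diffuseMap, shiftedUV);
)";
```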
Bump mapping works by using a texture for normals; this makes the lighting appear 3D, but it does not change anything based on the position of the viewer, only the shading. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in surface angle. You would have to slope the edges like this.
Displacement mapping/tessellation uses a texture to generate more polygons, rather than staying with 1 polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax. See demo. There's a comparison video too (it's a bit low quality, but relief looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bumpmapping should suffice.
If you have DX11 class hardware then you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But then it gets a little complicated to get it running and develop something on top of it.
I would like to draw voxels using OpenGL, but it doesn't seem to be directly supported. I made a cube drawing function with 24 vertices (4 vertices per face), but the frame rate drops when you draw 2500 cubes. I was hoping there was a better way. Ideally, I would just like to send a position, edge size, and color to the graphics card. I'm not sure if I can do this with GLSL in the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (are those the same thing?). Could those be used as an alternative to draw a cube more quickly? If I use 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain me a better frame rate.
Another thought: maybe I can draw multiple cubes using one drawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was, most likely, interested in how Minecraft does that. For techniques to speed-up Minecraft-style rasterization check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become the reality.
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these were only added as core operations relatively recently (OGL 3.1), but have been available as extensions quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
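A sketch of the core instancing calls, assuming GL 3.3+ for glVertexAttribDivisor, a bound VAO holding the cube's vertex and index data, and a hypothetical "instanceVBO" of interleaved per-instance position and colour:

```cpp
// Attributes 2 and 3 advance once per instance rather than per vertex,
// so one cube mesh is drawn cubeCount times with different data.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);   // vec3 position + vec3 colour
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 24, (void*)0);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 24, (void*)12);
glEnableVertexAttribArray(2);
glEnableVertexAttribArray(3);
glVertexAttribDivisor(2, 1);
glVertexAttribDivisor(3, 1);
glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr, cubeCount);
```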
First, you really should be looking at OpenGL 3+ with GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations create the mesh on the CPU side. This technique looks at all of the block positions and builds a vertex buffer object containing the triangles of all of the exposed faces. The VBO is only regenerated when the voxels change and is persisted between frames. An ideal implementation would combine coplanar faces with the same texture into larger faces. The exposed-face test at the heart of this is sketched below.
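A minimal sketch of that test (isSolid, emitFace, the face constants, and SIZE are hypothetical pieces of the surrounding voxel engine):

```cpp
// Walk the voxel grid and emit a face only where a solid cell borders
// an empty one; interior faces never reach the GPU.
for (int z = 0; z < SIZE; ++z)
    for (int y = 0; y < SIZE; ++y)
        for (int x = 0; x < SIZE; ++x) {
            if (!isSolid(x, y, z)) continue;
            if (!isSolid(x + 1, y, z)) emitFace(x, y, z, POS_X);
            if (!isSolid(x - 1, y, z)) emitFace(x, y, z, NEG_X);
            if (!isSolid(x, y + 1, z)) emitFace(x, y, z, POS_Y);
            if (!isSolid(x, y - 1, z)) emitFace(x, y, z, NEG_Y);
            if (!isSolid(x, y, z + 1)) emitFace(x, y, z, POS_Z);
            if (!isSolid(x, y, z - 1)) emitFace(x, y, z, NEG_Z);
        }
```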