Why are simple 2D shapes drawn in CAD editors in segments

After looking at some programs for 2d modeling, I noticed that all primitives are drawn as segments (see attached picture).
For example, why is the circle drawn as a polygon? Wouldn't it be much easier to create a shader that draws a circle regardless of the magnification (scaling)?
I am also curious: are these segments drawn individually, or as one draw call with a special shader for each shape?
What is the main reason the developers chose this approach? What are they trying to achieve?

3D graphics APIs support only triangles, points and line segments - there are no built-in primitives for drawing a circle or anything like it. Therefore, the first two reasons for drawing every type of curve as a polyline are uniformity (ANY type of curve can be rendered as a set of line segments) and performance (line segments are the only curve-like primitive natively supported by the GPU). Drawing primitives of the same type using the same universal GLSL program allows an optimized engine to render many curves at once and reduce the overall number of draw calls.
Moreover, you don't actually need a special GLSL program to avoid rough tessellation - just split your curve into more segments so that it appears smooth on screen. You will have to balance performance against quality, though - ideally, the tessellation level should change dynamically based on the zoom level and be applied only to figures visible on screen. This is not trivial to implement, but it is much more straightforward for 2D drawings than for 3D.
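As a rough sketch of what zoom-dependent tessellation can look like on the CPU (the pixel-error threshold, the minimum segment count and the vertex layout below are my own assumptions, not something taken from any particular CAD engine):

    #include <math.h>
    #include <stdlib.h>

    /* Tessellate a circle into a polyline whose chord error stays below
       max_error_px at the current zoom. Writes (x, y) pairs into *out (caller
       frees) and returns the vertex count; draw the result as GL_LINE_STRIP. */
    static int tessellate_circle(float cx, float cy, float radius,
                                 float pixels_per_unit, float max_error_px,
                                 float **out)
    {
        const float pi = 3.14159265358979f;
        float r_px = radius * pixels_per_unit;      /* on-screen radius */
        int n = 8;                                  /* lower bound on segments */

        /* sagitta of a chord spanning angle a is r*(1 - cos(a/2));
           keep it below the allowed error measured in pixels */
        if (r_px > max_error_px) {
            int needed = (int)ceilf(pi / acosf(1.0f - max_error_px / r_px));
            if (needed > n) n = needed;
        }

        float *v = malloc(sizeof(float) * 2 * (n + 1));
        for (int i = 0; i <= n; ++i) {              /* repeat first vertex to close */
            float a = 2.0f * pi * (float)i / (float)n;
            v[2 * i + 0] = cx + radius * cosf(a);
            v[2 * i + 1] = cy + radius * sinf(a);
        }
        *out = v;
        return n + 1;
    }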
GLSL programs allow implementing various tricks, but rendering a fixed-width curve would require a Tessellation Shader (or at least a Geometry Shader), which WebGL doesn't support, or applying some dirty tricks! So I wouldn't say that drawing a thin circle of reliable quality via a GLSL program is that simple.
It is possible, though, to render simple shapes such as a filled circle using just a Fragment Shader: draw a rectangle and discard the fragments that fall outside the circle according to the circle equation. But that would be just a circle, a single solid circle, while there are a lot of other figures and combinations of them! Hence, again - uniformity and simplicity.
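For completeness, the discard-based circle mentioned above could be written as a fragment shader along these lines (a sketch; the varying and uniform names are my assumptions, and the host is expected to draw a quad whose local coordinates span [-1,1] in v_pos):

    /* Fragment shader source (GLSL 1.20 / ES-friendly) that fills a unit circle.
       The host draws a quad covering [-1,1]^2 in local space and passes that
       local position through the varying v_pos. */
    static const char *circle_fs =
        "#ifdef GL_ES\n"
        "precision mediump float;\n"
        "#endif\n"
        "varying vec2 v_pos;              /* quad-local coordinates in [-1,1] */\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    if (dot(v_pos, v_pos) > 1.0)   /* outside x^2 + y^2 = 1 */\n"
        "        discard;\n"
        "    gl_FragColor = u_color;\n"
        "}\n";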
Indeed, there are applications implementing special GLSL programs for a limited set of commonly used figures, but these require a lot of development.

Related

OpenGL terrain and tessellation shaders

Two questions:
How do modern games set up their terrain vertices? Do they attach a height map image to a texture and then use it to set each vertex position, or do they just use 3D software (like Blender) to create a file that contains these vertices and then read it into a VBO? Please correct me if my grasp is incorrect.
How important are tessellation shaders to this process? Do they just save performance or do they also change the viewer's scene?
The two most common approaches I have seen are heightmaps, in which the RGB value is used for the surface normal and the alpha value for the height, and procedural terrain generation using a method such as Perlin noise, which uses a pseudo-random function and samples the surrounding values to even out the heights.
Tessellation shaders are used primarily to decrease the workload by simplifying far-away meshes whose extra detail you would not notice. They do change the viewer's scene, but in a way that tries not to be noticed.
Generally, vertex heights are generated procedurally in shaders.
"Procedurally" in computer graphics means by some mathematical algorithm; Perlin noise is one of the methods used for this kind of generation. A common strategy is to keep the height map small and produce the varied heights procedurally, because the height map is a texture and a large one costs bandwidth.
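For illustration, a vertex shader that displaces a flat grid by a small height map might look like this (a minimal sketch; the attribute and uniform names, the use of the grid position directly as the texture coordinate, and the single-channel height texture are all assumptions):

    /* Vertex shader that displaces a flat xz-plane grid by a height texture.
       u_scale controls the vertical range; a_xz doubles as the UV coordinate. */
    static const char *terrain_vs =
        "#version 150\n"
        "in vec2 a_xz;                    /* grid position in [0,1]^2 */\n"
        "uniform sampler2D u_height;      /* small height map */\n"
        "uniform mat4 u_mvp;\n"
        "uniform float u_scale;\n"
        "void main() {\n"
        "    float h = texture(u_height, a_xz).r * u_scale;\n"
        "    gl_Position = u_mvp * vec4(a_xz.x, h, a_xz.y, 1.0);\n"
        "}\n";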
Tessellation shaders are used for adaptive tessellation. You can think of it as a kind of level-of-detail mechanism. The smoothness of the terrain depends on how many triangles are used to represent a patch of it. Depending on a patch's distance from the camera, developers can decide on the fly what the tessellation level should be and generate more triangles for patches close to the viewer. This is a way to improve detail on the terrain. Everything here happens on the GPU, so it is extremely efficient.
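A tessellation control shader making that distance-based decision could look roughly like the following (a sketch with assumed names; a real implementation would compute per-edge levels shared with neighbouring patches to avoid cracks):

    /* Tessellation control shader for quad patches: choose a tessellation level
       from the patch centre's distance to the camera. Requires OpenGL 4.0+. */
    static const char *terrain_tcs =
        "#version 400\n"
        "layout(vertices = 4) out;\n"
        "uniform vec3 u_camera_pos;\n"
        "float level(float dist) {\n"
        "    return clamp(64.0 / (1.0 + dist * 0.05), 1.0, 64.0);\n"
        "}\n"
        "void main() {\n"
        "    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;\n"
        "    if (gl_InvocationID == 0) {\n"
        "        vec3 centre = 0.25 * (gl_in[0].gl_Position.xyz + gl_in[1].gl_Position.xyz\n"
        "                            + gl_in[2].gl_Position.xyz + gl_in[3].gl_Position.xyz);\n"
        "        float l = level(distance(centre, u_camera_pos));\n"
        "        gl_TessLevelOuter[0] = l; gl_TessLevelOuter[1] = l;\n"
        "        gl_TessLevelOuter[2] = l; gl_TessLevelOuter[3] = l;\n"
        "        gl_TessLevelInner[0] = l; gl_TessLevelInner[1] = l;\n"
        "    }\n"
        "}\n";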
Before tessellation shaders became accessible, there were algorithms like ROAM that did adaptive tessellation on the CPU.
Have a look at the http://vterrain.org/ project; you will find the state of the art in terrain techniques collected there.

Programming my own triangle rasterization for OpenGL?

I am trying to render rounded triangles to increase performance. To illustrate what I mean, see the picture below:
I tried it on the CPU; now, is there a way to move this algorithm to the GPU somehow? Can I change the code of the method that invokes the fragment shader?
By the way, if I can do it, which programming language would I need to rewrite it in?
I am using an OpenGL 2.1 GPU with just 20-30 GB/s of memory bandwidth.
Read the paper Resolution Independent Curve Rendering using Programmable Graphics Hardware by Charles Loop and Jim Blinn.
Short version: assuming you have an efficient inside/outside test for your curve, render the enclosing hull shape as triangle(s), use a fragment shader to discard the pixels outside the curve.
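For the quadratic curve segments covered in that paper, the fragment-side inside/outside test is short. A sketch that runs on GLSL 1.20-class hardware such as OpenGL 2.1 (the varying name is my assumption):

    /* Fragment shader implementing the Loop/Blinn inside test for a quadratic
       Bezier segment: the triangle's vertices carry (u,v) = (0,0), (0.5,0), (1,1),
       and a fragment lies inside the curve where u^2 - v <= 0. */
    static const char *loop_blinn_fs =
        "varying vec2 v_uv;\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    if (v_uv.x * v_uv.x - v_uv.y > 0.0)\n"
        "        discard;                 /* outside the curve */\n"
        "    gl_FragColor = u_color;\n"
        "}\n";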
I second Aeluned's concern that transferring the algorithm to the GPU won't automatically make it faster.
I'm not sure exactly what you're up to, but it seems a bit dubious. You can actually end up hurting performance trying to do some of these custom calculations in a shader to render a circle or ellipse.
Modern GPU hardware can push billions of triangles a second. You're probably splitting hairs here.
In any case, if you want to 'fake' the geometry, this may be of interest to you: https://alfonse.bitbucket.io/oldtut/Illumination/Tutorial%2013.html
Well, on OpenGL 2.1 you do not have geometry shaders (those need 3.2+), so you can forget about doing this on the GPU.
You cannot improve the rasterization performance of convex shapes with your curved triangles:
- the complexity of rasterizing any convex polygon is the same as for any triangle of the same area
- the difference is only in:
  - the number of vertices passed and the memory transfer, which would be better with your rounded triangles (especially via a geometry shader)
  - the number of boundary lines and their rasterization for the filling, which would be worse with your rounded triangles (more triangles need to be joined instead of a single shape polygon)
So it is not a good idea to implement this for performance reasons in your case.
The only use I can think of for this is to ease the manual generation of shapes.
In that case, just write a function glRoundedTriangle(....)
which generates the correct vertices, colors, normals and texture coordinates from the given input parameters.
What it should look like is unknown, because you did not specify the rounded triangle's geometry/shape or its input parameters (for example 3 points + 3 signed corner radii?).
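One hypothetical sketch, assuming 3 corner points plus a single fillet radius, emitting an immediate-mode triangle fan (fine for OpenGL 2.1):

    #include <math.h>
    #include <GL/gl.h>

    /* Hypothetical helper (the question never defined the exact shape or the
       parameters): draw a triangle whose three corners are rounded off with a
       fillet of radius r, as a single GL_TRIANGLE_FAN around the centroid. */
    static void glRoundedTriangle(const float p[3][2], float r, int arc_steps)
    {
        float cx = (p[0][0] + p[1][0] + p[2][0]) / 3.0f;
        float cy = (p[0][1] + p[1][1] + p[2][1]) / 3.0f;
        float first_x = 0.0f, first_y = 0.0f;

        glBegin(GL_TRIANGLE_FAN);
        glVertex2f(cx, cy);                           /* fan centre */
        for (int k = 0; k < 3; ++k) {
            const float *c = p[k];
            const float *a = p[(k + 2) % 3];          /* previous corner */
            const float *b = p[(k + 1) % 3];          /* next corner     */

            /* unit directions from this corner towards its two neighbours */
            float da[2] = { a[0] - c[0], a[1] - c[1] };
            float db[2] = { b[0] - c[0], b[1] - c[1] };
            float la = sqrtf(da[0]*da[0] + da[1]*da[1]);
            float lb = sqrtf(db[0]*db[0] + db[1]*db[1]);
            da[0] /= la; da[1] /= la;  db[0] /= lb; db[1] /= lb;

            /* fillet circle tangent to both edges: centre lies on the bisector */
            float half = 0.5f * acosf(da[0]*db[0] + da[1]*db[1]);
            float t    = r / tanf(half);              /* distance to tangent points */
            float mx = da[0] + db[0], my = da[1] + db[1];
            float ml = sqrtf(mx*mx + my*my);
            float ox = c[0] + mx / ml * (r / sinf(half));
            float oy = c[1] + my / ml * (r / sinf(half));

            /* walk the arc from the tangent point on edge (c,a) to the one on (c,b) */
            float va[2] = { c[0] + da[0]*t - ox, c[1] + da[1]*t - oy };
            float vb[2] = { c[0] + db[0]*t - ox, c[1] + db[1]*t - oy };
            for (int s = 0; s <= arc_steps; ++s) {
                float u  = (float)s / (float)arc_steps;
                float dx = (1.0f - u)*va[0] + u*vb[0];
                float dy = (1.0f - u)*va[1] + u*vb[1];
                float dl = sqrtf(dx*dx + dy*dy);
                float vx = ox + dx/dl * r, vy = oy + dy/dl * r;
                if (k == 0 && s == 0) { first_x = vx; first_y = vy; }
                glVertex2f(vx, vy);
            }
        }
        glVertex2f(first_x, first_y);                 /* close the outline */
        glEnd();
    }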
To improve performance in OpenGL, use VBOs/VAOs.

GL_POINT and GL_LINES - real use?

I've been using OpenGL for some time now for making 3D applications, but I never really understood the use of the GL_POINT and GL_LINES primitive drawing types for 3D games in the production phase.
(Where) are point and line primitives in OpenGL still used in modern games?
You know, OpenGL is not just for games; there are other kinds of programs besides games. Think of CAD programs or map editors, where wireframes are still very useful.
GL_POINTS are used in games for point sprites (either via the pointsprite functionality or by generating a quad from a point in the geometry shader) both for "sparkle" effects and volumetric clouds.
They are also used in some special algorithms just when, well... when points are needed: for example, building histograms in the geometry shader, as described in a chapter of one of the later GPU Gems books, or GPU instance culling via transform feedback.
GL_LINES have little use in games (mostly useful for CAD or modelling apps). Besides not being needed often, if they are needed, you will normally want lines with a thickness greater than 1, which is not well supported (read as: fast) on all implementations.
In such a case, one usually draws thick lines with triangle strips.
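For example, a 2D segment can be expanded into a thickness-wide quad on the CPU like this (a minimal sketch; the vertex layout is my assumption):

    #include <math.h>

    /* Expand a 2D segment (x0,y0)-(x1,y1) into a quad of the given thickness,
       written as 4 vertices (8 floats) in GL_TRIANGLE_STRIP order. */
    static void thick_line_quad(float x0, float y0, float x1, float y1,
                                float thickness, float out[8])
    {
        float dx = x1 - x0, dy = y1 - y0;
        float len = sqrtf(dx*dx + dy*dy);
        /* unit normal of the segment, scaled to half the thickness */
        float nx = -dy / len * 0.5f * thickness;
        float ny =  dx / len * 0.5f * thickness;
        out[0] = x0 + nx; out[1] = y0 + ny;
        out[2] = x0 - nx; out[3] = y0 - ny;
        out[4] = x1 + nx; out[5] = y1 + ny;
        out[6] = x1 - nx; out[7] = y1 - ny;
    }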
Who ever said those primitives were used in modern games?
GL_LINES is critical for wireframe views in 3D modeling tools.
(Where) are point and line primitives in OpenGL still used in modern games?
Where do you want them to be used?
Under standard methods, points can be used to build point sprites, which are 2D flatcards that always face the camera and are of a particular size. They are always square in window-space. Sadly, the OpenGL specification makes using them somewhat dubious, as point sprites are clipped based on the center of the point, not the size of the two triangles that are used to render it.
Lines are perfectly reasonable for line drawing. Once upon a time, lines weren't available in consumer hardware, but they have been around for many years now. Of course, antialiased line rendering (GL_LINE_SMOOTH) is another matter.
More importantly is the interaction of these things with geometry shaders. You can convert points into a quad. Or a triangle. Or whatever you want, really. Each "point" is just an execution of the geometry shader. You can have points which contain the position and radius of a sphere, and the geometry shader can output a window-aligned quad that is the appropriate size for the fragment shader to do some raytracing logic on it.
GL_POINTS just means "one vertex per geometry shader". GL_LINES means "two vertices per geometry shader." How you use it is up to you.
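As an illustration of the point-to-quad case (a sketch with assumed uniform and varying names; the offsets are added in clip space purely for brevity - offsetting in view space before projection is the more common way to get a real billboard):

    /* Geometry shader that expands each GL_POINTS vertex into a screen-aligned
       quad (triangle strip); requires OpenGL 3.2+. Offsets are added in clip
       space, so the on-screen size shrinks with distance. */
    static const char *point_to_quad_gs =
        "#version 150\n"
        "layout(points) in;\n"
        "layout(triangle_strip, max_vertices = 4) out;\n"
        "uniform vec2 u_half_size;        /* half-size in clip-space units */\n"
        "out vec2 g_uv;\n"
        "void main() {\n"
        "    vec4 c = gl_in[0].gl_Position;\n"
        "    g_uv = vec2(0.0, 0.0); gl_Position = c + vec4(-u_half_size.x, -u_half_size.y, 0.0, 0.0); EmitVertex();\n"
        "    g_uv = vec2(1.0, 0.0); gl_Position = c + vec4( u_half_size.x, -u_half_size.y, 0.0, 0.0); EmitVertex();\n"
        "    g_uv = vec2(0.0, 1.0); gl_Position = c + vec4(-u_half_size.x,  u_half_size.y, 0.0, 0.0); EmitVertex();\n"
        "    g_uv = vec2(1.0, 1.0); gl_Position = c + vec4( u_half_size.x,  u_half_size.y, 0.0, 0.0); EmitVertex();\n"
        "    EndPrimitive();\n"
        "}\n";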
I'd say for debugging purposes, but that is just from my own perspective.
Some primitives can be used in areas where you don't think they can be applied, such as a particle system.
I agree with Pompe de velo about lines being useful for debugging. They can be useful when debugging AI and collision detection algorithms, so that you can visualize the data being used by the AI or collision detection. Some example uses for AI: lines can be used to show AI paths or path meshes, to show the steering data the AI is using, or to show what an AI is aiming at. The data could be displayed in text form, but sometimes it is easier to see it in visual form.
In most cases particle systems are based on GL_POINTS: considering that there can be a huge number of particles on the screen, it would be very expensive to use 4 vertices per particle, so GL_POINTS solves this problem.
GL_LINES is good for debugging purposes, and wireframe mode can be used in various cases. As mentioned above, in CAD apps - but if you're interested in gamedev use, it's good for a scene editor.
In terms of collision detection, they come in handy when you want to visualize bounding volumes (boxes, spheres, k-DOPs) and contact manifolds in wireframe mode. Setting the colour of these primitives based on the status of collisions is incredibly useful as well.

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably immediately say that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture. You then encode each texture coordinate so that its first two components are the usual 2D position and its third component selects the texture to reference. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
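A possible setup for such a stacked texture (a sketch; the function and parameter names are mine, and it assumes an environment where glTexImage3D from OpenGL 1.2+ is available via your loader):

    #include <GL/gl.h>   /* plus whatever extension loader exposes glTexImage3D */

    /* Stack layer_count same-sized RGBA8 images into one 3D texture so that the
       third texture coordinate selects (and blends between) material layers. */
    static GLuint build_material_stack(const unsigned char *const *layers,
                                       int width, int height, int layer_count)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        /* allocate, then upload each 2D image as one slice of the 3D texture */
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, layer_count,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        for (int i = 0; i < layer_count; ++i)
            glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, i, width, height, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE, layers[i]);
        return tex;
    }

Each vertex then gets (u, v, (type + 0.5) / layer_count) as its texture coordinate, and linear filtering along the third axis provides the cross-fade between adjacent layers.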
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
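In OpenGL state terms, that two-phase setup is roughly the following (a sketch; the two callbacks stand in for the application's own drawing code):

    #include <GL/gl.h>

    /* Depth prepass followed by one additively blended colour pass per texture,
       with an equality depth test so only the front-most surface accumulates. */
    static void draw_blended_passes(void (*draw_geometry)(void),
                                    void (*draw_faces_using)(int texture_index),
                                    int texture_count)
    {
        /* pass 0: depth only - reject all colour writes */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        draw_geometry();

        /* colour passes: identical transforms, depth must match exactly */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);
        glDepthFunc(GL_EQUAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);          /* additive */
        for (int i = 0; i < texture_count; ++i)
            draw_faces_using(i);              /* faces + fades for texture i */

        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
    }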
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
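For the spherical case, "follow the normal until you strike the bounding geometry" could be implemented like this (a sketch; it assumes a unit-length normal and a vertex inside the bounding sphere):

    #include <math.h>

    /* Follow the vertex normal out to a bounding sphere (centre c, radius R) and
       return longitude/latitude texture coordinates of the hit point.
       p is the vertex position, n its unit normal; uv receives the result. */
    static void spherical_uv(const float p[3], const float n[3],
                             const float c[3], float R, float uv[2])
    {
        const float pi = 3.14159265358979f;
        /* solve |(p - c) + t*n|^2 = R^2 for the positive root t */
        float d[3] = { p[0] - c[0], p[1] - c[1], p[2] - c[2] };
        float b  = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
        float cc = d[0]*d[0] + d[1]*d[1] + d[2]*d[2] - R*R;
        float t  = -b + sqrtf(b*b - cc);      /* vertex assumed inside the sphere */

        float hx = d[0] + t*n[0], hy = d[1] + t*n[1], hz = d[2] + t*n[2];
        uv[0] = 0.5f + atan2f(hz, hx) / (2.0f * pi);   /* longitude */
        uv[1] = 0.5f + asinf(hy / R) / pi;             /* latitude  */
    }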
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)

Why is there no circle or ellipse primitive in OpenGL?

Circles are one of the basic geometric entities. Yet there are no primitives defined in OpenGL for them, unlike lines or polygons. Why is that? It's a little annoying to include custom headers for this all the time!
Any specific reason to omit it?
While circles may be basic shapes, they aren't as basic as points, lines or triangles when it comes to rasterisation. The first graphics cards with 3D acceleration were designed to do one thing very well: rasterise triangles (and lines and points, because they were trivial to add). Adding any more complex shapes would have made the cards a lot more expensive while adding only little functionality.
But there's another reason for not including circles/ellipses. They don't connect. You can't build a 3D model out of them and you can't connect triangles to them without adding gaps or overlapping parts. So for circles to be useful you also need other shapes like curves and other more advanced surfaces (e.g. NURBS). Circles alone are only useful as "big points" which can also be done with a quad and a circle shaped texture, or triangles.
If you are using "custom headers" for circles you should be aware that those probably create a triangle model that form your "circles".
Because historically, video cards have rendered points, lines, and triangles.
You calculate curves using short enough lines so the video card doesn't have to.
Because graphics cards operate on 3-dimensional points, lines and triangles. A circle requires curves or splines. It cannot be perfectly represented by a "normal" 3D primitive, only approximated as an N-gon (so it will look like a circle at a certain distance). If you want a circle, write the routine yourself (it isn't hard to do). Either draw it as an N-gon, or make a square (2 triangles) and cut a circle out of it using a fragment shader (you can get a perfect circle this way).
You could always use gluSphere (if a three-dimensional shape is what you're looking for).
If you want to draw a two-dimensional circle you're stuck with custom methods. I'd go with a triangle fan.
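For reference, a triangle-fan circle is only a few lines (immediate-mode style to keep the sketch short; the segment count is arbitrary):

    #include <math.h>
    #include <GL/gl.h>

    /* Draw a filled circle as a GL_TRIANGLE_FAN around its centre.
       More segments give a smoother silhouette. */
    static void draw_circle(float cx, float cy, float radius, int segments)
    {
        const float pi = 3.14159265358979f;
        glBegin(GL_TRIANGLE_FAN);
        glVertex2f(cx, cy);                           /* fan centre */
        for (int i = 0; i <= segments; ++i) {         /* <= closes the fan */
            float a = 2.0f * pi * (float)i / (float)segments;
            glVertex2f(cx + radius * cosf(a), cy + radius * sinf(a));
        }
        glEnd();
    }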
The primitives are called primitives for a reason :)