I would like to implement typical CAD software and therefore need an edge detection algorithm to draw the silhouettes of various meshes. The silhouette includes the outline, ridges, and creases of the objects. Here is an example of a cube created in Blender where the silhouette is made of thick orange lines:
I want to use a geometrical approach where wireframes are drawn on top of the objects and interior lines like diagonals are omitted. The wireframe rendering is described here. In this article, the geometry shader is used to draw the wireframe.
It is also explained that one has to set a per-vertex attribute to decide if a line should be omitted or not.
My question is: How could I decide which lines I have to omit? I use OpenGL as rendering API by the way.
EDIT: To clarify, I really want to draw just the edges that constitute the silhouette, but not any diagonals. Here is an example of what I want to achieve:
From your sample pictures I infer that you want to enhance
- the silhouette edges, i.e. those that belong to the outline of the projection,
- the salient edges, i.e. those that join two angled faces.
The former are determined by looking at the orientation of the faces: a face is front-facing when the observer lies outside the half-space it delimits, and back-facing otherwise. A silhouette edge is one shared by a front-facing face and a back-facing face. Note that this is a viewer-dependent property.
A salient edge is such that it joins two faces forming a sufficiently large angle that the connection is considered non-smooth. (The angle threshold is up to you.) This is a viewer-independent property.
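Both tests reduce to a couple of dot products. Here is a minimal CPU-side sketch in Python (hypothetical helper names; it assumes outward-pointing face normals and a view direction pointing from the eye toward the scene):

```python
# Illustrative sketch of the two edge tests described above.
# Normals and the view direction are plain 3-tuples.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_front_facing(face_normal, view_dir):
    # A face is front-facing when its outward normal points back
    # toward the observer (assumption: view_dir goes eye -> scene).
    return dot(face_normal, view_dir) < 0.0

def is_silhouette_edge(n1, n2, view_dir):
    # Silhouette edge: shared by one front-facing and one back-facing
    # face. Viewer-dependent.
    return is_front_facing(n1, view_dir) != is_front_facing(n2, view_dir)

def is_salient_edge(n1, n2, angle_threshold_deg=30.0):
    # Salient (crease) edge: the angle between the unit face normals
    # exceeds a chosen threshold. Viewer-independent.
    cos_angle = max(-1.0, min(1.0, dot(n1, n2)))
    return math.degrees(math.acos(cos_angle)) > angle_threshold_deg
```

For a cube, every edge joins faces at 90 degrees, so with any reasonable threshold all twelve edges come out salient, which matches the Blender example.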
Consider freestyle
https://www.blender.org/manual/en/render/freestyle/index.html
> Freestyle is an edge- and line-based non-photorealistic (NPR) rendering engine. It relies on mesh data and z-depth information to draw lines on selected edge types. Various line styles can be added to produce artistic (“hand drawn”, “painted”, etc.) or technical (hard line) looks.
I haven't used it yet, but am planning to give it a try for creating line drawings from 3d models.
Related
What's the best way to draw part of a sphere in, for example, OpenGL, considering I have the vertices of the boundaries of the region that should be rendered?
I'm drawing the sphere using an octahedron transformation (described here: https://stackoverflow.com/a/7687312/1840136), and I can draw arcs that represent the boundaries in the same way, by creating intermediate vertices and then "normalizing" them.
To create triangles out of the plane I can use something from this answer: https://math.stackexchange.com/a/1814637, but the thing is, the result will still be flat. To get part of a sphere, I definitely need another bunch of intermediate vertices for additional triangles. What is the algorithm for such a task? And, as I already have the triangles forming the original sphere, can I use that data somehow?
I have a very simple case. I want to draw the outline of an object, in this case I think they'll only be spheres, but I'd like to not rely on this.
I've found methods such as:
Draw the object to the stencil buffer
Turn on wireframe mode
Draw the object with thick lines
Draw the real object on the top
The problem I have with this method is that my models have a lot of vertices, and this requires me to draw it three times. I'm getting some significant frame rate drops.
Are there other ways to do this? My next guess would be to draw circles on the final render as a post-process effect, seeing as I'm only looking at spheres. But I'd much much rather do this for more than just spheres.
Is there something I can do in an existing shader to outline?
I'd also like the outline to appear when the object is behind others.
I'm using OpenGL 4.3.
I know 3 ways of doing contour rendering:
Using the stencil buffer
The first one is a slightly modified version of the one you described: you first render your object as normal with the stencil buffer on, then you slightly scale it and render it in a plain color wherever the stencil buffer is not filled. You can find an explanation of this technique here.
Using image processing techniques
The second one is a post-process step, where you look for edges using image processing filters (like the Sobel operator) and you compose your rendering with your contour detection result. The good thing with the Sobel operator is that it is separable; this means you can do the detection in two 1D passes, which is more efficient than doing one 2D pass.
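To illustrate the separability, here is a small pure-Python sketch (illustrative only, not production image code): the 2D Sobel x-kernel [[1,0,-1],[2,0,-2],[1,0,-1]] factors into a vertical smoothing pass [1, 2, 1] followed by a horizontal derivative pass [1, 0, -1]. The kernels are applied as correlation here; the sign convention does not matter for edge detection.

```python
# Separable Sobel: two 1D passes instead of one 3x3 2D pass.
# Images are plain lists of lists of numbers (zero-padded at borders).

def conv_rows(img, kernel):
    # 1D pass along each row.
    h, w, k = len(img), len(img[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(kernel[i] * img[y][x + i - k]
                            for i in range(len(kernel))
                            if 0 <= x + i - k < w)
    return out

def conv_cols(img, kernel):
    # 1D pass along each column.
    h, w, k = len(img), len(img[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(kernel[i] * img[y + i - k][x]
                            for i in range(len(kernel))
                            if 0 <= y + i - k < h)
    return out

def sobel_x(img):
    # Smooth vertically, then differentiate horizontally.
    return conv_rows(conv_cols(img, [1, 2, 1]), [1, 0, -1])
```

Running `sobel_x` on an image with a vertical step produces a strong response along the step and zero in flat regions, which is exactly what the contour pass composites over the rendering.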
Using the geometry shader
Last but not least, you can use the geometry shader to extract the silhouette of your mesh. The idea is to use the adjacent vertices of a triangle to detect if one edge of this triangle (let's call it t0) is a contour.
To do this, for each edge ei of t0:
build a new triangle ti using the vertices of ei and its associated vertex,
compute the normal ni of ti and the normal n0 of t0, and transform them both into view space (the silhouette depends on the point of view),
compute the dot product between n0 and ni. If its value is negative, this means that the normals are in opposite directions and the edge ei is a silhouette edge.
You then build a quad around ei, emit each of its vertices and color them the way you want in the fragment shader.
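Sketched on the CPU in Python rather than GLSL (hypothetical helper names), the per-edge test looks roughly like this; in the real geometry shader the adjacent vertex comes from a GL_TRIANGLES_ADJACENCY input, and vertices are assumed to be in view space already:

```python
# CPU-side sketch of the per-edge contour test the geometry shader performs.
# Assumes consistent counter-clockwise winding across the mesh, so the
# neighboring triangle winds the shared edge in the opposite direction.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangle_normal(p0, p1, p2):
    # Un-normalized face normal; only its direction matters for the test.
    return cross(sub(p1, p0), sub(p2, p0))

def edge_is_contour(t0, edge, adjacent_vertex):
    # t0: the triangle (p0, p1, p2); edge: two of its vertices;
    # adjacent_vertex: the third vertex of the neighboring triangle ti.
    n0 = triangle_normal(*t0)
    # The neighbor traverses the shared edge backwards, hence the swap.
    ni = triangle_normal(edge[1], edge[0], adjacent_vertex)
    # Negative dot product: the two faces point in opposite directions
    # in view space, so the shared edge lies on the silhouette.
    return dot(n0, ni) < 0.0
```

The GLSL version does the same arithmetic per input primitive and then emits the quad's vertices for any edge that passes the test.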
This is the basic idea of this algorithm. Using only this will result in aliased edges, with holes between them, but this can be improved. You can read this paper, and this blog post, for further information.
In my experience you get good results if you render the outlined object in white (unlit) to a texture as big as the final framebuffer, then draw a framebuffer-sized quad with that texture and have the fragment shader blur or otherwise process it and apply the desired color.
I have an example here
I am new to OpenGL.
I want to draw an object which has 4 vertices. It is like a quad, but for the bottom side I need to draw an arc. The other sides are connected with straight lines. I want to fill the object.
Can anybody guide me to do this please?
Triangulate your shape and render those triangles any way you prefer (immediate mode / VBO / VAO).
Convert your arc into line segments. The number of vertices depends on the level of detail/smoothness you want to achieve.
Triangulate the shape. With simple shapes like this one, you can do it manually in code (draw it on paper like I did and write down the vertex indices that form each triangle). With more complicated shapes you can use a triangulation algorithm (implementations are available on the Net). When shapes are even more complicated (e.g. an animal outline), you might need special 2D/3D modelling software just to make them, and it will do the triangulation there.
Render the triangles.
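A minimal Python sketch of the steps above for this particular shape (the arc center, radius, and angles are made-up illustration values, and the fan triangulation is only valid because the outline is convex):

```python
# Hypothetical sketch: a quad whose bottom side is an arc.
# 1) sample the arc into segments, 2) triangulate, 3) the triangle
# index list is what you would upload and render.

import math

def arc_points(cx, cy, radius, start_deg, end_deg, segments):
    # Sample the arc into `segments` straight pieces (segments + 1 points).
    pts = []
    for i in range(segments + 1):
        t = math.radians(start_deg + (end_deg - start_deg) * i / segments)
        pts.append((cx + radius * math.cos(t), cy + radius * math.sin(t)))
    return pts

def fan_triangulate(outline):
    # Valid for convex outlines: every triangle shares the first vertex.
    return [(0, i, i + 1) for i in range(1, len(outline) - 1)]

# Outline: the two top corners, then the bottom arc from right to left.
outline = [(-1.0, 1.0), (1.0, 1.0)] + arc_points(0.0, 1.0, 2.0,
                                                 -45.0, -135.0, 8)
triangles = fan_triangulate(outline)
```

The resulting `outline` and `triangles` lists map directly onto a VBO of positions plus an index buffer.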
Greetings all,
As seen in the image , I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make this look good (to convey the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors - and you have no surfaces. However #roe pointed out that normal vectors are actually per vertex in OpenGL, and as such, any POLYLINE can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as #Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth?
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
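For the coplanar case, that amounts to rotating each vertex's 2D tangent by 90 degrees and appending z = 0. A small illustrative sketch (hypothetical function name):

```python
# Sketch: per-vertex lighting normals for a contour polyline lying in
# the XY plane. The 2D edge normal becomes the 3D normal with z = 0.

import math

def polyline_normals_2d(points):
    # points: [(x, y), ...] along one contour line.
    normals = []
    for i, (x, y) in enumerate(points):
        # Tangent from the previous to the next vertex (clamped at ends).
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        # Rotate the tangent 90 degrees to get the 2D normal; z stays 0.
        normals.append((-dy / length, dx / length, 0.0))
    return normals
```

Each tuple would be passed as the per-vertex normal alongside the GL_LINE_STRIP positions.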
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them... can you adjust the density of the contours? E.g. one contour line per 5ft height difference instead of per 1ft, or whatever the units are, depending on what you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading then you would normally convert the contours to a solid. The usual way to do that is to build a mesh by setting up 4 corner points at zero height at (or beyond) the bounds, then dropping the contours into the mesh and getting the mesh to triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but there exist libraries for doing it. The best ones I know of are those based on the Guibas and Stolfi papers, since they're pretty optimal.
To generate the normals you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
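As an illustration of that step (a sketch, not the poster's actual code), the smoothing over adjacent faces can be done by accumulating each triangle's cross-product normal onto its vertices and renormalizing at the end:

```python
# Sketch: smooth per-vertex normals for a triangulated terrain hull.
# Each face's cross-product normal is accumulated onto its three
# vertices, then the sums are renormalized (what you'd feed glNormal).

import math

def smooth_vertex_normals(vertices, triangles):
    # vertices: [(x, y, z), ...]; triangles: [(i0, i1, i2), ...],
    # assumed consistently CCW wound so facing is correct.
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in triangles:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        u = [p1[k] - p0[k] for k in range(3)]
        v = [p2[k] - p0[k] for k in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (i0, i1, i2):
            for k in range(3):
                acc[idx][k] += n[k]
    normals = []
    for n in acc:
        length = math.sqrt(sum(c * c for c in n)) or 1.0
        normals.append(tuple(c / length for c in n))
    return normals
```

Because the un-normalized cross product is proportional to triangle area, larger faces automatically get more weight in the average, which is usually what you want for terrain.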
In the old days you used to make a display list out of the result, but the newer way is to build a vertex array. If you want to be extra flash you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art: good for games, not so good for CAD.
Circles are one of the basic geometric entities. Yet there are no primitives defined in OpenGL for them, as there are for lines or polygons. Why? It's a little annoying to include custom headers for this all the time!
Is there any specific reason to omit them?
While circles may be basic shapes, they aren't as basic as points, lines, or triangles when it comes to rasterisation. The first graphics cards with 3D acceleration were designed to do one thing very well: rasterise triangles (and lines and points, because they were trivial to add). Adding any more complex shapes would have made the cards a lot more expensive while adding only little functionality.
But there's another reason for not including circles/ellipses: they don't connect. You can't build a 3D model out of them, and you can't connect triangles to them without adding gaps or overlapping parts. So for circles to be useful you also need other shapes like curves and more advanced surfaces (e.g. NURBS). Circles alone are only useful as "big points", which can also be done with a quad and a circle-shaped texture, or with triangles.
If you are using "custom headers" for circles you should be aware that those probably create a triangle model that form your "circles".
Because historically, video cards have rendered points, lines, and triangles.
You calculate curves using short enough lines so the video card doesn't have to.
Because graphics cards operate on 3-dimensional points, lines, and triangles. A circle requires curves or splines. It cannot be perfectly represented by a "normal" 3D primitive, only approximated as an N-gon (so it will look like a circle at a certain distance). If you want a circle, write the routine yourself (it isn't hard to do). Either draw it as an N-gon, or make a square (2 triangles) and cut a circle out of it using the fragment shader (you can get a perfect circle this way).
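For the N-gon route, generating the vertices is only a few lines. This sketch (hypothetical function name) lays them out in the order a GL_TRIANGLE_FAN would consume, center first and the first ring vertex repeated to close the loop:

```python
# Sketch: approximate a 2D circle as an N-gon laid out for a
# triangle fan. More segments = rounder at close range.

import math

def circle_fan(cx, cy, radius, n_segments):
    verts = [(cx, cy)]  # fan center
    for i in range(n_segments + 1):  # +1 repeats the first ring vertex
        t = 2.0 * math.pi * i / n_segments
        verts.append((cx + radius * math.cos(t),
                      cy + radius * math.sin(t)))
    return verts
```

Choosing `n_segments` from the circle's on-screen radius (more segments when it covers more pixels) keeps the approximation invisible without wasting vertices.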
You could always use gluSphere (if a three-dimensional shape is what you're looking for).
If you want to draw a two-dimensional circle you're stuck with custom methods. I'd go with a triangle fan.
The primitives are called primitives for a reason :)