occlusion culling test 3d - hidden

Suppose we have a body whose shell is represented by flat triangles (a polygon mesh). The mesh consists of millions of polygons to reach the needed accuracy. Furthermore, the body is not convex (it is not a sphere; see the Wikipedia page "Polygon mesh" for a good example of such a body, the dolphin).
For a certain application I would like to know which of the polygons are seen from a certain point. I would like to emphasise that the visibility test for a triangle can be made against its center point, so a triangle is either seen (its center point is visible) or hidden; I am not interested in partially visible polygons.
Can you recommend a good way to do this? I would like the polygon visibility data written to memory (a program variable, so I can make further use of it) rather than to the screen.
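For reference, here is a minimal brute-force sketch of the kind of output I am after (the helper names are made up; a real implementation over millions of triangles would need a BVH/octree around the inner loop, or could instead rasterise triangle IDs to an off-screen buffer and read them back):
// Hypothetical sketch: mark each triangle whose center point is visible from 'eye'.
// Brute-force O(n^2) occlusion test; use a BVH/octree for real mesh sizes.
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore: does the segment origin + t*dir (0 < t < 1) cross the triangle?
static bool segmentHits(Vec3 origin, Vec3 dir, const Triangle& tri, double& t) {
    const double eps = 1e-9;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;              // segment parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = sub(origin, tri.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > eps && t < 1.0 - 1e-6;                    // strictly between eye and center
}

// result[i] == true means the center of triangle i is seen from 'eye'.
std::vector<bool> centerVisibility(const std::vector<Triangle>& mesh, Vec3 eye) {
    std::vector<bool> result(mesh.size(), true);
    for (std::size_t i = 0; i < mesh.size(); ++i) {
        const Triangle& tri = mesh[i];
        Vec3 center = { (tri.a.x + tri.b.x + tri.c.x) / 3.0,
                        (tri.a.y + tri.b.y + tri.c.y) / 3.0,
                        (tri.a.z + tri.b.z + tri.c.z) / 3.0 };
        Vec3 dir = sub(center, eye);                     // segment from eye to center
        for (std::size_t j = 0; j < mesh.size() && result[i]; ++j) {
            double t;
            if (j != i && segmentHits(eye, dir, mesh[j], t))
                result[i] = false;                       // another triangle blocks the center
        }
    }
    return result;
}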

Related

Edge detection with Single-Pass Wireframe Rendering

I would like to implement a typical CAD-style application and therefore need an edge detection algorithm to draw the silhouette of various meshes. The silhouette includes the outline, ridges and creases of various objects. Here is an example of a cube created in Blender where the silhouette is made of thick orange lines:
I want to use a geometrical approach where wireframes are drawn on top of the objects and interior lines like diagonals are omitted. The wireframe rendering is described here. In this article, the geometry shader is used to draw the wireframe.
It is also explained that one has to set a per-vertex attribute to decide if a line should be omitted or not.
My question is: How could I decide which lines I have to omit? I use OpenGL as rendering API by the way.
EDIT: To clarify, I really want to draw just the edges that constitute the silhouette but not any diagonals. Here is an example of what I want to achieve:
From your sample pictures I infer that you want to enhance:
- the silhouette edges, i.e. those that belong to the outline of the projections,
- the salient edges, i.e. those that join two angled faces.
The former are determined by looking at the orientation of the faces: a face is "facing" when the observer lies outside the half-space it delimits, and "non-facing" otherwise. A silhouette edge is one shared by a facing face and a non-facing face. Note that this is a viewer-dependent property.
A salient edge is one that joins two faces forming an angle large enough that the connection is considered non-smooth. (The angle threshold is up to you.) This is a viewer-independent property.
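Both tests are cheap to express in code. Here is a minimal sketch, assuming you already have, for each edge, its two adjacent faces with outward unit normals and a point on each face (the names are illustrative, not from any particular library):
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Viewer-dependent: the face is "facing" when the eye lies in the outer half-space.
bool isFacing(Vec3 faceNormal, Vec3 pointOnFace, Vec3 eye) {
    return dot(faceNormal, sub(eye, pointOnFace)) > 0.0f;
}

// Silhouette edge: one adjacent face facing the viewer, the other not.
bool isSilhouetteEdge(Vec3 n0, Vec3 p0, Vec3 n1, Vec3 p1, Vec3 eye) {
    return isFacing(n0, p0, eye) != isFacing(n1, p1, eye);
}

// Viewer-independent salient (crease) edge: dihedral angle above your threshold.
bool isSalientEdge(Vec3 n0, Vec3 n1, float thresholdRadians) {
    float c = dot(n0, n1);                 // both normals assumed unit length
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;
    return std::acos(c) > thresholdRadians;
}
You would run these per edge (on the CPU, or in a geometry shader) and use the result to set the per-vertex attribute the article mentions, so that only silhouette and salient edges are drawn.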
Consider freestyle
https://www.blender.org/manual/en/render/freestyle/index.html
Freestyle is an edge- and line-based non-photorealistic (NPR) rendering engine. It relies on mesh data and z-depth information to draw lines on selected edge types. Various line styles can be added to produce artistic ("hand drawn", "painted", etc.) or technical (hard line) looks.
I haven't used it yet, but am planning to give it a try for creating line drawings from 3d models.

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world appears hollow and transparent, due to culling and to how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even "extruding" a 2D texture through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
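A rough sketch of that three-texture weighting (commonly called triplanar mapping); this only computes the blend weights from a surface normal, and the actual texture sampling is only described in a comment:
#include <cmath>

// Blend weights for the X-, Y- and Z-projected textures at a surface point,
// derived purely from the surface normal.
void triplanarWeights(float nx, float ny, float nz, float w[3]) {
    w[0] = std::fabs(nx);                  // how much the surface faces the X axis
    w[1] = std::fabs(ny);
    w[2] = std::fabs(nz);
    float sum = w[0] + w[1] + w[2];
    w[0] /= sum; w[1] /= sum; w[2] /= sum;
    // Sample each texture with the other two world-space coordinates
    // (e.g. (y, z) for the X projection) and blend the three results with w.
}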
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
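One way to act on the "back faces still visible" observation, sketched with classic OpenGL state. The stencil-buffer marking is my own assumption rather than anything established, and drawScene() / drawVolumeColourQuad() are placeholders for your own code.
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

// Pass 1: the scene as usual, front faces only.
glCullFace(GL_BACK);
drawScene();

// Pass 2: back faces only. Wherever a back face wins the depth test against the
// front faces already drawn, the near plane sits inside closed geometry.
glCullFace(GL_FRONT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);   // write 1 where a back face is nearest
drawScene();

// Pass 3: the full-screen "3D coloured" polygon, restricted to the marked pixels.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawVolumeColourQuad();
glDisable(GL_STENCIL_TEST);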
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all the one colour ("homogeneous") then it removes light the further light has to travel through it. Think of many alpha-transparent surfaces, take the limit and you have an exponential. The light remaining is close to 1/exp(dist) or exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
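A sketch of that blending setup, assuming a floating point colour target (e.g. GL_R32F) already bound and cleared to zero, and a shader that outputs the fragment's eye-space distance; drawWaterBackFaces() and drawWaterFrontFaces() are placeholders.
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);                       // we only accumulate distances here

// Back faces / sea bed: add their distance from the camera.
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
drawWaterBackFaces();

// Front faces / water surface: subtract their distance (dest - source).
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
drawWaterFrontFaces();

// The target now holds the water thickness along each view ray, i.e. the
// WaterDepth used in the Transmittance formula above.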
Volume Rendering
Combining the two ideas, the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step while you march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
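Here is such a brute-force Euler march with absorption, written CPU-side for clarity (in practice this would live in a fragment or compute shader). sampleDensity() and sampleColour() are stand-ins for whatever 3D texture or procedural function you use; here they are a made-up ball of grey fog.
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder volume: uniform fog inside a unit sphere at the origin.
float sampleDensity(Vec3 p) {
    return (p.x * p.x + p.y * p.y + p.z * p.z < 1.0f) ? 2.0f : 0.0f;
}
Vec3 sampleColour(Vec3) { return { 0.5f, 0.6f, 0.7f }; }

Vec3 marchRay(Vec3 origin, Vec3 dir /* unit length */, float maxDist, float step) {
    Vec3 result = { 0.0f, 0.0f, 0.0f };   // colour accumulated along the ray
    float transmittance = 1.0f;           // fraction of light still reaching the eye
    for (float t = 0.0f; t < maxDist && transmittance > 0.001f; t += step) {
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float absorb = std::exp(-sampleDensity(p) * step);   // Beer's law over one step
        Vec3 c = sampleColour(p);
        // This slab's colour, attenuated by everything in front of it.
        result.x += c.x * (1.0f - absorb) * transmittance;
        result.y += c.y * (1.0f - absorb) * transmittance;
        result.z += c.z * (1.0f - absorb) * transmittance;
        transmittance *= absorb;
    }
    return result;
}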
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and rendering the interior geometry (under the water or inside the wall) darker, with reduced lighting or darker colors. If you care about the depth effect, look at OpenGL fog.
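If you are on the old fixed-function pipeline, that depth cue really is just fog state; a minimal example (the colour and density values are arbitrary):
#include <GL/gl.h>

void enableUnderwaterFog() {
    GLfloat fogColour[4] = { 0.1f, 0.3f, 0.4f, 1.0f };  // murky blue-green
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);                       // exp(-(density*z)^2) falloff
    glFogf(GL_FOG_DENSITY, 0.15f);
    glFogfv(GL_FOG_COLOR, fogColour);
    glHint(GL_FOG_HINT, GL_NICEST);
}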
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.

OpenGL: Why triangles are chosen as basic building blocks?

I am starting OpenGL and am not able to understand why everything in graphics starts from triangles. Every article that I read says that all of graphics rests on triangles.
What is the reason for choosing such a shape as the basic building block? I thought a square or a circle would be much better, and more logical because of their symmetry properties.
Great question. It's because triangles are the only polygons that can approximate other shapes while also being guaranteed to lie in a plane (any three points are coplanar, which is not guaranteed for four or more), which means they have well-defined and easy-to-compute surfaces.

can vertices coincide in convex polygons?

I am new to OpenGL and I am reading the Red Book. Now, as an exercise I want to manually draw a sphere. For that I am dividing the sphere into slices and stacks, which gives me mostly rectangles, but triangles near the poles of the sphere (I hope it is clear what I am doing). Now I know that if you draw a polygon with GL_POLYGON and it happens to intersect itself, the behavior is undefined. My question is this: given three points v1, v2, v3 which are not on one line, is it undefined behavior to do this:
glBegin(GL_POLYGON);
    glVertex3fv(v1);
    glVertex3fv(v1);   // the same vertex submitted twice on purpose
    glVertex3fv(v2);
    glVertex3fv(v3);
glEnd();
This may be combining two unrelated questions into one, but I am also wondering this: if I choose to divide the rectangles in my sphere routine into triangles, does it matter how I do it, that is, by which diagonal I divide the rectangle into two triangles? I am guessing that for drawing a single-colored sphere it won't matter, but I don't know about textures, shaders, lighting, etc.
When I was doing openGL stuff, I quickly stuck to using just triangles. They are special in that a triangle is not ambiguous in any way.
Your example, though, I would imagine will work, though probably with artefacts.
How you split a rectangle shouldn't matter, as long as you pay attention to which way your triangles are wound, i.e. the order in which you define the points, as this is what dictates their front and back.
But definitely stick to triangles. Imagine these four points of a square:
(0,0,0) (1,0,0) (1,0,1) (0,0,1)
Fairly easy to see it is a flat square, but what if I change them to
(0,1,0) (1,0,0) (1,1,1) (0,0,1)
What do you have now? It could be drawn like a valley or like a hill. If I define this shape with triangles, you know exactly what I am describing:
(0,1,0) (1,0,0) (1,1,1)
(1,1,1) (0,0,1) (0,1,0)
A hill-like shape.
OK, so I sidetracked a little here. My point is, I don't know what your code would do in practice, but I don't think you should use it anyway. And how you split up rectangles shouldn't matter, as long as your triangles are wound the right way around.
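To make that concrete, here is a minimal sketch that splits a quad, given as four vertices in winding order, into two triangles that preserve that winding whichever diagonal you pick:
struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

// Quad vertices v[0]..v[3] are assumed to be given in consistent winding order.
void splitQuad(const Vec3 v[4], bool useDiagonal02, Tri out[2]) {
    if (useDiagonal02) {
        out[0] = { v[0], v[1], v[2] };     // diagonal v0-v2
        out[1] = { v[0], v[2], v[3] };
    } else {
        out[0] = { v[0], v[1], v[3] };     // diagonal v1-v3
        out[1] = { v[1], v[2], v[3] };
    }
}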
This is no problem whatsoever. OpenGL always has to be able to deal with the possibility of rasterising geometry where multiple vertices fall in the same location, since even different input points may end up at the same output point depending on your modelview and projection matrices (or your geometry and/or vertex shader if you're on the programmable pipeline). It is designed to handle a wide variety of edge cases in a mathematically correct way.
OpenGL's primary test for whether to paint a pixel with geometry is whether its centre falls within the mathematical bounds of the primitive being drawn*. So OpenGL can render polygons that paint non-continuous sets of pixels (which generally happens when they become almost vanishingly thin) or that paint no pixels whatsoever (which tends to be when they end up really small, but they may technically be of arbitrarily large size as long as they squeeze between pixel centres).
The exact tests used at the hardware level may vary from vendor to vendor and are guaranteed to be correct only for geometry that is convex on screen — which is why most people say stick to triangles, since they're unavoidably convex.
(*) a separate screen-oriented test being applied to pixels exactly on a boundary to ensure they're attributed to only exactly one polygon where polygons meet along a common edge

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. That caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like to know the algorithm for doing so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each (x, y) stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
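I have not checked this against Voxlap's actual source, but as an illustration, an RLE column of the kind described above might look something like this (the field names are mine):
#include <cstdint>
#include <vector>

// One run of consecutive solid voxels in a vertical (x, y) column. Colours are
// stored only for the exposed "surface" voxels, which keeps memory small.
struct VoxelRun {
    uint16_t zTop;                     // first solid z of the run
    uint16_t zBottom;                  // last solid z of the run
    std::vector<uint32_t> colours;     // packed RGB for the run's surface voxels
};

struct Column {
    std::vector<VoxelRun> runs;        // sorted by zTop; empty space is implicit
};

// The whole map is then width * height columns, e.g.
// std::vector<Column> grid(width * height);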
I didn't look at the algorithm itself, but I can tell the following based on the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares; it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells and seeing which are occupied and which are not, then performing the lighting calculation. Alternatively, each voxel can store the normal along with its color or other material properties.
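A rough sketch of that neighborhood-based normal, using a central difference of cell occupancy; the occupancy function is passed in as a callable returning 1 for solid cells and 0 for empty ones (averaging over a larger neighborhood gives smoother results).
#include <cmath>

// occupancy(x, y, z) -> 1 for a solid voxel, 0 for empty.
template <typename OccupancyFn>
void approximateNormal(OccupancyFn occupancy, int x, int y, int z, float n[3]) {
    n[0] = float(occupancy(x - 1, y, z) - occupancy(x + 1, y, z));
    n[1] = float(occupancy(x, y - 1, z) - occupancy(x, y + 1, z));
    n[2] = float(occupancy(x, y, z - 1) - occupancy(x, y, z + 1));
    float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }   // outward unit normal
}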