I am new to OpenGL and I am reading the Red Book. Now, as an exercise I want to manually draw a sphere. For that I am dividing the sphere into slices and stacks, and thus I get multiple rectangles, but near the poles of the sphere I get triangles (I hope it is clear what I am doing). Now I know that if you draw a polygon with GL_POLYGON and it happens to intersect itself, the behavior is undefined. My question is this: given three points v1, v2, v3 which are not on one line, is it undefined behavior to do this:
glBegin(GL_POLYGON);
glVertex3fv(v1);
glVertex3fv(v1);   // v1 deliberately submitted twice
glVertex3fv(v2);
glVertex3fv(v3);
glEnd();
This may be combining two unrelated questions into one, but I am wondering also this: if I choose to divide the rectangles in my sphere routine into triangles, does it matter how I do it, that is, by which diagonal I divide the rect into two triangles? I am guessing that for drawing a single-colored sphere it won't matter, but I don't know about textures, shaders, lighting etc.
When I was doing OpenGL stuff, I quickly stuck to using just triangles. They are special in that a triangle is not ambiguous in any way: its three vertices always lie in a single plane.
As for your example, I would imagine it will work, though probably with artefacts.
How you split a rectangle shouldn't matter, as long as you pay attention to which way your triangles are wound, i.e. the order in which you define the points, since this is what dictates their front and back faces.
But definitely stick to triangles. Imagine these four points of a square:
(0,0,0) (1,0,0) (1,0,1) (0,0,1)
Fairly easy to see it is a flat square, but what if I change them to
(0,1,0) (1,0,0) (1,1,1) (0,0,1)
What do you have now? It could be drawn like a valley or like a hill. If I define this shape with triangles, you know exactly what I am describing:
(0,1,0) (1,0,0) (1,1,1)
(1,1,1) (0,0,1) (0,1,0)
A hill-like shape.
OK, I side-tracked a little here. My point is: I don't know what your code would do in practice, but I don't think you should use it anyway. And how you split up rectangles shouldn't matter, as long as your triangles are wound the right way around.
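To make that concrete, here is a small sketch of how a stacks-and-slices sphere could be emitted directly as consistently wound triangles (this is just an illustration of mine, not your code; the spherePoint helper and the stack/slice layout are assumptions). The pole rows collapse to a single triangle each, so no vertex is ever repeated inside one primitive:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical helper: point on the unit sphere for a given stack/slice index.
// Stack 0 is the north pole, stack 'stacks' is the south pole.
static Vec3 spherePoint(int stack, int slice, int stacks, int slices) {
    const float pi = 3.14159265358979f;
    float phi   = pi * stack / float(stacks);          // latitude angle
    float theta = 2.0f * pi * slice / float(slices);   // longitude angle
    return { std::sin(phi) * std::cos(theta),
             std::cos(phi),
             std::sin(phi) * std::sin(theta) };
}

// Emit the sphere as separate triangles (three Vec3s each),
// wound counter-clockwise when viewed from outside the sphere.
std::vector<Vec3> buildSphere(int stacks, int slices) {
    std::vector<Vec3> tris;
    for (int i = 0; i < stacks; ++i) {
        for (int j = 0; j < slices; ++j) {
            Vec3 a = spherePoint(i,     j,     stacks, slices);
            Vec3 b = spherePoint(i + 1, j,     stacks, slices);
            Vec3 c = spherePoint(i + 1, j + 1, stacks, slices);
            Vec3 d = spherePoint(i,     j + 1, stacks, slices);
            if (i != stacks - 1) {            // skip at the south pole, where b == c
                tris.push_back(a); tris.push_back(c); tris.push_back(b);
            }
            if (i != 0) {                     // skip at the north pole, where a == d
                tris.push_back(a); tris.push_back(d); tris.push_back(c);
            }
        }
    }
    return tris;
}

Each group of three Vec3s can then be submitted with glVertex3f between glBegin(GL_TRIANGLES) and glEnd().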
This is no problem whatsoever. OpenGL always has to be able to deal with the possibility of rasterising geometry where multiple vertices fall in the same location, since even different input points may end up as the same output point depending on your modelview and projection matrices (or your geometry and/or vertex shader if you're on the programmable pipeline). It is designed to deal with a wide variety of edge cases in a mathematically well-defined way.
OpenGL's primary test for whether to paint a pixel with geometry is whether its centre falls within the mathematical bounds of the primitive being drawn*. So OpenGL can render polygons that paint non-continuous sets of pixels (which generally happens when they become almost vanishingly thin) or that paint no pixels whatsoever (which tends to be when they end up really small, but they may technically be of arbitrarily large size as long as they squeeze between pixel centres).
The exact tests used at the hardware level may vary from vendor to vendor and are guaranteed to be correct only for geometry that is convex on screen — which is why most people say stick to triangles, since they're unavoidably convex.
(*) with a separate screen-oriented test applied to pixels exactly on a boundary, to ensure they're attributed to exactly one polygon where polygons meet along a common edge
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent, due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, effectively blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid and move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even a 2D texture "extruded" through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
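As a rough sketch of that last idea (often called triplanar mapping; the weighting below is just one common choice, and the sample* lookups in the comment are hypothetical):

#include <cmath>

struct Vec3 { float x, y, z; };

// Blend weights for three axis-aligned texture projections (triplanar mapping).
// The more squarely the surface faces an axis, the more that axis's projection contributes.
Vec3 triplanarWeights(Vec3 n) {
    Vec3 w = { std::fabs(n.x), std::fabs(n.y), std::fabs(n.z) };
    float sum = w.x + w.y + w.z;              // normalise so the weights add up to 1
    return { w.x / sum, w.y / sum, w.z / sum };
}

// colour = w.x * sampleYZ(pos.y, pos.z)      // texture projected along X
//        + w.y * sampleXZ(pos.x, pos.z)      // texture projected along Y
//        + w.z * sampleXY(pos.x, pos.y);     // texture projected along Z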
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
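One possible arrangement of that two-pass trick, sketched with placeholder draw calls (drawScene and drawSceneWithInteriorShader are not real functions, just stand-ins for your own rendering):

glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

// Pass 1: normal scene, front faces only.
glCullFace(GL_BACK);
drawScene();                      // placeholder: your usual scene rendering

// Pass 2: back faces only, depth test still on. Wherever a back face survives
// the depth test there is no front face in front of it, so the near plane is
// inside that object; shade those fragments with the 3D "interior" function.
glCullFace(GL_FRONT);
drawSceneWithInteriorShader();    // placeholder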
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes more light the further light has to travel through it. Think of many alpha-transparent surfaces: take the limit and you have an exponential. The light remaining is close to 1/exp(dist), or exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
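A possible state setup for that, assuming a floating-point colour attachment and a shader that writes eye-space distance as its output (drawWaterVolume is a placeholder for your own draw call):

glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

// Pass 1: add up distances of the back faces (far side of the water volume).
glBlendEquation(GL_FUNC_ADD);
glCullFace(GL_FRONT);                        // draw back faces only
drawWaterVolume();                           // placeholder draw call

// Pass 2: subtract distances of the front faces (near side), leaving the
// distance travelled through the volume in the render target.
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);   // result = destination - source
glCullFace(GL_BACK);                         // draw front faces only
drawWaterVolume();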
Volume Rendering
Combining the two ideas, the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards in even steps. Over each step while you march, you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
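Here is a CPU-side sketch of that brute-force march (density and colourAt are stand-ins for whatever 3D texture or procedural volume you use; in practice this would live in a fragment shader):

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder volume lookups; replace with a 3D texture fetch or a procedural function.
static float density(Vec3 /*p*/)  { return 0.5f; }
static Vec3  colourAt(Vec3 /*p*/) { return { 0.2f, 0.4f, 0.8f }; }

// March a ray from 'origin' along the unit direction 'dir', accumulating colour
// while applying Beer's-law absorption at every step.
Vec3 marchRay(Vec3 origin, Vec3 dir, float maxDist, float step) {
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    float transmittance = 1.0f;                         // fraction of light still remaining
    for (float t = 0.0f; t < maxDist; t += step) {
        Vec3 p = { origin.x + dir.x * t,
                   origin.y + dir.y * t,
                   origin.z + dir.z * t };
        float stepTrans = std::exp(-density(p) * step); // absorption over one step
        Vec3 c = colourAt(p);
        // Light contributed by this step, attenuated by everything in front of it.
        result.x += transmittance * (1.0f - stepTrans) * c.x;
        result.y += transmittance * (1.0f - stepTrans) * c.y;
        result.z += transmittance * (1.0f - stepTrans) * c.z;
        transmittance *= stepTrans;
        if (transmittance < 0.01f) break;               // early out once nearly opaque
    }
    return result;
}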
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced the thicker the medium is.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
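For example, a minimal fixed-function fog setup along those lines (the colour and density values are arbitrary):

// Exponential fog: fragments fade towards the fog colour with distance, which
// approximates the depth-dependent darkening described above.
GLfloat fogColor[4] = { 0.1f, 0.3f, 0.4f, 1.0f };   // murky blue-green (arbitrary)
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.05f);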
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.
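For reference, the equation itself in its usual surface form (x is a surface point, \vec{n} its normal, and the integral runs over the hemisphere \Omega of incoming directions \omega_i):

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot \vec{n}) \, d\omega_i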
I have two quads for which I need to find the normals. The coordinates are as follows:
for quad 1:
(-2,1.25,-1)
(-2,2.2,0)
(1,2.2,0)
(2,1.25,-1)
I have got the normal as (0,.73,-.69)
for quad 2:
(-2,2.2,0)
(2,2.2,0)
(2,1.25,1)
(-2,1.25,1)
normal:(0,.73,.69)
I have already calculated the normals. Can someone please confirm whether these normals are correct?
Also, I read about normals pointing inwards and outwards. Would someone explain that concept to me?
Your normals basically look correct. For the first quad, I get:
(0.0, 0.725, -0.689)
For the second one:
(0.0, -0.725, -0.689)
As you can see, I got the opposite sign for the second normal. Which leads directly to the second part of your question.
The term "outwards" does not really make sense for a isolated quad. It is mostly applied to closed shapes, where it should make intuitive sense. Picture a sphere, with a normal vector drawn starting at a point on the sphere. The normal pointing "outwards" means that it points away from the center of the cube, which means that it points to the outside. "inwards" is then of course the opposite, where the normal points towards the center of the sphere, or to the inside of the shape.
There's another way of looking at it, since normals are mostly used for lighting calculations. The normals need to point to the side of the surface that you want to see lighted. Most often, you look at shapes from the outside, so you want the outside lighted. Which means that you mostly want the normals pointing outwards. If you have open surfaces that need to be lighted when viewed from either side, there are slightly more complex lighting calculations that can handle that, which are typically found under "two-sided lighting".
There's a related concept that is also important here, which is the "winding order". It defines whether the vertices are arranged clockwise or counter-clockwise when viewing them from a certain direction. OpenGL uses the winding order to decide whether a triangle faces the viewer. Again, you care about having the desired winding order when looking at the surface from the outside, or more generally from the side you want to see when you display the surface. OpenGL uses counter-clockwise winding by default, so you want counter-clockwise winding when looking at a surface from the side you want to be visible, which for closed shapes is mostly from the outside. You can often get away with the winding order being "wrong" if you don't eliminate backward-facing triangles, which is done with glEnable(GL_CULL_FACE). But in any case, you can save yourself from running into problems later if you always use a consistent winding order for your primitives.
This leads us back to the normal calculation. Since only the sign ended up different, none of our calculations are technically wrong. I assumed that the quads used counter-clockwise winding, which means that I see the "outside" of the quad from the direction where the vertices appear in counter-clockwise order. Since I also want the normals pointing towards the outside, I calculated the normals that way. In other words, with the normal I calculated, if you move away from the quad in direction of the normal, and then look back at the quad, the vertices would be in counter-clockwise order.
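If it helps, here is a small illustration of that calculation: the normal is the normalised cross product of two edge vectors, and swapping the edges (or reversing the vertex order) flips its sign. The Vec3 type and helpers are just scaffolding for the example:

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 subtract(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Normal of a planar quad from its first three vertices, counter-clockwise order assumed.
Vec3 quadNormal(Vec3 v0, Vec3 v1, Vec3 v2) {
    return normalize(cross(subtract(v1, v0), subtract(v2, v0)));
}

// Example, first quad from the question:
//   quadNormal({-2, 1.25f, -1}, {-2, 2.2f, 0}, {1, 2.2f, 0})  ->  roughly (0, 0.725, -0.689)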
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to infinitely far away: See this. It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So since this revolves around the depth buffer, you won't have a problem dealing with further-away stuff, as long as you don't rely on the depth buffer for it. For example, a common technique is to render the scene in "slabs" that each only use the depth buffer internally (for all the stuff in one slab) but use some form of painter's algorithm externally (for the slabs themselves, so you draw the furthest one first).
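A sketch of that slab loop, with the slab bookkeeping left as placeholders (setProjectionForSlab and drawObjectsInSlab are not real functions):

const int numSlabs = 4;                     // however many depth ranges the scene is split into
for (int s = numSlabs - 1; s >= 0; --s) {   // furthest slab first (painter's algorithm)
    glClear(GL_DEPTH_BUFFER_BIT);           // depth only has to be consistent within one slab
    setProjectionForSlab(s);                // placeholder: near/far planes spanning just this slab
    drawObjectsInSlab(s);                   // placeholder: geometry assigned to this slab
}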
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
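Concretely, for a symmetric glFrustum-style projection only the third row depends on the far distance f (with n the near distance, and r, t the right and top extents of the near plane), and taking the limit keeps every entry finite:

P = \begin{pmatrix} n/r & 0 & 0 & 0 \\ 0 & n/t & 0 & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}
\;\xrightarrow{\;f \to \infty\;}\;
\begin{pmatrix} n/r & 0 & 0 & 0 \\ 0 & n/t & 0 & 0 \\ 0 & 0 & -1 & -2n \\ 0 & 0 & -1 & 0 \end{pmatrix}

Distant geometry then ends up with depth values packed very close to the far end of the range, which is exactly the precision problem mentioned above.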
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
It's just "the fact" that OpenGL depth test was performed in Window Space Coordinates (Normalized device coordinates in [-1,1]^3. With extra scaling glViewport and glDepthRange).
So from my point of view it is simply a design decision of the OpenGL library.
One approach to eliminating this is the depth-clamp OpenGL extension (also core functionality in later versions), https://www.opengl.org/registry/specs/ARB/depth_clamp.txt, if it is available in your OpenGL version.
I also want to point out that in the perspective projection itself there is nothing about a "far clipping plane".
3.1. For a perspective projection you need to set up a point \vec{c} as the center of projection and a plane onto which the projection is performed. Let's call it the
image plane T: (\vec{r}-\vec{r_0}, \vec{n}) = 0, where \vec{r_0} is a point on the plane, \vec{n} is its normal, and (\cdot,\cdot) denotes the dot product.
3.2. Let's assume that the image plane T separates an arbitrary point \vec{r} from the center of projection \vec{c}. Otherwise \vec{r} and \vec{c} are in one half-space and the point \vec{r} should be discarded.
3.4. The idea of the projection is to find the intersection \vec{i} of the line through \vec{c} and \vec{r} with the plane T:
\vec{i} = (1-t)\,\vec{c} + t\,\vec{r}
3.5. Since \vec{i} lies in the plane,
(\vec{i}-\vec{r_0}, \vec{n}) = 0
\Rightarrow ((1-t)\,\vec{c} + t\,\vec{r} - \vec{r_0}, \vec{n}) = 0
\Rightarrow (\vec{c} + t\,(\vec{r}-\vec{c}) - \vec{r_0}, \vec{n}) = 0
3.6. Solving 3.5 for t gives
t = \frac{(\vec{r_0}-\vec{c}, \vec{n})}{(\vec{r}-\vec{c}, \vec{n})}
which can be substituted into 3.4 to obtain the projection onto the plane T.
3.7. After the projection the point lies in the plane. But if we assume that the image plane is parallel to the OXY plane, then I suggest keeping the point's original "depth" after the projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 model explicitly at all.
Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. However, I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which means the triangles are small at the poles compared to their size at the equator. Now, if such a formula generated the triangles such that their difference in size is minimized, that would be great.
Regarding how to calculate the normals and the UV map:
Fortunately there is an amazing trick for calculating the normals on a sphere. If you think about it, the normal at a point on a sphere is nothing more than the direction from the centre of the sphere to that point! Furthermore, if you think it through, that means the normal literally equals the point, i.e. it's the same vector! Just don't forget to normalise the length for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think, that's impossible, no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, a "rectangle-style" map projection. In that case u and v are basically just the longitude and latitude of the point, normalised to [0,1].
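A small sketch of that mapping, assuming a unit sphere centred at the origin with y up (atan2/asin conventions vary, so treat this as one possible choice):

#include <cmath>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// Equirectangular ("rectangle-style") UVs for a point p on the unit sphere.
// Because p is on the unit sphere, p is also its own normal.
Vec2 sphereUV(Vec3 p) {
    const float pi = 3.14159265358979f;
    float u = 0.5f + std::atan2(p.z, p.x) / (2.0f * pi);  // longitude mapped to 0..1
    float v = 0.5f - std::asin(p.y) / pi;                 // latitude mapped to 0..1
    return { u, v };
}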
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron. Then apply multiple uniform subdivisions of the triangles, normalizing the resulting vertices' distance to the origin.
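A sketch of one such subdivision pass (the Triangle type and flat triangle list are assumptions for brevity; the linked article does it with indexed vertices and a midpoint cache):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Midpoint of two points, pushed back out onto the unit sphere.
static Vec3 midpointOnSphere(Vec3 p, Vec3 q) {
    Vec3 m = { (p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f };
    float len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
    return { m.x / len, m.y / len, m.z / len };
}

// One subdivision pass: each triangle becomes four, all vertices on the unit sphere,
// and the winding of the parent triangle is preserved.
std::vector<Triangle> subdivide(const std::vector<Triangle>& in) {
    std::vector<Triangle> out;
    out.reserve(in.size() * 4);
    for (const Triangle& t : in) {
        Vec3 ab = midpointOnSphere(t.a, t.b);
        Vec3 bc = midpointOnSphere(t.b, t.c);
        Vec3 ca = midpointOnSphere(t.c, t.a);
        out.push_back({ t.a, ab, ca });
        out.push_back({ t.b, bc, ab });
        out.push_back({ t.c, ca, bc });
        out.push_back({ ab, bc, ca });
    }
    return out;
}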
From what I gathered, he used sparse voxel octrees and ray-casting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. This caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like to know the algorithm for doing so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE (run-length encoded) list of surface voxels for each x,y stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines which are tilted in screen space.
I didn't look at the algorithm itself, but I can tell the following based on the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2d square
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells, checking which are occupied and which are not, and then performing the lighting calculation. Alternatively, each voxel can store a normal along with its color or other material properties.
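A sketch of that neighbourhood-based estimate, essentially a gradient of the occupancy function (occupied is a placeholder for your own voxel grid or octree lookup):

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder: solid half-space below z = 0; replace with your voxel grid lookup.
static bool occupied(int /*x*/, int /*y*/, int z) { return z < 0; }

// Approximate the surface normal at a voxel from the occupancy of its neighbourhood:
// every filled neighbour pushes the normal away from itself, so the result points
// out of the solid, towards the empty side.
Vec3 approximateNormal(int x, int y, int z, int radius) {
    Vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int dz = -radius; dz <= radius; ++dz)
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx)
                if (occupied(x + dx, y + dy, z + dz)) {
                    n.x -= float(dx);
                    n.y -= float(dy);
                    n.z -= float(dz);
                }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}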